Journal articles on the topic "Non-Centralized algorithms"

Follow this link to see other types of publications on the topic: Non-Centralized algorithms.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Non-Centralized algorithms".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract online, when it is available in the metadata.

Browse journal articles from a wide variety of scientific disciplines and compile a correct bibliography.

1

Li, Xiang, and Yuxuan Ma. "Analysis of Multi-Robot Patrolling Algorithms". Journal of Physics: Conference Series 2419, no. 1 (January 1, 2023): 012100. http://dx.doi.org/10.1088/1742-6596/2419/1/012100.

Full text of the source
Abstract:
This article is dedicated to analyzing the common problems of robots when patrolling indoors and the corresponding solution algorithms. Four problems of patrol robots (whether centralized or non-centralized) during operation are listed: the overload of data to be processed when the robotic sensors transmit data, the repetitiveness and simultaneity of the robots' work, the incorrect execution of the algorithm, and how to deal with inevitable unknown external factors. The simultaneity of the robots is to be eliminated because robots need to explore independently in the patrol task. Subsequently, four algorithms and their functionalities are introduced (the Monte Carlo Tree Planning Algorithm, Rapid Exploring Random Tree Planning, the Mutual Exclusion Algorithm, and the Kalman Filter Algorithm). How the four algorithms address the corresponding factors is illustrated by elaborating the process by which each algorithm solves its problem.
ABNT, Harvard, Vancouver, APA, etc. styles
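Of the four algorithms the abstract lists, the Kalman filter is the easiest to sketch in isolation. Below is a minimal one-dimensional version with invented noise parameters and measurements; it is a generic illustration of the filter, not the paper's patrol-robot implementation.

```python
# Minimal 1-D Kalman filter: fuse noisy scalar readings into a smoothed
# estimate. q and r are assumed noise variances, not values from the paper.

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Filter a scalar random-walk state; q: process noise, r: measurement noise."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: state modeled as a random walk
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1.0 - k) * p          # posterior variance shrinks after the update
        estimates.append(x)
    return estimates

# Readings hover around 1.0; the estimate converges toward that level.
est = kalman_1d([1.2, 0.9, 1.1, 1.0, 1.05])
```

With a small process-noise variance the gain decays over time, so later measurements perturb the estimate less than early ones.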
2

GE, Quan-Bo, Wen-Bin LI, Ruo-Yu SUN, and Zi XU. "Centralized Fusion Algorithms Based on EKF for Multisensor Non-linear Systems". Acta Automatica Sinica 39, no. 6 (March 25, 2014): 816–25. http://dx.doi.org/10.3724/sp.j.1004.2013.00816.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
3

Dong, Da Wei, Xiao Guo Liu, and Tian Jing. "Channel Assignment Algorithm in Centralized WLAN". Applied Mechanics and Materials 721 (December 2014): 728–31. http://dx.doi.org/10.4028/www.scientific.net/amm.721.728.

Full text of the source
Abstract:
To reduce the number of mutually interfering access points and the interference among access points on the same channel, building on research into interference issues and channel assignment algorithms for wireless local area networks, a scheme suitable for centralized wireless local area networks is proposed that aims to minimize the total interference among access points while comprehensively considering the number of neighbors and the received power. The algorithm was then simulated and analyzed on test cases; the NS2 simulation results indicate that the algorithm is simple, effective, and feasible, can dynamically adjust the wireless LAN RF channels, and achieves better load balancing among non-overlapping channels.
ABNT, Harvard, Vancouver, APA, etc. styles
4

Xian, Wenhan, Feihu Huang, and Heng Huang. "Communication-Efficient Frank-Wolfe Algorithm for Nonconvex Decentralized Distributed Learning". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10405–13. http://dx.doi.org/10.1609/aaai.v35i12.17246.

Full text of the source
Abstract:
Recently, decentralized optimization has attracted much attention in machine learning because it is more communication-efficient than the centralized fashion. Quantization is a promising method to reduce the communication cost by cutting down the budget of each single communication using gradient compression. To further improve the communication efficiency, some quantized decentralized algorithms have recently been studied. However, quantized decentralized algorithms for nonconvex constrained machine learning problems are still limited. The Frank-Wolfe (a.k.a. conditional gradient or projection-free) method is very efficient for solving many constrained optimization tasks, such as training low-rank or sparsity-constrained models. In this paper, to fill the gap in decentralized quantized constrained optimization, we propose a novel communication-efficient Decentralized Quantized Stochastic Frank-Wolfe (DQSFW) algorithm for non-convex constrained learning models. We first design a new counterexample to show that the vanilla decentralized quantized stochastic Frank-Wolfe algorithm usually diverges. Thus, we propose the DQSFW algorithm with the gradient tracking technique to guarantee that the method converges safely to a stationary point of the non-convex optimization. In our theoretical analysis, we prove that to reach a stationary point our DQSFW algorithm achieves the same gradient complexity as the standard stochastic Frank-Wolfe and centralized Frank-Wolfe algorithms, but with much lower communication cost. Experiments on matrix completion and model compression applications demonstrate the efficiency of our new algorithm.
ABNT, Harvard, Vancouver, APA, etc. styles
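For readers unfamiliar with the projection-free idea behind DQSFW, here is a rough sketch of the classic deterministic, centralized Frank-Wolfe method on an l1-ball constraint. The toy objective, step-size rule, and problem data are invented for illustration; this is not the paper's decentralized, quantized algorithm.

```python
# Frank-Wolfe over the l1 ball: instead of projecting, each step calls a
# linear minimization oracle (here: the best signed vertex of the ball)
# and moves toward it, so iterates stay feasible by construction.

def frank_wolfe_l1(grad, x0, radius=1.0, iters=50):
    x = list(x0)
    for t in range(iters):
        g = grad(x)
        i = max(range(len(g)), key=lambda j: abs(g[j]))  # linear minimization oracle
        s = [0.0] * len(x)
        s[i] = -radius if g[i] > 0 else radius           # vertex of the l1 ball
        gamma = 2.0 / (t + 2)                            # classic diminishing step size
        x = [(1 - gamma) * xj + gamma * sj for xj, sj in zip(x, s)]
    return x

# Toy problem: minimize ||x - b||^2 with b inside the unit l1 ball.
b = [0.6, -0.2]
x = frank_wolfe_l1(lambda x: [2 * (xj - bj) for xj, bj in zip(x, b)], [0.0, 0.0])
```

Every iterate is a convex combination of l1-ball vertices, so the constraint is never violated and no projection step is needed.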
5

Liu, Zonglin, and Olaf Stursberg. "Distributed control of networked systems with coupling constraints". at - Automatisierungstechnik 67, no. 12 (November 18, 2019): 1007–18. http://dx.doi.org/10.1515/auto-2019-0085.

Full text of the source
Abstract:
This paper proposes algorithms for the distributed solution of control problems for networked systems with coupling constraints. This type of problem is practically relevant, e.g., for subsystems which share common resources or need to pass through a bottleneck while considering non-convex state constraints. Centralized solution schemes, which typically first cast the non-convexities into mixed-integer formulations that are then solved by mixed-integer programming, suffer from high computational complexity for larger numbers of subsystems. The distributed solution proposed in this paper decomposes the centralized problem into a set of small subproblems to be solved in parallel. By iterating over the subproblems and exchanging information either among all subsystems, or within subsets selected by a coordinator, locally optimal solutions of the global problem are determined. The paper shows for two instances of distributed algorithms that feasibility as well as continuous cost reduction over the iterations up to termination can be guaranteed, while the solution times are considerably shorter than for the centralized problem. These properties are illustrated for a multi-vehicle motion problem.
ABNT, Harvard, Vancouver, APA, etc. styles
6

Capocasale, Vittorio. "Trapdoor proof of work". PeerJ Computer Science 10 (January 19, 2024): e1815. http://dx.doi.org/10.7717/peerj-cs.1815.

Full text of the source
Abstract:
Consensus algorithms play a crucial role in facilitating decision-making among a group of entities. In certain scenarios, some entities may attempt to hinder the consensus process, necessitating the use of Byzantine fault-tolerant consensus algorithms. Conversely, in scenarios where entities trust each other, more efficient crash fault-tolerant consensus algorithms can be employed. This study proposes an efficient consensus algorithm for an intermediate scenario that is both frequent and underexplored, involving a combination of non-trusting entities and a trusted entity. In particular, this study introduces a novel mining algorithm, based on chameleon hash functions, for the Nakamoto consensus. The resulting algorithm enables the trusted entity to generate tens of thousands of blocks per second even on devices with low energy consumption, like personal laptops. This algorithm holds promise for use in centralized systems that require temporary decentralization, such as the creation of central bank digital currencies, where service availability is of utmost importance.
ABNT, Harvard, Vancouver, APA, etc. styles
7

Huanca-Anquise, Candy A., Ana Lúcia Cetertich Bazzan, and Anderson R. Tavares. "Multi-Objective, Multi-Armed Bandits: Algorithms for Repeated Games and Application to Route Choice". Revista de Informática Teórica e Aplicada 30, no. 1 (January 30, 2023): 11–23. http://dx.doi.org/10.22456/2175-2745.122929.

Full text of the source
Abstract:
Multi-objective decision-making in multi-agent scenarios poses multiple challenges. Dealing with multiple objectives and non-stationarity caused by simultaneous learning are only two of them, which have been addressed separately. In this work, reinforcement learning algorithms that tackle both issues together are proposed and applied to a route choice problem, where drivers must select an action in a single-state formulation, while aiming to minimize both their travel time and toll. Hence, we deal with repeated games, now with a multi-objective approach. Advantages, limitations and differences of these algorithms are discussed. Our results show that the proposed algorithms for action selection using reinforcement learning deal with non-stationarity and multiple objectives, while providing alternative solutions to those of centralized methods.
ABNT, Harvard, Vancouver, APA, etc. styles
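As a loose illustration of the single-state, multi-objective setting the abstract describes (and not the authors' algorithms), here is a scalarized epsilon-greedy bandit for a two-route choice. The routes, travel times, tolls, and weights are all invented for the demo.

```python
import random

# Each arm (route) yields a (travel_time, toll) pair; the agent minimizes
# a weighted sum of the two objectives with epsilon-greedy exploration.

def choose_route(q_values, eps):
    if random.random() < eps:
        return random.randrange(len(q_values))                  # explore
    return min(range(len(q_values)), key=q_values.__getitem__)  # exploit lowest cost

def run(routes, weights=(0.5, 0.5), eps=0.1, steps=2000, seed=0):
    random.seed(seed)
    q = [0.0] * len(routes)   # running mean cost per route
    n = [0] * len(routes)     # pull counts
    for _ in range(steps):
        a = choose_route(q, eps)
        time_, toll = routes[a]()
        cost = weights[0] * time_ + weights[1] * toll  # linear scalarization
        n[a] += 1
        q[a] += (cost - q[a]) / n[a]                   # incremental mean update
    return q, n

routes = [lambda: (10 + random.random(), 2.0),   # slow but cheap route
          lambda: (4 + random.random(), 3.0)]    # fast but tolled route
q, n = run(routes)
```

Under these weights the fast tolled route has the lower scalarized cost, so the agent concentrates its choices on it while still sampling the alternative.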
8

Zhao, Weiwei, Hairong Chu, Xikui Miao, Lihong Guo, Honghai Shen, Chenhao Zhu, Feng Zhang, and Dongxin Liang. "Research on the Multiagent Joint Proximal Policy Optimization Algorithm Controlling Cooperative Fixed-Wing UAV Obstacle Avoidance". Sensors 20, no. 16 (August 13, 2020): 4546. http://dx.doi.org/10.3390/s20164546.

Full text of the source
Abstract:
Multiple unmanned aerial vehicle (UAV) collaboration has great potential. To increase the intelligence and environmental adaptability of multi-UAV control, we study the application of deep reinforcement learning algorithms in the field of multi-UAV cooperative control. To address the non-stationary environment caused by changing agent strategies during reinforcement learning in a multi-agent environment, the paper presents an improved multiagent reinforcement learning algorithm, the multiagent joint proximal policy optimization (MAJPPO) algorithm, with centralized learning and decentralized execution. This algorithm uses the moving-window averaging method to give each agent a centralized state value function, so that the agents can collaborate better. The improved algorithm enhances collaboration and increases the sum of reward values obtained by the multiagent system. To evaluate the performance of the algorithm, we use the MAJPPO algorithm to complete the task of multi-UAV formation and the crossing of multiple-obstacle environments. To simplify the control complexity of the UAV, we use the six-degree-of-freedom, 12-state equations of the dynamics model of the UAV with an attitude control loop. The experimental results show that the MAJPPO algorithm has better performance and better environmental adaptability.
ABNT, Harvard, Vancouver, APA, etc. styles
9

Tan, Fuxiao. "The Algorithms of Distributed Learning and Distributed Estimation about Intelligent Wireless Sensor Network". Sensors 20, no. 5 (February 27, 2020): 1302. http://dx.doi.org/10.3390/s20051302.

Full text of the source
Abstract:
The intelligent wireless sensor network is a distributed network system with high "network awareness". Each intelligent node (agent) is connected through the topology within its neighborhood; it can not only perceive the surrounding environment but also adjust its own behavior according to its local perception information to construct distributed learning algorithms. Therefore, three basic intelligent network topologies, centralized, non-cooperative, and cooperative, are intensively investigated in this paper. The main contributions of the paper are two-fold. First, based on algebraic graph theory, three basic theoretical frameworks for distributed learning and distributed parameter estimation under the cooperative strategy are surveyed: the incremental strategy, the consensus strategy, and the diffusion strategy. Second, based on classical adaptive learning algorithms and online update laws, the implementation process of distributed estimation algorithms and the latest research progress on the above three distributed strategies are investigated.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Nageswara Rao, C. V., and Putta Vihari. "RNGA based Centralized PI Controller for Multivariable Non Square Systems using Direct Synthesis Method". International Journal of Innovative Technology and Exploring Engineering 12, no. 6 (May 30, 2023): 1–10. http://dx.doi.org/10.35940/ijitee.f9518.0512623.

Full text of the source
Abstract:
The design of centralized PI controllers for multivariable non-square systems is proposed in the present work. The centralized controller is designed based on the direct synthesis method. The method includes approximating the inverse of the process transfer matrix with the effective transfer function matrix. The effective transfer function for each element in the process transfer function matrix is derived using the relative normalized gain array (RNGA) and relative average residence time array (RARTA) concepts proposed by Cai et al. [1]. The transfer function models used in the present work include first-order processes with time delay (FOPDT). A Maclaurin series is applied to reduce the resulting controllers into standard PI forms. The design method requires a single tuning parameter (the filter time constant) to adjust the performance of the controller. A simulation study is carried out for various case studies, and the results show the advantage of the proposed method over methods reported in the literature. The control algorithms are comparatively analyzed using a standard robust stability measure. The designed controllers give good performance with less interaction compared to the literature methods, Davison's method [2] and Tanttu and Lieslehto's method [3].
ABNT, Harvard, Vancouver, APA, etc. styles
11

Jia, Si Yu, Wen Hui Hao, and Xing Jian Wang. "A distributed beamforming interference minimization algorithm based on node selection". Journal of Physics: Conference Series 2525, no. 1 (June 1, 2023): 012022. http://dx.doi.org/10.1088/1742-6596/2525/1/012022.

Full text of the source
Abstract:
Distributed beamforming uses nodes in a wireless sensor network to transmit signals in different phases with controllable delay, obtaining coherent output signals with a gain after superposition. However, a wireless sensor network has a large topological area and wide distribution range, and it is difficult for distributed beamforming to obtain a highly directional beam as centralized beamforming does, which causes interference to non-target base stations. To solve this problem, a discrete adaptive dual-population cooperative differential evolution (DPCDE) algorithm is proposed, which can effectively reduce the interference by selecting nodes suitable for participating in distributed beamforming in a wireless sensor network. Simulation results show that the proposed algorithm can optimize the node set participating in distributed beamforming to minimize the interference of the wireless sensor network to non-target base stations, and its effect is better than other classic intelligent optimization algorithms.
ABNT, Harvard, Vancouver, APA, etc. styles
12

Baranov, L. A., V. G. Sidorenko, E. P. Balakina, and L. N. Loginova. "Intelligent centralized traffic management of a rapid transit system under heavy traffic". Dependability 21, no. 2 (June 2, 2021): 17–23. http://dx.doi.org/10.21683/1729-2646-2021-21-2-17-23.

Full text of the source
Abstract:
Aim. In today's major cities, increased utilization and capacity of rapid transit systems (metro, light rail, commuter trains with stops within the city limits), under conditions of positive traffic safety, is achieved through smart automatic train traffic management. The aim of this paper is to choose and substantiate the design principles and architecture of such a system. Methods. Using systems analysis, the design principles and architecture of the system are substantiated. Genetic algorithms allow automating train traffic planning. Methods of optimal control theory allow managing energy-efficient train movement patterns along open lines, assigning individual station-to-station running times following the principle of minimal energy consumption, and developing energy-efficient target traffic schedules. Methods of automatic control theory are used for selecting and substantiating the train traffic algorithms at various functional levels and for constructing random disturbance extrapolators that minimize the number of train stops between stations. Results. Development and substantiation of the design principles and architecture of a centralized intelligent hierarchical system for automatic rapid transit traffic management. The distribution of functions between the hierarchy levels is described, and the set of subsystems that implement the purpose of management, i.e., ensuring traffic safety and passenger comfort, is shown. The criteria of management quality under compensated and non-compensated disturbances are defined and substantiated. Traffic management and target scheduling automation algorithms are examined. The application of decision algorithms in the context of uncertainty and the use of disturbance prediction and genetic algorithms for train traffic planning automation are demonstrated. The design principles of the traffic planning and management algorithms that ensure reduced traction energy consumption are shown. The efficiency of the centralized intelligent rapid transit management system is demonstrated; the fundamental role of the system in the digitalization of the transport system is noted. Conclusion. The examined design principles and operating algorithms of a centralized intelligent rapid transit management system showed the efficiency of such systems, ensured by the following: increased capacity of the rapid transit system; improved energy efficiency of train traffic planning and management; improved train traffic safety; assurance of operational traffic management during emergencies and major traffic disruptions; improved passenger comfort.
ABNT, Harvard, Vancouver, APA, etc. styles
13

Shofer, Bar, Guy Shani, and Roni Stern. "Multi Agent Path Finding under Obstacle Uncertainty". Proceedings of the International Conference on Automated Planning and Scheduling 33, no. 1 (July 1, 2023): 402–10. http://dx.doi.org/10.1609/icaps.v33i1.27219.

Full text of the source
Abstract:
In multi-agent path finding (MAPF), several agents must move from their current positions to their target positions without colliding. Prior work on MAPF commonly assumed perfect knowledge of the environment. We consider a MAPF setting where this is not the case, and the planner does not know a priori whether some positions are blocked or not. To sense whether such a position is traversable, an agent must move close to it and adapt its behavior accordingly. In this work we focus on solving this type of MAPF problem for cases where planning is centralized but cannot be done during execution. In this setting, a solution can be formulated as a plan tree for each agent, branching on the observations. We propose algorithms for finding such plan trees for two modes of execution: centralized, where the agents share information concerning observed obstacles during execution, and decentralized, where such communication is not allowed. The proposed algorithms are complete and can be configured to optimize solution cost, measured for either the best case or the worst case. We implemented these algorithms and provide experimental results demonstrating how our approach scales with respect to the number of agents and the number of positions we are uncertain about. The results show that our algorithms can solve non-trivial problems, but also highlight that this type of MAPF problem is significantly harder than classical MAPF.
ABNT, Harvard, Vancouver, APA, etc. styles
14

ElBakoury, Hesham, Martin Reisslein, Akhilesh S. Thyagaturu, Venkatraman Balasubramanian, and Ahmed Nasrallah. "Reconfiguration algorithms for high precision communications in time sensitive networks: Time-aware shaper configuration with IEEE 802.1Qcc". ITU Journal on Future and Evolving Technologies 2, no. 1 (March 15, 2021): 13–34. http://dx.doi.org/10.52953/sivv2522.

Full text of the source
Abstract:
As new networking paradigms emerge for different networking applications, e.g., cyber-physical systems, and different services are handled under a converged data link technology, e.g., Ethernet, certain applications with mission-critical traffic cannot coexist on the same physical networking infrastructure using traditional Ethernet packet-switched networking protocols. The IEEE 802.1Q Time Sensitive Networking (TSN) Task Group is developing protocol standards to provide deterministic properties, i.e., to eliminate non-deterministic delays, on Ethernet-based packet-switched networks. In particular, the IEEE 802.1Qcc centralized management and control and the IEEE 802.1Qbv Time-Aware Shaper (TAS) can be used to manage and control Scheduled Traffic (ST) streams with periodic properties along with Best-Effort (BE) traffic on the same network infrastructure. We investigate the effects of using the IEEE 802.1Qcc management protocol to accurately and precisely configure TAS-enabled switches (with transmission windows governed by Gate Control Lists (GCLs) with Gate Control Entries (GCEs)), ensuring ultra-low bounded latency, zero packet loss, and minimal jitter for ST TSN traffic. We examine both a centralized network/distributed user model (hybrid model) and a fully-distributed (decentralized) 802.1Qcc model on a typical industrial control network, with the goal of maximizing the number of ST streams.
ABNT, Harvard, Vancouver, APA, etc. styles
15

Ignaciuk, Przemysław, and Łukasz Wieczorek. "Continuous Genetic Algorithms in the Optimization of Logistic Networks: Applicability Assessment and Tuning". Applied Sciences 10, no. 21 (November 5, 2020): 7851. http://dx.doi.org/10.3390/app10217851.

Full text of the source
Abstract:
Globalization opens up new perspectives for handling goods distribution in logistic networks. However, establishing an efficient inventory policy is challenging owing to the analytical and computational complexity. In this study, the goods distribution process governed by the order-up-to policy, implemented in either a distributed or centralized way, was investigated in logistic systems with complex interconnection topologies. Uncertain demand may be imposed at any node, not just at conveniently chosen contact points, with a lost-sales assumption that introduces a non-linearity into the node dynamics. To adjust the policy parameters, the continuous genetic algorithm (CGA) was applied, with a fitness function incorporating both the operational costs and the customer satisfaction level. This study investigated how to select the parameters of this popular inventory management policy when operating in non-trivial networked structures. Moreover, precise guidelines for CGA tuning in the considered class of problems were provided and evaluated in extensive numerical experiments.
ABNT, Harvard, Vancouver, APA, etc. styles
16

Lin, Liu, Li, Wang, Zeng, Chen, and Yu. "Optimal Placement of Multiple Feeder Terminal Units Using Intelligent Algorithms". Applied Sciences 10, no. 1 (December 31, 2019): 299. http://dx.doi.org/10.3390/app10010299.

Full text of the source
Abstract:
To solve the placement problem of three kinds of feeder terminal units (FTUs) in the distribution network, this paper proposes a novel mathematical model. The model considers economic cost and electricity supply reliability from the perspective of life-cycle cost. The reliability algorithm in this model is established for a distribution network configured with centralized feeder automation. Different evaluation indices of reliability and the importance of several kinds of customers are also considered in this model. For the reliability evaluation in this model, this paper puts forward a reliability analysis method for the distribution network with three kinds of FTUs. Given the difficulty of expressing the reliability of the distribution network as a formula in the decision variables, and the non-deterministic polynomial hard (NP-hard) nature of this model, a variety of intelligent algorithms are applied to solve the model. The feasibility and effectiveness of the model and methods for the FTU placement optimization problem are verified by a case study of the Roy Billinton test system (RBTS) Bus 5 system.
ABNT, Harvard, Vancouver, APA, etc. styles
17

Wang, Jing. "On Sign-board Based Inter-Robot Communication in Distributed Robotic Systems". Journal of Robotics and Mechatronics 8, no. 5 (October 20, 1996): 467–72. http://dx.doi.org/10.20965/jrm.1996.p0467.

Full text of the source
Abstract:
Inter-robot communication based on the conceptual mechanism of the "sign-board" in Distributed Robotic Systems (DRS) is discussed. Carried by each robot, a sign-board can be written only by the robot that carries it and read by robots in its neighborhood. Consistent with DRS principles, the sign-board model is not supported by any centralized mechanism and is considered a natural way of interaction among autonomous robotic units. It is shown that, along with message passing, the sign-board model is one of the two important mechanisms for inter-robot communication. Previous research on DRS algorithms employing the sign-board model assumes zero signal propagation delay. These algorithms may fail if non-zero propagation delay is taken into account. A simple fix for these algorithms exists if the propagation delay is bounded. Implementation strategies for the conceptual sign-board are also discussed.
ABNT, Harvard, Vancouver, APA, etc. styles
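The sign-board rule the abstract describes (only the owning robot writes, any neighbor reads) can be caricatured in a few lines. The class and robot names below are invented for illustration, and the paper's central concern, propagation delay, is deliberately not modeled.

```python
# A toy sign-board: owner-only writes, unrestricted reads. No centralized
# broker is involved; each board belongs to exactly one robot.

class SignBoard:
    def __init__(self, owner):
        self.owner = owner
        self._message = None

    def write(self, robot, message):
        if robot != self.owner:
            raise PermissionError("only the owning robot may write its sign-board")
        self._message = message

    def read(self):
        return self._message  # any robot in the neighborhood may read

board = SignBoard("r1")
board.write("r1", "exploring corridor A")
```

A neighbor that tries to write another robot's board is rejected, which is exactly the asymmetry that distinguishes the sign-board from shared memory.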
18

Späth, Julian, Julian Matschinske, Frederick K. Kamanu, Sabina A. Murphy, Olga Zolotareva, Mohammad Bakhtiari, Elliott M. Antman, et al. "Privacy-aware multi-institutional time-to-event studies". PLOS Digital Health 1, no. 9 (September 6, 2022): e0000101. http://dx.doi.org/10.1371/journal.pdig.0000101.

Full text of the source
Abstract:
Clinical time-to-event studies depend on large sample sizes, often not available at a single institution. This is countered by the fact that, particularly in the medical field, individual institutions are often legally unable to share their data, as medical data is subject to strong privacy protection due to its particular sensitivity. Moreover, the collection, and especially the aggregation, of data into centralized datasets is fraught with substantial legal risks and often outright unlawful. Existing solutions using federated learning have already demonstrated considerable potential as an alternative to central data collection. Unfortunately, current approaches are incomplete or not easily applicable in clinical studies owing to the complexity of federated infrastructures. This work presents privacy-aware and federated implementations of the most used time-to-event algorithms (survival curve, cumulative hazard rate, log-rank test, and Cox proportional hazards model) in clinical trials, based on a hybrid approach of federated learning, additive secret sharing, and differential privacy. On several benchmark datasets, we show that all algorithms produce highly similar, or in some cases even identical, results compared to traditional centralized time-to-event algorithms. Furthermore, we were able to reproduce the results of a previous clinical time-to-event study in various federated scenarios. All algorithms are accessible through the intuitive web app Partea (https://partea.zbh.uni-hamburg.de), offering a graphical user interface for clinicians and non-computational researchers without programming knowledge. Partea removes the high infrastructural hurdles derived from existing federated learning approaches and removes the complexity of execution. Therefore, it is an easy-to-use alternative to central data collection, reducing bureaucratic efforts as well as the legal risks associated with the processing of personal data to a minimum.
ABNT, Harvard, Vancouver, APA, etc. styles
19

Liao, Xuankun, Qing Liu, Jiaxin Jiang, Xin Huang, Jianliang Xu, and Byron Choi. "Distributed D-core decomposition over large directed graphs". Proceedings of the VLDB Endowment 15, no. 8 (April 2022): 1546–58. http://dx.doi.org/10.14778/3529337.3529340.

Full text of the source
Abstract:
Given a directed graph G and integers k and l, a D-core is the maximal subgraph H ⊆ G such that every vertex of H has in-degree and out-degree no smaller than k and l, respectively. For a directed graph G, the problem of D-core decomposition aims to compute the non-empty D-cores for all possible values of k and l. In the literature, several peeling-based algorithms have been proposed to handle D-core decomposition. However, the peeling-based algorithms, which work in a sequential fashion and require global graph information during processing, are mainly designed for centralized settings and cannot handle large-scale graphs efficiently in distributed settings. Motivated by this, we study the distributed D-core decomposition problem in this paper. We start by defining a concept called anchored coreness, based on which we propose a new H-index-based algorithm for distributed D-core decomposition. Furthermore, we devise a novel concept, namely skyline coreness, and show that the D-core decomposition problem is equivalent to the computation of skyline corenesses for all vertices. We design an efficient D-index to compute the skyline corenesses distributedly. We implement the proposed algorithms under both vertex-centric and block-centric distributed graph processing frameworks. Moreover, we theoretically analyze the algorithm and message complexities. Extensive experiments on large real-world graphs with billions of edges demonstrate the efficiency of the proposed algorithms in terms of both running time and communication overhead.
ABNT, Harvard, Vancouver, APA, etc. styles
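The abstract's D-core definition (every vertex with in-degree at least k and out-degree at least l) admits a compact sketch of the centralized peeling baseline that the paper's distributed algorithms improve on. The example graph below is invented for illustration.

```python
# Peeling for the (k, l)-D-core: repeatedly delete vertices that violate
# the degree thresholds until the remaining subgraph is stable.

def d_core(edges, k, l):
    """Return the vertex set of the maximal subgraph with in-deg >= k, out-deg >= l."""
    nodes = {u for e in edges for u in e}
    while True:
        indeg = {v: 0 for v in nodes}
        outdeg = {v: 0 for v in nodes}
        for u, v in edges:
            if u in nodes and v in nodes:   # count only surviving endpoints
                outdeg[u] += 1
                indeg[v] += 1
        bad = {v for v in nodes if indeg[v] < k or outdeg[v] < l}
        if not bad:
            return nodes
        nodes -= bad                        # peel and re-check the remainder

# Tiny directed graph: a 3-cycle plus a dangling sink d.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
core = d_core(edges, 1, 1)
```

Vertex d has out-degree 0, so it is peeled away, leaving the 3-cycle as the (1, 1)-D-core. This O(n·m) loop is exactly the kind of sequential, global-information procedure the paper replaces with distributed H-index-style computation.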
20

Sánchez, Paloma, Rafael Casado, and Aurelio Bermúdez. "Real-Time Collision-Free Navigation of Multiple UAVs Based on Bounding Boxes". Electronics 9, no. 10 (October 3, 2020): 1632. http://dx.doi.org/10.3390/electronics9101632.

Full text of the source
Abstract:
Predictably, future urban airspaces will be crowded with autonomous unmanned aerial vehicles (UAVs) offering different services to the population. One of the main challenges in this new scenario is the design of collision-free navigation algorithms to avoid conflicts between flying UAVs. The most appropriate collision avoidance strategies for this scenario are non-centralized ones that are executed dynamically (in real time). Existing collision avoidance methods usually entail a high computational cost. In this work, we present the Bounding Box Collision Avoidance (BBCA) algorithm, a simplified velocity-obstacle-based technique that achieves a balance between efficiency and cost. The performance of the proposal is analyzed in detail in different airspace configurations. Simulation results show that the method is able to avoid all the conflicts in two-UAV scenarios and most of them in multi-UAV ones. At the same time, we have found that the penalty of using the BBCA collision avoidance technique on the flying time and the distance covered by the UAVs involved in the conflict is reasonably acceptable. Therefore, we consider that BBCA may be an excellent candidate for the design of collision-free navigation algorithms for UAVs.
ABNT, Harvard, Vancouver, APA, etc. styles
21

Yao, Wenbin, Bangli Pan, Yingying Hou, Xiaoyong Li e Yamei Xia. "An Adaptive Model Filtering Algorithm Based on Grubbs Test in Federated Learning". Entropy 25, n.º 5 (26 de abril de 2023): 715. http://dx.doi.org/10.3390/e25050715.

Texto completo da fonte
Resumo:
Federated learning has been popular for its ability to train centralized models while protecting clients’ data privacy. However, federated learning is highly susceptible to poisoning attacks, which can result in a decrease in model performance or even make it unusable. Most existing defense methods against poisoning attacks cannot achieve a good trade-off between robustness and training efficiency, especially on non-IID data. Therefore, this paper proposes an adaptive model filtering algorithm based on the Grubbs test in federated learning (FedGaf), which can achieve great trade-offs between robustness and efficiency against poisoning attacks. To achieve a trade-off between system robustness and efficiency, multiple child adaptive model filtering algorithms have been designed. Meanwhile, a dynamic decision mechanism based on global model accuracy is proposed to reduce additional computational costs. Finally, a global model weighted aggregation method is incorporated, which improves the convergence speed of the model. Experimental results on both IID and non-IID data show that FedGaf outperforms other Byzantine-robust aggregation rules in defending against various attack methods.
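FedGaf applies the Grubbs test to client model updates; as a generic sketch of the test itself (not the paper's federated pipeline), flagging at most one outlier at a time and taking a caller-supplied critical value rather than computing it from the t-distribution:

```python
import math

def grubbs_statistic(xs):
    """Return (G, i): the Grubbs statistic G = max |x_i - mean| / s
    (s = sample standard deviation) and the index of the extreme value."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    devs = [abs(x - mean) for x in xs]
    i = max(range(n), key=lambda j: devs[j])
    return devs[i] / s, i

def filter_outlier(xs, critical_g):
    """Drop the single most extreme value if its G exceeds the critical value."""
    g, i = grubbs_statistic(xs)
    return [x for j, x in enumerate(xs) if j != i] if g > critical_g else list(xs)
```

For n = 5 at a significance level of 0.05, standard tables give a critical value of about 1.715, so `filter_outlier([1.0, 1.1, 0.9, 1.05, 5.0], 1.715)` removes the 5.0.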
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Permatasari, Hanifah, Triyono Triyono e Eko Purwanto. "The Overview of Algorithms Implementation in File Search Applications or Digital Archives". IJEEIT : International Journal of Electrical Engineering and Information Technology 6, n.º 1 (31 de março de 2023): 18–24. http://dx.doi.org/10.29138/ijeeit.v6i1.2069.

Texto completo da fonte
Resumo:
As time goes by, the number of documents, files, and archives in a company continues to increase. These files take the form of data collections, activity reports, accountability reports, work proposals, letters, decrees, regulations, and so on. At first, companies tried to convert paper-based files into digital files and took advantage of several free platforms to store them. However, companies found it difficult to control digital files because of non-centralized storage. Finally, companies tried to create information systems that allow them to manage digital files centrally. The large number of digital files makes the search process longer, so the search features need to be optimized. The purpose of this study is to review several algorithms that have been implemented to optimize file searches in previous studies. This research draws on Indonesian scientific articles from Google Scholar published between 2015 and 2022. The result is an explanation of the use of algorithms in previous studies, so that this research can serve as a reference and suggestion for further research. This study found that every information system or application that manages company files or archives requires a search algorithm to improve system performance.
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

He, Chun, Ke Guo e Huayue Chen. "An Improved Image Filtering Algorithm for Mixed Noise". Applied Sciences 11, n.º 21 (4 de novembro de 2021): 10358. http://dx.doi.org/10.3390/app112110358.

Texto completo da fonte
Resumo:
In recent years, image filtering has been a hot research direction in the field of image processing. Experts and scholars have proposed many methods for noise removal in images, and these methods have achieved quite good denoising results. However, most methods target a single noise type, such as Gaussian noise, salt-and-pepper noise, or multiplicative noise. For mixed-noise removal, such as salt-and-pepper noise + Gaussian noise, although some methods are currently available, the denoising effect is not ideal and there is still much room for improvement. To solve this problem, this paper proposes a filtering algorithm for mixed salt-and-pepper + Gaussian noise that combines an improved median filtering algorithm, an improved wavelet threshold denoising algorithm, and an improved Non-Local Means (NLM) algorithm. The algorithm makes full use of the advantages of the median filter in removing salt-and-pepper noise and of the good performance of the wavelet threshold denoising algorithm and the NLM algorithm in filtering Gaussian noise. First, we improved the three algorithms individually, and then combined them according to a certain process to obtain a new method for removing mixed noise. Specifically, we adjusted the window size of the median filtering algorithm and improved its method of detecting noise points. We improved the threshold function of the wavelet threshold algorithm, analyzed its relevant mathematical characteristics, and finally gave an adaptive threshold. For the NLM algorithm, we improved its Euclidean distance function and the corresponding distance weight function.
In order to test the denoising effect of this method, salt-and-pepper + Gaussian noise at different noise levels was added to the test images, and several state-of-the-art denoising algorithms were selected for comparison with our algorithm, including K-Singular Value Decomposition (KSVD), Non-locally Centralized Sparse Representation (NCSR), Structured Overcomplete Sparsifying Transform Model with Block Cosparsity (OCTOBOS), Trilateral Weighted Sparse Coding (TWSC), Block Matching and 3D Filtering (BM3D), and Weighted Nuclear Norm Minimization (WNNM). Experimental results show that our proposed algorithm is about 2–7 dB higher than the above algorithms in Peak Signal-to-Noise Ratio (PSNR), and also performs better in Root Mean Square Error (RMSE), Structural Similarity (SSIM), and Feature Similarity (FSIM). In general, our algorithm has better denoising performance, better restoration of image details and edge information, and stronger robustness than the above-mentioned algorithms.
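The paper's improved median filter adapts its window size and noise detection; as a baseline, the plain 3×3 median filter such a method starts from can be sketched as:

```python
def median_filter(img, size=3):
    """Apply a size x size median filter to a 2D grayscale image given as a
    list of lists, replicating border pixels. A plain (non-adaptive) version
    of the salt-and-pepper stage described above."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the window, clamping coordinates at the image border.
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

An isolated impulse (e.g. a 255 surrounded by 10s) is replaced by the window median, which is exactly why the median filter suits salt-and-pepper noise.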
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Basaligheh, Parvaneh. "Optimal Coverage in Wireless Sensor Network using Augmented Nature-Inspired Algorithm". International Journal on Future Revolution in Computer Science & Communication Engineering 8, n.º 2 (30 de junho de 2022): 71–78. http://dx.doi.org/10.17762/ijfrcsce.v8i2.2082.

Texto completo da fonte
Resumo:
One of the difficult problems that must be carefully considered before any network configuration is getting the best possible network coverage. Optimal network coverage reduces the amount of redundant information that is sensed, which in turn reduces the restricted energy consumption of battery-powered sensors. WSN sensors can sense, receive, and send data concurrently. Along with the energy limitation, accurate sensing and non-redundant data are crucial challenges for WSNs. All of this must be accomplished while maximizing coverage and minimizing the waste of the constrained sensor battery lifespan. Augmented nature-inspired algorithms show promise as a solution to the crucial problems in Wireless Sensor Networks (WSNs), particularly those related to reduced sensor lifetime. In this research, we focus on augmented nature-inspired algorithms for providing the best coverage in WSNs. In wireless sensor networks, the cluster head is chosen using the Diversity-Driven Multi-Parent Evolutionary Algorithm. For data encryption, Improved Identity-Based Encryption (IIBE) is used. For centralized optimization and reducing coverage gaps in WSNs, time-variant Particle Swarm Optimization (PSO) is used. The suggested model's metrics are examined and compared to various traditional algorithms. This model addresses the reduced sensor lifetime and redundant information in Wireless Sensor Networks (WSNs) and provides effective optimal coverage.
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Wu, Weicong, Tao Yu, Zhuohuan Li e Hanxin Zhu. "Decentralized Optimization of Electricity-Natural Gas Flow Considering Dynamic Characteristics of Networks". Applied Sciences 10, n.º 10 (12 de maio de 2020): 3348. http://dx.doi.org/10.3390/app10103348.

Texto completo da fonte
Resumo:
The interconnection of power and natural gas systems can improve the flexibility of system operation and the capacity for renewable energy consumption. It is necessary to consider the interaction between the two and carry out collaborative optimization of energy flow. Accounting for the spatio-temporally coupled line pack, this paper studies the optimal multi-energy flow (OMEF) model of an integrated electricity-gas system, taking into account the dynamic characteristics of the natural gas system. In addition, to avoid the large data collection required by centralized algorithms and to respect the decentralized, autonomous decision-making of each subsystem, this paper proposes a decentralized algorithm for the OMEF problem. This algorithm transforms the original non-convex OMEF problem into an iterative convex programming problem through the penalty convex-concave procedure (PCCP), and then uses the alternating direction method of multipliers (ADMM) at each PCCP iteration to perform decentralized collaborative optimization of power flow and natural gas flow. Finally, numerical simulations verify the effectiveness and accuracy of the proposed algorithm and analyze the effects of the networks' dynamic characteristics on system operation.
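As a toy illustration of the ADMM building block only (not the paper's OMEF formulation), consensus ADMM on scalar quadratic objectives lets each agent update locally while the shared variable z converges to the average of the agents' local targets:

```python
def admm_consensus(a, rho=1.0, iters=100):
    """Consensus ADMM sketch: each agent i minimizes (1/2)(x_i - a_i)^2
    subject to x_i = z. The consensus variable z converges to mean(a)."""
    n = len(a)
    x = list(a)
    u = [0.0] * n   # scaled dual variables
    z = 0.0
    for _ in range(iters):
        # Local (decentralized) x-updates, one per agent.
        x = [(a[i] + rho * (z - u[i])) / (1 + rho) for i in range(n)]
        # Consensus z-update (the only coordination step).
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Dual updates.
        u = [u[i] + x[i] - z for i in range(n)]
    return z
```

The variable names and the scalar quadratic objective are illustrative assumptions; the paper's subproblems are network flow programs convexified by PCCP.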
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

Hassan Rahmah Zagi e Abeer Tariq Maolood. "A NOVEL SERPENT ALGORITHM IMPROVEMENT BY THE KEY SCHEDULE INCREASE SECURITY". Tikrit Journal of Pure Science 25, n.º 6 (24 de dezembro de 2020): 114–25. http://dx.doi.org/10.25130/tjps.v25i6.320.

Texto completo da fonte
Resumo:
Block encryption algorithms rely on their two most important features, complexity and ease of use, to support the security requirements (confidentiality, data integrity, and non-repudiation) that prevent unauthorized users from entering the system and tampering with centralized data, disrupting it, or disclosing it. The data encryption and decryption process is performed using the Serpent algorithm, one of the most important AES algorithm proposals. In this paper, a new proposal is presented to improve and support the confidentiality of data while adhering to the external structure of the standard algorithm, by designing a new approach to the key generation function, because the strength of a block cipher relies on the use of a strong and unique key. Several functions were used: the GOST external structure combined with rotation (Shift <<<), the AES key schedule, and MD5. The results of the proposed method were examined using statistical measures, yielding good results and overcoming the weakness of the key generation function of the original algorithm, in addition to enhancing the most important cryptographic features: confusion, diffusion, and increased randomness.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Montes-Gonzalez, F., P. Bautista-Cabrera e V. Escobar-Ruiz. "The Use of Evolution in a Central Action Selection Model". Applied Bionics and Biomechanics 4, n.º 3 (2007): 91–100. http://dx.doi.org/10.1155/2007/496480.

Texto completo da fonte
Resumo:
The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods into the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the robot's sensory information into a selection process that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts of a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder places a heavy burden on the calculations carried out by the genetic algorithm.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Zhang, Yi, e Yangkun Zhou. "Research on Microgrid Optimal Dispatching Based on a Multi-Strategy Optimization of Slime Mould Algorithm". Biomimetics 9, n.º 3 (23 de fevereiro de 2024): 138. http://dx.doi.org/10.3390/biomimetics9030138.

Texto completo da fonte
Resumo:
In order to cope with the problems of energy shortage and environmental pollution, carbon emissions need to be reduced, and so the structure of the power grid is constantly being optimized. Traditional centralized power networks are not as capable of controlling and distributing non-renewable energy as distributed power grids are. Therefore, the optimal dispatch of microgrids faces increasing challenges. This paper proposes a multi-strategy fusion slime mould algorithm (MFSMA) to tackle the microgrid optimal dispatching problem. Traditional swarm intelligence algorithms suffer from slow convergence, low efficiency, and the risk of falling into local optima. To overcome these challenges, the MFSMA employs reverse learning to enlarge the search space and avoid local optima. Furthermore, adaptive parameters ensure a thorough search during the algorithm's iterations: the focus is on exploring the solution space in the early stages, while convergence is accelerated in the later stages to ensure efficiency and accuracy. The salp swarm algorithm's search mode is also incorporated to expedite convergence. MFSMA and other algorithms were compared on benchmark functions, and the tests showed that MFSMA performs better. Simulation results demonstrate the superior performance of the MFSMA for function optimization, particularly in solving the 24 h microgrid optimal scheduling problem, which considers multiple energy sources such as wind turbines, photovoltaics, and energy storage. A microgrid model based on the MFSMA is established in this paper. Simulation of the proposed algorithm reveals its ability to enhance energy utilization efficiency, reduce total network costs, and minimize environmental pollution. The contributions of this paper are as follows: (1) A comprehensive microgrid dispatch model is proposed. (2) Environmental costs and operation and maintenance costs are taken into consideration.
(3) Two modes of grid-tied operation and island operation are considered. (4) This paper uses a multi-strategy optimized slime mould algorithm to optimize scheduling, and the algorithm has excellent results.
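The "reverse learning" strategy mentioned above is commonly implemented as opposition-based learning; assuming that reading, a one-dimensional sketch (the candidate representation and fitness function are illustrative, not the paper's) is:

```python
import random

def opposition_init(pop_size, lo, hi, fitness):
    """Opposition-based initialization sketch: generate a random population,
    add each candidate's 'opposite' (lo + hi - x), and keep the fitter half.
    fitness is minimized."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    pop += [lo + hi - x for x in pop]        # mirrored candidates
    pop.sort(key=fitness)
    return pop[:pop_size]
```

Evaluating each candidate together with its mirror image roughly doubles the chance that at least one starting point lies near the optimum, which is why the MFSMA uses it to enlarge the search space.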
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Ahmad, Iman Ameer, Muna Mohammed Jawad Al-Nayar e Ali M. Mahmood. "Investigation of Energy Efficient Clustering Algorithms in WSNs: A Review". Mathematical Modelling of Engineering Problems 9, n.º 6 (31 de dezembro de 2022): 1693–703. http://dx.doi.org/10.18280/mmep.090631.

Texto completo da fonte
Resumo:
In recent years, Wireless Sensor Networks (WSNs) have attracted more attention in many fields, as they are extensively used in a wide range of applications, such as environment monitoring, the Internet of Things, industrial operation control, electric distribution, and the oil industry. One of the major concerns in these networks is the limited energy sources. Clustering and routing algorithms represent one of the critical issues that directly contribute to power consumption in WSNs. Therefore, optimization techniques and routing protocols for such networks have to be studied and developed. This paper focuses on the most recent studies and algorithms that handle energy-efficient clustering and routing in WSNs. In addition, the prime issues in these networks are discussed and summarized using comparison tables, including the main features, limitations, and the simulation toolbox used. Energy efficiency is compared across techniques: with the “Distributed” clustering mode and “Uniform” CH distribution, HEED and EECS are best, while under non-uniform clustering, both DDAR and THC are efficient; with the “Centralized” clustering mode and “Uniform” CH distribution, the LEACH-C protocol is more effective.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Deng, Weichu, Teng Huang e Haiyang Wang. "A Review of the Key Technology in a Blockchain Building Decentralized Trust Platform". Mathematics 11, n.º 1 (26 de dezembro de 2022): 101. http://dx.doi.org/10.3390/math11010101.

Texto completo da fonte
Resumo:
Currently, the trust mechanisms of various Internet application platforms are still built under the orders of centralized authorities. This centralized trust mechanism generally suffers from problems such as excessive power of central nodes, single points of failure, and data privacy leakage. Blockchain is a new type of distributed data architecture with non-tamperability, openness and transparency, and traceability, which can achieve secure and trustworthy sharing of data without the participation of third-party authorities. The decentralized trust mechanism built on the blockchain provides a new research paradigm, with broad development prospects, for establishing reliable information sharing under conditions of incomplete trust in finance, healthcare, energy, and data security. In response to the issues exposed by centralized trust mechanisms in recent years, and based on the critical technologies of blockchain, this paper surveys the relevant literature around the vital issue of building a decentralized and secure trust mechanism. First, the decentralized trust mechanism architecture is laid out by comparing different decentralized platforms. The blockchain is divided into the data layer, network layer, consensus layer, contract layer, and application layer, which correspond to the theory, implementation, operation, extension, and application of the decentralized trust mechanism of a blockchain as a decentralized platform. Secondly, the principles and technologies of blockchain are elaborated in detail, focusing on the underlying principles, consensus algorithms, and smart contracts. Finally, blockchain problems and development directions are summarized in light of the relevant literature.
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Ariza Vesga, Luis Felipe, Johan Sebastián Eslava Garzón e Rafael Puerta Ramirez. "EF1-NSGA-III: An evolutionary algorithm based on the first front to obtain non-negative and non-repeated extreme points". Ingeniería e Investigación 40, n.º 3 (21 de outubro de 2020): 55–69. http://dx.doi.org/10.15446/inginvestig.v40n3.82906.

Texto completo da fonte
Resumo:
Multi-Objective and Many-Objective Optimization problems have been extensively solved with evolutionary algorithms over the last few decades. Although NSGA-II and NSGA-III are frequently employed as references for the comparative evaluation of new evolutionary algorithms, the latter is proprietary. In this paper, we used the basic framework of NSGA-II, which is very similar to that of NSGA-III, with significant changes in its selection operator. We took the first front generated by the non-dominated sorting procedure to obtain non-negative and non-repeated extreme points. This open-source version of the NSGA-III is called EF1-NSGA-III, and its implementation does not start from scratch; that would be reinventing the wheel. Instead, we took the NSGA-II code from the authors in the repository of the Kanpur Genetic Algorithms Laboratory and extended it into the EF1-NSGA-III. We then adjusted its selection operator from diversity based on the crowding distance to one based on reference points, and preserved its parameters. After that, we continued with the adaptive EF1-NSGA-III (A-EF1-NSGA-III) and the efficient adaptive EF1-NSGA-III (A2-EF1-NSGA-III), while also explaining how to generate different types of reference points. The proposed algorithms solve optimization problems with constraints of up to 10 objective functions. We tested them on a wide range of benchmark problems, and they showed notable improvements in terms of convergence and diversity as measured by the Inverted Generational Distance (IGD) and HyperVolume (HV) performance metrics. The EF1-NSGA-III aims to solve the power consumption problem in Centralized Radio Access Networks and the Bi-Objective Minimum Diameter-Cost Spanning Tree problem.
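The crowding-distance diversity measure that the EF1-NSGA-III replaces with reference-point selection is itself simple; a sketch for a front of objective vectors (assuming minimization and at least three points) is:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a non-dominated front of objective
    vectors. Boundary points get infinite distance; interior points sum
    the normalized gap between their neighbours in each objective."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = front[order[-1]][k] - front[order[0]][k] or 1.0  # avoid /0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist
```

Selection then prefers points with larger crowding distance, i.e. points in sparsely populated regions of the front.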
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Kalech, Meir, e Avraham Natan. "Model-Based Diagnosis of Multi-Agent Systems: A Survey". Proceedings of the AAAI Conference on Artificial Intelligence 36, n.º 11 (28 de junho de 2022): 12334–41. http://dx.doi.org/10.1609/aaai.v36i11.21498.

Texto completo da fonte
Resumo:
As systems involving multiple agents are increasingly deployed, there is a growing need to diagnose failures in such systems. Model-Based Diagnosis (MBD) is a well-known AI technique for diagnosing faults in systems. In this approach, a model of the diagnosed system is given, and the real system is observed. A failure is announced when the real system's output contradicts the model's expected output. The model is then used to deduce the defective components that explain the unexpected observation. MBD has increasingly been deployed in distributed and multi-agent systems. In this survey, we summarize twenty years of research in the field of model-based diagnosis algorithms for MAS diagnosis. We identify three attributes that should be considered when examining MAS diagnosis: (1) the objective of the diagnosis, either diagnosing faults in the MAS plans or diagnosing coordination faults; (2) centralized vs. distributed, as the diagnosis method can be applied either by a centralized agent or by the agents in a distributed manner; and (3) temporal vs. non-temporal, where temporal diagnosis addresses the MAS's temporal behaviors, whereas non-temporal diagnosis addresses the conduct based on a single observation. We survey diverse studies in MBD of MAS based on these attributes and provide novel research challenges in this field for the AI community.
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Omran, Sherin M., Wessam H. El-Behaidy e Aliaa A. A. Youssif. "Optimization of Cryptocurrency Algorithmic Trading Strategies Using the Decomposition Approach". Big Data and Cognitive Computing 7, n.º 4 (14 de novembro de 2023): 174. http://dx.doi.org/10.3390/bdcc7040174.

Texto completo da fonte
Resumo:
A cryptocurrency is a non-centralized form of money that facilitates financial transactions using cryptographic processes. It can be thought of as a virtual currency or a payment mechanism for sending and receiving money online. Cryptocurrencies have gained wide market acceptance and rapid development during the past few years. Due to the volatile nature of the crypto-market, cryptocurrency trading involves a high level of risk. In this paper, a new normalized decomposition-based, multi-objective particle swarm optimization (N-MOPSO/D) algorithm is presented for cryptocurrency algorithmic trading. The aim of this algorithm is to help traders find the best Litecoin trading strategies that improve their outcomes. The proposed algorithm is used to manage the trade-offs among three objectives: the return on investment, the Sortino ratio, and the number of trades. A hybrid weight assignment mechanism has also been proposed. It was compared against the trading rules with their standard parameters, MOPSO/D, using normalized weighted Tchebycheff scalarization, and MOEA/D. The proposed algorithm could outperform the counterpart algorithms for benchmark and real-world problems. Results showed that the proposed algorithm is very promising and stable under different market conditions. It could maintain the best returns and risk during both training and testing with a moderate number of trades.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Menighed, Kamel, Joseph Julien Yamé e Issam Chekakta. "A Non-Cooperative Distributed Model Predictive Control Using Laguerre Functions for Large-Scale Interconnected Systems". Journal Européen des Systèmes Automatisés 55, n.º 5 (30 de novembro de 2022): 555–72. http://dx.doi.org/10.18280/jesa.550501.

Texto completo da fonte
Resumo:
This paper presents a new non-cooperative distributed controller for linear large-scale systems, based on designing multiple local Model Predictive Control (MPC) algorithms using Laguerre functions to enhance the global performance of the overall closed-loop system. In this distributed control scheme, which does not require a coordinator, local MPC algorithms may transmit and receive information from other sub-controllers over the communication network while making their control decisions independently of each other. Thanks to the exchanged information, the sub-controllers are thus able to work together in a collaborative manner towards achieving good overall system performance. To drastically decrease the computational load, the optimal control sequence in the resulting small-size optimization problem with a short prediction horizon is tightly approximated with discrete-time Laguerre functions. To evaluate the proposed distributed control framework, a simulation example is presented that shows the effectiveness of the proposed scheme and its applicability to large-scale interconnected systems. The simulation results clearly demonstrate that the proposed Non-Cooperative Distributed MPC (NC-DMPC) outperforms Decentralized MPC (De-MPC) and achieves performance comparable to centralized MPC with reduced computing time.
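The controller above uses discrete-time Laguerre networks, which include a tuning pole; as a simpler, related illustration of the function family involved, the classical Laguerre polynomials satisfy the three-term recurrence (n+1) L_{n+1}(x) = (2n + 1 - x) L_n(x) - n L_{n-1}(x):

```python
def laguerre_polys(order, x):
    """Evaluate the Laguerre polynomials L_0 .. L_order at x using the
    three-term recurrence (n+1) L_{n+1} = (2n + 1 - x) L_n - n L_{n-1}."""
    vals = [1.0, 1.0 - x]          # L_0(x) = 1, L_1(x) = 1 - x
    for n in range(1, order):
        vals.append(((2 * n + 1 - x) * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals[:order + 1]
```

In Laguerre-based MPC, a long control sequence is expressed as a short weighted sum of such basis functions, which is what shrinks the optimization problem.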
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Saak, A. E., e V. V. Kureichik. "On the Quality of Dispatching of Mondrian-Type Arrays in Grid Systems". Informacionnye Tehnologii 28, n.º 10 (18 de outubro de 2022): 514–19. http://dx.doi.org/10.17587/it.28.514-519.

Texto completo da fonte
Resumo:
Grid systems with a centralized architecture and multisite dispatching, characterized by the ability to execute a multiprocessor application on several parallel systems simultaneously, are modeled by a resource quadrant. A user's request for service by the Grid system dispatcher is modeled by a resource rectangle whose horizontal and vertical dimensions equal, respectively, the number of time and processor resource units required to fulfill the request. Due to the exponential complexity of the optimal distribution of computational and time resources, heuristic algorithms of polynomial complexity, based on operations that dynamically integrate resource rectangles into an environment of resource rectangles, are of practical value. The quality of dispatching is evaluated by a non-Euclidean heuristic measure that takes into account the area and shape of the occupied resource region. The quality of dispatching exact-form arrays whose applications require approximately the same work, understood as the product of the number of required processors and the runtime, is analyzed. In this paper, the quality of six polynomial-complexity algorithms is evaluated when dispatching arrays whose applications have approximately the same area, equal to the product of the number of required processors and the execution time of the application. The adaptability of the analyzed algorithms is demonstrated on seven test arrays induced by Mondrian squares. Such arrays contain applications of circular, hyperbolic, and parabolic type. It is shown that the smallest value of the heuristic measure for H-level (by length) algorithms is 0.76, whereas for V-level (by height) algorithms the smallest value is 0.83. At the same time, on the test arrays of resource rectangles under consideration, the H-level (by length) algorithm with minimum deviation has the lowest maximum of the heuristic measure, 0.5 + 0.59.
When servicing exact-form arrays whose applications have approximately the same measure in Grid systems, it is recommended to use the considered polynomial H-level (by length) algorithm with minimal deviation.
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Cordova, Hernan X., e Leo Van Biesen. "A Hybrid Meta-Heuristic Algorithm for Dynamic Spectrum Management in Multiuser Systems". International Journal of Applied Metaheuristic Computing 2, n.º 4 (outubro de 2011): 29–40. http://dx.doi.org/10.4018/jamc.2011100103.

Texto completo da fonte
Resumo:
One of the major sources of performance degradation in current Digital Subscriber Line systems is the electromagnetic coupling among different twisted pairs within the same cable bundle (crosstalk). Several algorithms for dynamic spectrum management have been proposed to counteract the crosstalk effect, but their complexity remains a challenge in practice. Optimal Spectrum Balancing (OSB) is a centralized algorithm that optimally allocates the available transmit power over the tones using a dual decomposition approach, where Lagrange multipliers are used to enforce the constraints and decouple the problem over the tones. However, the overall complexity of this algorithm remains a challenge for practical DSL environments. The authors propose a low-complexity algorithm, based on a combination of simulated annealing and the non-linear simplex method, to find locally (almost globally) optimal spectra for multiuser DSL systems while significantly reducing the prohibitive complexity of traditional OSB. The algorithm assumes a Spectrum Management Center (at the cabinet side), but it relies neither on the end-user modems' own calculations nor on message passing to achieve its performance objective. The approach further reduces the number of function evaluations, shortening the convergence time (by up to ~27%) at a reasonable cost in the objective (weighted data-rate sum).
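The simulated-annealing half of the proposed hybrid accepts uphill moves with probability exp(-Δ/T) under a cooling schedule; a one-dimensional sketch (the geometric cooling and all parameter values are illustrative assumptions, not the paper's settings) is:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=500):
    """Minimize f by simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta / T), and cool T
    geometrically. Returns the best point found and its value."""
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = x + random.uniform(-step, step)   # random neighbour
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        t *= cooling
    return best, fbest
```

In the paper's setting the "point" would be a power spectrum per user and f the (negated) weighted data-rate sum; here a scalar toy objective stands in.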
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Dr. N. Dhanalakshmi, Dr. A. Thomas Paul Roy, Dr D. Suresh,. "A NOVEL PRIVACY PRESERVATION MECHANISM FOR DATA AND USER IN DISTRIBUTED SERVERS". INFORMATION TECHNOLOGY IN INDUSTRY 9, n.º 1 (17 de março de 2021): 1151–56. http://dx.doi.org/10.17762/itii.v9i1.248.

Texto completo da fonte
Resumo:
Advances in sensing and monitoring technology enable location-based applications, but they also create significant privacy risks. Anonymity can provide a high degree of privacy, save service users from dealing with service providers' privacy policies, and reduce the service providers' requirements for safeguarding private information. However, guaranteeing anonymous usage of location-based services requires that the precise location data transmitted by a user cannot easily be used to re-identify the subject. This paper presents a middleware architecture and algorithms that can be used by a centralized location broker service. The adaptive algorithms adjust the resolution of location information along spatial or temporal dimensions to meet specified anonymity constraints, based on the entities that may be using location services within a given area. Using a model based on automotive traffic counts and cartographic material, we estimate the realistically expected spatial resolution for different anonymity constraints. The median resolution generated by our algorithms is 125 meters. Thus, anonymous location-based requests for urban areas would have the same accuracy currently needed for E-911 services; this would provide sufficient resolution for wayfinding, automated bus routing services, and similar location-dependent services.
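Reducing spatial resolution as described can be illustrated by snapping a position to the centre of a grid tile; the 125 m default echoes the median resolution reported above, and the metre-based planar coordinates are an assumption of this sketch, not the paper's adaptive quadtree-style mechanism:

```python
def cloak(x_m, y_m, cell=125.0):
    """Coarsen a position (in metres, planar coordinates) by snapping it to
    the centre of its cell x cell grid tile, so all users inside one tile
    report the same location."""
    return ((x_m // cell) * cell + cell / 2,
            (y_m // cell) * cell + cell / 2)
```

Any two users within the same tile become indistinguishable to the location service; an adaptive cloaker would grow `cell` until enough users share the tile.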
Estilos ABNT, Harvard, Vancouver, APA, etc.
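The core of such adaptive spatial cloaking can be sketched in a few lines: instead of the exact position, report the smallest grid cell around the user that contains at least k users, trading spatial resolution for k-anonymity. The power-of-two grid, world size, and user coordinates below are made-up illustrations, not the paper's quadtree.

```python
# Adaptive spatial cloaking: coarsen the reported cell until it holds
# at least k users (including the subject), giving k-anonymity.
def cloak(x, y, users, k, world=1024.0):
    """Return (cell_x, cell_y, cell_size) of the smallest power-of-two
    cell around (x, y) that holds at least k of the given users."""
    size = 1.0
    while size <= world:
        cx, cy = (x // size) * size, (y // size) * size
        inside = sum(1 for (ux, uy) in users
                     if cx <= ux < cx + size and cy <= uy < cy + size)
        if inside >= k:
            return cx, cy, size
        size *= 2.0                  # coarsen: halve the resolution
    return 0.0, 0.0, world           # fall back to the whole area

users = [(10.0, 10.0), (11.0, 12.0), (300.0, 9.0), (12.0, 400.0)]
```

With k=1 the subject's own cell suffices; raising k forces a larger, less precise cell, which is exactly the resolution/anonymity trade-off the abstract describes.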
38

Bnaya, Zahy, Roni Stern, Ariel Felner, Roie Zivan and Steven Okamoto. "Multi-Agent Path Finding for Self Interested Agents". Proceedings of the International Symposium on Combinatorial Search 4, no. 1 (August 20, 2021): 38–46. http://dx.doi.org/10.1609/socs.v4i1.18292.

Full text of the source
Abstract:
Multi-agent pathfinding (MAPF) deals with planning paths for individual agents such that a global cost function (e.g., the sum of costs) is minimized while avoiding collisions between agents. Previous work proposed centralized or fully cooperative decentralized algorithms, assuming that agents will follow the paths assigned to them. When agents are self-interested, however, they are expected to follow a path only if they consider that path to be their most beneficial option. In this paper we propose the use of a taxation scheme to implicitly coordinate self-interested agents in MAPF. We propose several taxation schemes and compare them experimentally. We show that intelligent taxation schemes can result in a lower total cost than the non-coordinated scheme, even when taking into consideration both the travel cost and the taxes paid by agents.
ABNT, Harvard, Vancouver, APA, etc. styles
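The coordination effect of a tax can be illustrated on a toy two-agent, two-path example: each self-interested agent independently picks the path with the lowest private cost (travel plus tax), and a tax on the contested short path avoids the collision. The costs, delay, and tax value are invented; this is not one of the paper's actual taxation schemes.

```python
# Two agents, two path options. Choosing selfishly and simultaneously,
# both take the short path and collide; a tax on the short path steers
# them to the cheaper joint outcome. All numbers are invented.
SHORT, LONG = 3.0, 5.0
COLLISION_DELAY = 6.0   # extra travel cost per agent if both pick "short"

def choose(tax_short):
    """Each self-interested agent independently picks the path with the
    lowest private cost (travel + tax), without observing the other."""
    pick = "short" if SHORT + tax_short <= LONG else "long"
    return [pick, pick]

def total_travel(picks):
    """System cost: travel only (taxes are transfers, not real cost)."""
    cost = sum(SHORT if p == "short" else LONG for p in picks)
    if picks.count("short") == 2:
        cost += 2 * COLLISION_DELAY
    return cost
```

Without a tax both agents choose the short path and the conflict delay dominates; with a tax above the 2.0 cost gap they split off it, lowering the total travel cost even before taxes are refunded or redistributed.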
39

Xue, Wanqi, Bo An and Chai Kiat Yeo. "NSGZero: Efficiently Learning Non-exploitable Policy in Large-Scale Network Security Games with Neural Monte Carlo Tree Search". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4646–53. http://dx.doi.org/10.1609/aaai.v36i4.20389.

Full text of the source
Abstract:
How resources are deployed to secure critical targets in networks can be modelled by Network Security Games (NSGs). While recent advances in deep learning (DL) provide a powerful approach to dealing with large-scale NSGs, DL methods such as NSG-NFSP suffer from the problem of data inefficiency. Furthermore, due to centralized control, they cannot scale to scenarios with a large number of resources. In this paper, we propose a novel DL-based method, NSGZero, to learn a non-exploitable policy in NSGs. NSGZero improves data efficiency by performing planning with neural Monte Carlo Tree Search (MCTS). Our main contributions are threefold. First, we design deep neural networks (DNNs) to perform neural MCTS in NSGs. Second, we enable neural MCTS with decentralized control, making NSGZero applicable to NSGs with many resources. Third, we provide an efficient learning paradigm to achieve joint training of the DNNs in NSGZero. Compared to state-of-the-art algorithms, our method achieves significantly better data efficiency and scalability.
ABNT, Harvard, Vancouver, APA, etc. styles
40

Nguyen, Hoa Dinh. "A data-driven framework for remaining useful life estimation". Vietnam Journal of Science and Technology 55, no. 5 (October 20, 2017): 557. http://dx.doi.org/10.15625/2525-2518/55/5/8582.

Full text of the source
Abstract:
Remaining useful life (RUL) estimation is one of the most common tasks in the field of prognostics and structural health management. The aim of this research is to estimate the remaining useful life of an unspecified complex system using data-driven approaches. The approaches are suitable for problems in which a data library of complete runs of a system is available. Given an incomplete run of the system, the RUL can be predicted using these approaches. Three main RUL prediction algorithms, covering centralized data processing, decentralized data processing, and a combination of the two, are introduced and evaluated using the data of the PHM'08 Challenge Problem. The methods involve the use of other data processing techniques, including wavelet denoising and similarity search. Experimental results show that all of the approaches are effective in performing RUL prediction.
ABNT, Harvard, Vancouver, APA, etc. styles
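The similarity-search flavour of RUL estimation described above can be sketched directly: slide the observed partial run over each complete run-to-failure curve in the library, take the best-matching offset, and read off the cycles remaining after the match. The degradation curves below are fabricated toy data, not PHM'08 values.

```python
# Similarity-based RUL estimation against a library of complete
# run-to-failure curves.
def rul_estimate(partial, library):
    remaining = []
    n = len(partial)
    for run in library:
        # offset minimizing squared distance between partial run and window
        off = min(range(len(run) - n + 1),
                  key=lambda o: sum((run[o + i] - partial[i]) ** 2
                                    for i in range(n)))
        remaining.append(len(run) - (off + n))   # cycles left after the match
    return sum(remaining) / len(remaining)       # average over the library

library = [[0, 1, 2, 3, 4, 5, 6, 7],   # unit fails at cycle 8
           [0, 2, 4, 6, 8]]            # unit fails at cycle 5
partial = [2, 3, 4]                    # current, incomplete run
```

In practice the signals would first be denoised (e.g. with wavelets, as the abstract mentions) and the library matches could be weighted by similarity instead of averaged uniformly.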
41

Saak, A. E., and V. V. Kureichik. "Dispatching Arrays with Tasks of Equal Resource Measure in GRID Systems". Informacionnye Tehnologii 28, no. 12 (December 14, 2022): 663–69. http://dx.doi.org/10.17587/it.28.663-669.

Full text of the source
Abstract:
The role of computing and time resource management of parallel systems when dispatching multiprocessor tasks increases significantly in Grid systems consisting of multiple sites with multiprocessor computing systems, with multisite execution of a parallel task on several sites simultaneously. Such Grid systems of centralized architecture are modeled by the resource quadrant. A user's request for service by the Grid system dispatcher is modeled by a resource rectangle whose horizontal and vertical dimensions are, respectively, equal to the number of time and processor resource units required to fulfill the request. Examples of the exponential complexity of dispatching resource rectangles are the packing of consecutive resource squares, and of consecutive resource rectangles of equal perimeter, into an enclosing rectangle of minimum area. Heuristic algorithms of polynomial complexity, based on operations of dynamic integration of resource rectangles in the environment of resource rectangles, are of practical value. The quality of dispatching is assessed by a non-Euclidean heuristic measure that takes into account the area and shape of the occupied resource region. The quality of dispatching arrays with requests of the same resource measure, equal to the product of the number of required processors and the runtime, is analyzed. In this paper, the quality of three polynomial level algorithms in height and three polynomial level algorithms in length (with the nearest approach to the level, with exceeding of the level, and with minimum deviation) is evaluated when dispatching arrays with requests of equal resource measure. Such arrays belong to the parabolic quadratic type and contain user tasks of circular, hyperbolic and parabolic type. On five test arrays, it is shown that the H-finite-level algorithms with the nearest approach to the level and with minimum deviation have the smallest maximum value of the heuristic measure, 8.45.
When servicing arrays with user tasks of the same resource measure in Grid systems, it is recommended to use the polynomial H-finite-level algorithm in length with the nearest approach to the level, which is introduced in this paper.
ABNT, Harvard, Vancouver, APA, etc. styles
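The general shape of a level algorithm for such resource rectangles can be sketched with the classic next-fit-by-level scheme: rectangles (time units wide, processor units tall) are placed left to right on the current level, and a new level opens at the height of the tallest placed item when the next rectangle does not fit. This is the generic textbook scheme, not the paper's specific H-finite-level variants; the strip width and item sizes are arbitrary.

```python
# Next-fit level packing of resource rectangles (width = time units,
# height = processor units) into a strip of fixed width.
def level_pack(rects, strip_width):
    """Return the total height used by a next-fit-by-level placement."""
    x = 0.0            # current horizontal position on the level
    level_y = 0.0      # bottom of the current level
    level_h = 0.0      # height of the tallest rectangle on the level
    for w, h in rects:
        if x + w > strip_width:      # rectangle doesn't fit: open a new level
            level_y += level_h
            x, level_h = 0.0, 0.0
        x += w
        level_h = max(level_h, h)
    return level_y + level_h

rects = [(4, 2), (3, 3), (5, 1), (2, 2)]
```

The paper's variants differ in how a rectangle is matched to an existing level (nearest approach, exceeding, minimum deviation) rather than in this basic level structure.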
42

Wohwe Sambo, Damien, Blaise Yenke, Anna Förster and Paul Dayang. "Optimized Clustering Algorithms for Large Wireless Sensor Networks: A Review". Sensors 19, no. 2 (January 15, 2019): 322. http://dx.doi.org/10.3390/s19020322.

Full text of the source
Abstract:
During the past few years, Wireless Sensor Networks (WSNs) have become widely used due to their large number of applications. The use of WSNs is an imperative necessity for future revolutionary areas like ecological fields or smart cities, in which hundreds or thousands of sensor nodes are deployed. In those large-scale WSNs, hierarchical approaches improve the performance of the network and increase its lifetime. Hierarchy inside a WSN consists of dividing the whole network into sub-networks called clusters, each led by a Cluster Head. In spite of the advantages of clustering for large WSNs, it remains an NP-hard problem that is not solved efficiently by traditional clustering. Recent research on Machine Learning, Computational Intelligence, and WSNs has brought out optimized clustering algorithms for WSNs. These kinds of clustering are based on environmental behaviors and outperform the traditional clustering algorithms. However, due to the diversity of WSN applications, the choice of an appropriate paradigm for a clustering solution remains a problem. In this paper, we conduct a wide review of the optimized clustering solutions proposed to date. In order to evaluate them, we consider 10 parameters. Based on these parameters, we propose a comparison of these optimized clustering approaches. From the analysis, we observe that centralized clustering solutions based on the Swarm Intelligence paradigm are better adapted to applications with low energy consumption, high data delivery rate, or high scalability than algorithms based on the other presented paradigms. Moreover, when an application does not need a large number of nodes within a field, Fuzzy Logic-based solutions are suitable.
ABNT, Harvard, Vancouver, APA, etc. styles
43

Xu, Xiaosa, Wen-Kang Jia, Yi Wu and Xufang Wang. "On the Optimal Lawful Intercept Access Points Placement Problem in Hybrid Software-Defined Networks". Sensors 21, no. 2 (January 9, 2021): 428. http://dx.doi.org/10.3390/s21020428.

Full text of the source
Abstract:
For law enforcement agencies, lawful interception is still one of the main means of monitoring a suspect or addressing illegal actions. Owing to centralized management, it is easy to implement in traditional networks, but the cost is high. In view of this restriction, this paper aims to exploit software-defined network (SDN) technology to contribute to the next generation of intelligent lawful interception technology, i.e., to optimize the deployment of intercept access points (IAPs) in hybrid software-defined networks where both SDN nodes and non-SDN nodes exist simultaneously. In order to deploy IAPs, this paper puts forward an improved equal-cost multi-path shortest-path algorithm and accordingly proposes three SDN interception models: the T interception model, the ECMP-T interception model and the Fermat-point interception model. Considering the location relevance of all intercepted targets and the operation and maintenance cost of operators from a global perspective, we further propose a restrictive minimum vertex cover algorithm (RMVCA) for hybrid SDN. Implementing the different RMVCA-based SDN interception algorithms in real-world topologies, we can reasonably deploy the best intercept access points and intercept the whole hybrid SDN with the fewest SDN nodes, as well as significantly optimize the deployment efficiency of IAPs and improve intercept link coverage in hybrid SDN, contributing to the implementation of lawful interception.
ABNT, Harvard, Vancouver, APA, etc. styles
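The abstract does not spell out RMVCA itself, but the underlying idea, choosing the fewest SDN-capable nodes so that every link to be intercepted touches at least one of them, is a vertex cover restricted to candidate nodes. A standard greedy heuristic for that restricted problem can be sketched as follows; the toy topology and candidate set are invented.

```python
# Greedy vertex cover restricted to SDN-capable candidate nodes:
# repeatedly pick the candidate that covers the most uncovered links.
def greedy_restricted_cover(edges, candidates):
    uncovered = set(edges)
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda v: sum(1 for e in uncovered if v in e))
        hit = {e for e in uncovered if best in e}
        if not hit:
            raise ValueError("candidates cannot cover all links")
        chosen.append(best)
        uncovered -= hit
    return chosen

edges = [("a", "b"), ("b", "c"), ("b", "d"), ("d", "e")]
candidates = ["b", "d", "e"]   # nodes where an IAP could be placed
```

Greedy cover is a well-known approximation; the paper's RMVCA additionally weighs operator cost and target locality, which this sketch omits.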
45

Yu, Hui, and Ying Xia. "An Energy Saving Control Strategy Based on Multi-Agent Q-Learning Algorithm for Data Center". Journal of Physics: Conference Series 2517, no. 1 (June 1, 2023): 012018. http://dx.doi.org/10.1088/1742-6596/2517/1/012018.

Full text of the source
Abstract:
Abstract In recent years, the application of green renewable energy to data centers has become an important trend. Traditional solutions lack consideration of matching tasks to renewable energy supplies. Therefore, in the face of diverse real-time computing tasks, how to reduce the total energy cost while ensuring quality of service is an important challenge for future data centers. In this paper, we focus on using information on renewable energy supply and task characteristics as input states to assign tasks so as to maximize user satisfaction while minimizing the total cost of energy consumption. We consider the diversity of real-time tasks and design three different task types: the most crucial task, the crucial task and the non-crucial task. According to the different characteristics of these tasks, we propose a multi-agent-based scheduling algorithm, which uses multiple sets of agents with different initial positions to search in parallel in different dimensions of the parameter space to find the optimal solution. To further optimize the algorithm, we eliminate centralized noise solutions based on the Pareto sorting method and sort the multiple optimal solutions to highlight the most suitable one. The experimental results show that, compared with other algorithms, the proposed algorithm reduces total energy consumption by 11% and increases customer satisfaction by 13% on average, and has better performance and applicability.
ABNT, Harvard, Vancouver, APA, etc. styles
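The Pareto-sorting step used to filter candidate schedules, keeping only solutions that no other solution beats on both energy cost (lower is better) and satisfaction (higher is better), can be sketched directly; the candidate points are made up.

```python
# Keep the Pareto front of candidate schedules scored as
# (energy_cost, satisfaction): a solution is dominated if another is
# no worse on both objectives and strictly better on at least one.
def pareto_front(points):
    def dominates(a, b):
        return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (energy_cost, satisfaction) of candidate schedules (invented values)
candidates = [(10, 0.9), (12, 0.95), (11, 0.85), (10, 0.95), (15, 0.7)]
```

Dominated candidates, including the noisy outlier (15, 0.7), are discarded; the surviving front is then ranked to pick the most suitable schedule, as the abstract describes.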
46

Wang, Yu, Ke Fu, Hao Chen, Quan Liu, Jian Huang and Zhongjie Zhang. "Efficiently Detecting Non-Stationary Opponents: A Bayesian Policy Reuse Approach under Partial Observability". Applied Sciences 12, no. 14 (July 8, 2022): 6953. http://dx.doi.org/10.3390/app12146953.

Full text of the source
Abstract:
In multi-agent domains, dealing with non-stationary opponents that change behaviors (policies) consistently over time is still a challenging problem, where an agent usually requires the ability to detect the opponent's policy accurately and adopt the optimal response policy accordingly. Previous works commonly assume that the opponent's observations and actions during online interactions are known, which can significantly limit their applications, especially in partially observable environments. This paper focuses on efficient policy detection and reuse techniques against non-stationary opponents without access to their local information. We propose an algorithm called Bayesian policy reuse with LocAl oBservations (Bayes-Lab) by incorporating variational autoencoders (VAE) into the Bayesian policy reuse (BPR) framework. Following the centralized training with decentralized execution (CTDE) paradigm, we train a VAE as an opponent model during the offline phase to extract the latent relationship between the agent's local observations and the opponent's local observations. During online execution, the trained opponent models are used to reconstruct the opponent's local observations, which can be combined with episodic rewards to update the belief about the opponent's policy. Finally, the agent reuses the best response policy based on the updated belief to improve online performance. We demonstrate that Bayes-Lab outperforms existing state-of-the-art methods in terms of detection accuracy, accumulative rewards, and episodic rewards in a predator–prey scenario. In this environment, Bayes-Lab achieves about 80% detection accuracy and the highest accumulative rewards, and its performance is less affected by the opponent's policy switching interval. When the switching interval is less than 10, its detection accuracy is at least 10% higher than that of other algorithms.
ABNT, Harvard, Vancouver, APA, etc. styles
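The Bayesian belief update at the heart of BPR-style methods, reweighting a prior over known opponent policies by how likely each one makes the observed episodic reward, can be sketched as follows. The Gaussian reward models per policy are an illustrative assumption, standing in for the paper's VAE-based signals.

```python
import math

# Belief over opponent policies, updated from an observed episodic reward.
# Each policy is modelled by a Gaussian reward likelihood (mu, sigma).
def update_belief(belief, reward, models):
    def likelihood(mu, sigma):
        return math.exp(-0.5 * ((reward - mu) / sigma) ** 2) / sigma
    post = {k: belief[k] * likelihood(*models[k]) for k in belief}
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

# Hypothetical reward models for two opponent policies.
models = {"aggressive": (2.0, 1.0), "defensive": (8.0, 1.0)}
belief = {"aggressive": 0.5, "defensive": 0.5}
belief = update_belief(belief, reward=7.5, models=models)
```

A reward near 8 makes the "defensive" hypothesis dominate the posterior, after which the agent would switch to its stored best response against that policy.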
47

Levac, Brett R., Marius Arvinte and Jonathan I. Tamir. "Federated End-to-End Unrolled Models for Magnetic Resonance Image Reconstruction". Bioengineering 10, no. 3 (March 16, 2023): 364. http://dx.doi.org/10.3390/bioengineering10030364.

Full text of the source
Abstract:
Image reconstruction is the process of recovering an image from raw, under-sampled signal measurements, and is a critical step in diagnostic medical imaging, such as magnetic resonance imaging (MRI). Recently, data-driven methods have led to improved image quality in MRI reconstruction using a limited number of measurements, but these methods typically rely on the existence of a large, centralized database of fully sampled scans for training. In this work, we investigate federated learning for MRI reconstruction using end-to-end unrolled deep learning models as a means of training global models across multiple clients (data sites), while keeping individual scans local. We empirically identify a low-data regime across a large number of heterogeneous scans, where a small number of training samples per client are available and non-collaborative models lead to performance drops. In this regime, we investigate the performance of adaptive federated optimization algorithms as a function of client data distribution and communication budget. Experimental results show that adaptive optimization algorithms are well suited for the federated learning of unrolled models, even in a limited-data regime (50 slices per data site), and that client-sided personalization can improve reconstruction quality for clients that did not participate in training.
ABNT, Harvard, Vancouver, APA, etc. styles
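The baseline federated step that such experiments build on, averaging client model updates weighted by local dataset size so that raw scans never leave a site, can be sketched as plain FedAvg (not the adaptive federated optimizers studied in the paper; the weights and sample counts are toy values).

```python
# Federated averaging of model weights: each client trains locally,
# the server combines the weights proportionally to local sample counts.
def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Three sites with different amounts of local data (made-up numbers).
weights = [[1.0, 0.0], [3.0, 2.0], [5.0, 4.0]]
sizes = [10, 10, 20]
global_w = fed_avg(weights, sizes)
```

Adaptive federated optimizers replace this plain average with server-side momentum or Adam-style updates, which is what the paper evaluates in the low-data, heterogeneous regime.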
48

Isnawati, Anggun Fitrian, Risanuri Hidayat, Selo Sulistyo and I. Wayan Mustika. "Feasibility of Power Control for Multi-Channel User in Inter-Femtocell Network". International Journal of Electrical and Computer Engineering (IJECE) 6, no. 4 (August 1, 2016): 1685. http://dx.doi.org/10.11591/ijece.v6i4.10210.

Full text of the source
Abstract:
The importance of power control feasibility is closely related to direct implementation of the system: it concerns interference between users in the femtocell network and the optimal use of transmit power, which can prolong battery life. In this study, the feasibility of power control is investigated for a centralized femtocell network with multi-channel users. The research method is based on feasible-solution algorithms for power control, observing that the resulting power vector must be non-negative for the solution to be implementable. The results indicate that all users can reach the specified target SINR. Users' SINR increases when additional channels are provided to user groups, and the average user power decreases as the number of provided channels increases. The greater the number of users in a user group, the lower the attainable SINR.
ABNT, Harvard, Vancouver, APA, etc. styles
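The feasibility test described, computing the power vector that meets every user's target SINR and checking that it stays non-negative and finite, can be sketched for the classical interference model p = F p + u via fixed-point iteration. The gains, noise, and targets below are illustrative, not values from the paper.

```python
# Fixed-point iteration for the classical power-control model p = F p + u:
# each user scales its power to meet its target SINR given current
# interference. A convergent, non-negative power vector means the target
# SINRs are jointly feasible; divergence means they are not.
def solve_power(G, targets, noise, iters=500, cap=1e6):
    """G[i][j]: link gain from transmitter j to receiver i."""
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        p = [targets[i] * (noise + sum(G[i][j] * p[j] for j in range(n) if j != i))
             / G[i][i] for i in range(n)]
        if any(x > cap for x in p):      # diverging: targets infeasible
            return None
    return p

G = [[1.0, 0.1], [0.1, 1.0]]
noise = 0.1
p = solve_power(G, targets=[2.0, 2.0], noise=noise)
```

For these modest targets the iteration converges to a small non-negative power vector that meets the SINR targets exactly, while overly ambitious targets make the interference feedback diverge.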
49

Isnawati, Anggun Fitrian, Risanuri Hidayat, Selo Sulistyo and I. Wayan Mustika. "Feasibility of Power Control for Multi-Channel User in Inter-Femtocell Network". International Journal of Electrical and Computer Engineering (IJECE) 6, no. 4 (August 1, 2016): 1685. http://dx.doi.org/10.11591/ijece.v6i4.pp1685-1694.

Full text of the source
Abstract:
The importance of power control feasibility is closely related to direct implementation of the system: it concerns interference between users in the femtocell network and the optimal use of transmit power, which can prolong battery life. In this study, the feasibility of power control is investigated for a centralized femtocell network with multi-channel users. The research method is based on feasible-solution algorithms for power control, observing that the resulting power vector must be non-negative for the solution to be implementable. The results indicate that all users can reach the specified target SINR. Users' SINR increases when additional channels are provided to user groups, and the average user power decreases as the number of provided channels increases. The greater the number of users in a user group, the lower the attainable SINR.
ABNT, Harvard, Vancouver, APA, etc. styles
50

Xu, Gang, De-Lun Kong, Xiu-Bo Chen and Xin Liu. "Lazy Aggregation for Heterogeneous Federated Learning". Applied Sciences 12, no. 17 (August 25, 2022): 8515. http://dx.doi.org/10.3390/app12178515.

Full text of the source
Abstract:
Federated learning (FL) is a distributed neural network training paradigm with privacy protection. Under the premise that local data are not leaked, multiple devices cooperate to train the model and improve its generalization. Unlike centralized training, FL is susceptible to heterogeneous data: biased gradient estimations hinder the convergence of the global model, and traditional sampling techniques cannot be applied in FL due to privacy constraints. Therefore, this paper proposes a novel FL framework, federated lazy aggregation (FedLA), which reduces the aggregation frequency to obtain high-quality gradients and improve robustness in non-IID settings. To judge the aggregation timing, the change rate of the models' weight divergence (WDR) is introduced into FL. Furthermore, the collected gradients also help FL escape saddle points without extra communication. The cross-device momentum (CDM) mechanism can significantly improve the upper-limit performance of the global model in non-IID settings. We evaluate the performance of several popular algorithms, including FedLA and FedLA with momentum (FedLAM). The results show that FedLAM achieves the best performance in most scenarios, and the performance of FL can also be improved in IID scenarios.
ABNT, Harvard, Vancouver, APA, etc. styles
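A lazy-aggregation trigger of this kind can be illustrated in miniature: the server tracks how far client models have drifted apart (weight divergence) and aggregates only when the relative change of that divergence is large. The scalar divergence measure and threshold below are simplified stand-ins for the paper's WDR criterion, not its actual definition.

```python
# Lazy aggregation trigger: aggregate only when the weight divergence
# across clients has changed substantially, reducing aggregation
# (and hence communication) frequency.
def divergence(client_weights):
    """Mean absolute deviation of scalar client weights from their mean."""
    avg = sum(client_weights) / len(client_weights)
    return sum(abs(w - avg) for w in client_weights) / len(client_weights)

def should_aggregate(prev_div, cur_div, threshold):
    """Aggregate when divergence changed by more than `threshold`
    relative to its previous value."""
    if prev_div == 0.0:
        return True
    return abs(cur_div - prev_div) / prev_div > threshold

round_a = [1.0, 1.1, 0.9]   # clients still close together
round_b = [1.0, 2.0, 0.0]   # clients drifting apart on non-IID data
```

While divergence is stable the server skips aggregation and lets clients keep training locally; a jump in divergence (as between the two rounds above) triggers an aggregation round.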

Go to the bibliography