Follow this link to see other types of publications on the topic: Multi-Task agent.

Journal articles on the topic "Multi-Task agent"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Multi-Task agent".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in pdf format and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Wu, Xiaohu, Yihao Liu, Xueyan Tang, Wentong Cai, Funing Bai, Gilbert Khonstantine, and Guopeng Zhao. "Multi-Agent Pickup and Delivery with Task Deadlines". Proceedings of the International Symposium on Combinatorial Search 12, no. 1 (July 21, 2021): 206–8. http://dx.doi.org/10.1609/socs.v12i1.18585.

Abstract
We study the multi-agent pickup and delivery problem with task deadlines, where a team of agents execute tasks with individual deadlines to maximize the number of tasks completed by their deadlines. We take an integrated approach that assigns and plans one task at a time taking into account the agent states resulting from all the previous task assignments and path planning. We define metrics to effectively determine which agent ought to execute a given task and which task is most worth assignment next. We leverage the bounding technique to greatly improve the computational efficiency.
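The integrated assign-and-plan loop described above can be sketched as a greedy procedure. This is a toy illustration under invented assumptions, not the authors' implementation: agents and tasks are placed on a line, travel time equals distance, and the assignment metric is simply the earliest feasible completion time.

```python
# Sketch: assign one task at a time, accounting for the agent states left
# behind by all previous assignments (hypothetical 1-D world, unit speed).
from dataclasses import dataclass

@dataclass
class Agent:
    pos: int
    free_at: int = 0  # time at which the agent finishes its current plan

def completion_time(agent, pickup, delivery):
    # Finish time if this agent appends the task to its current plan.
    return agent.free_at + abs(agent.pos - pickup) + abs(pickup - delivery)

def assign_tasks(agents, tasks):
    """Greedily pick the (task, agent) pair with the earliest feasible
    finish time, commit it, update the agent's state, and repeat."""
    done, pending = [], list(tasks)
    while pending:
        best = None
        for t in pending:  # t = (pickup, delivery, deadline)
            for a in agents:
                finish = completion_time(a, t[0], t[1])
                if finish <= t[2] and (best is None or finish < best[0]):
                    best = (finish, t, a)
        if best is None:
            break  # no remaining task can still meet its deadline
        finish, t, a = best
        a.free_at, a.pos = finish, t[1]
        pending.remove(t)
        done.append(t)
    return done

agents = [Agent(pos=0), Agent(pos=10)]
tasks = [(2, 4, 10), (9, 6, 8), (0, 1, 2)]
print(len(assign_tasks(agents, tasks)))  # → 3 (all deadlines met)
```

The paper's bounding technique for pruning this search is not reproduced here; the sketch only shows the one-task-at-a-time structure.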
2

Surynek, Pavel. "Multi-Goal Multi-Agent Path Finding via Decoupled and Integrated Goal Vertex Ordering". Proceedings of the International Symposium on Combinatorial Search 12, no. 1 (July 21, 2021): 197–99. http://dx.doi.org/10.1609/socs.v12i1.18582.

Abstract
We introduce multi-goal multi-agent path finding (MG-MAPF), which generalizes the standard discrete multi-agent path finding (MAPF) problem. While the task in MAPF is to navigate agents in an undirected graph from their starting vertices to one individual goal vertex per agent, MG-MAPF assigns each agent multiple goal vertices and the task is to visit each of them at least once. Solving MG-MAPF not only requires finding collision-free paths for individual agents but also determining the order of visiting each agent's goal vertices so that common objectives like the sum-of-costs are optimized.
3

Xie, Bing, Xueqiang Gu, Jing Chen, and LinCheng Shen. "A multi-responsibility–oriented coalition formation framework for dynamic task allocation in mobile–distributed multi-agent systems". International Journal of Advanced Robotic Systems 15, no. 6 (November 1, 2018): 172988141881303. http://dx.doi.org/10.1177/1729881418813037.

Abstract
In this article, we study a problem of dynamic task allocation with multiple agent responsibilities in distributed multi-agent systems. Agents in this research have two responsibilities: communication and task execution. Movements during agent task execution bring changes to the system network structure, which affects communication. Thus, agents need to be autonomous in communication network reconstruction for good performance in task execution. First, we analyze the relationships between the two responsibilities of agents. Then, we design a multi-responsibility–oriented coalition formation framework for dynamic task allocation with two parts, namely task execution and self-adaptive communication. For the former part, we integrate our formerly proposed algorithm into the framework for task execution coalition formation. For the latter part, we develop a constrained Bayesian overlapping coalition game model to formulate the communication network. A task-allocation efficiency–oriented communication coalition utility function is defined to optimize a coalition structure for the constrained Bayesian overlapping coalition game model. Considering the geographical location dependence between the two responsibilities, we define constrained agent strategies to map agent strategies to potential location choices. Based on this design, we propose a distributed location pruning self-adaptive algorithm for constrained Bayesian overlapping coalition formation. Finally, we test the performance of our multi-responsibility–oriented coalition formation framework with simulation experiments. Experimental results demonstrate that the framework performs better than the other two distributed algorithms on task completion rate (by over 9.4% and over 65% on average, respectively).
4

Pei, Zhaoyi, Songhao Piao, Meixiang Quan, Muhammad Zuhair Qadir, and Guo Li. "Active collaboration in relative observation for multi-agent visual simultaneous localization and mapping based on Deep Q Network". International Journal of Advanced Robotic Systems 17, no. 2 (March 1, 2020): 172988142092021. http://dx.doi.org/10.1177/1729881420920216.

Abstract
This article proposes a unique active relative localization mechanism for multi-agent simultaneous localization and mapping, in which an agent to be observed is considered as a task, and the others who want to assist that agent perform that task by relative observation. A task allocation algorithm based on deep reinforcement learning is proposed for this mechanism. Each agent can choose whether to localize other agents or to continue independent simultaneous localization and mapping on its own initiative. In this way, each agent's simultaneous localization and mapping process is shaped by the collaboration. Firstly, a unique observation function which models the whole multi-agent system is obtained based on ORBSLAM. Secondly, a novel type of Deep Q Network called multi-agent system Deep Q Network (MAS-DQN) is deployed to learn the correspondence between Q values and state–action pairs, and an abstract representation of agents in the multi-agent system is learned in the process of collaboration among agents. Finally, each agent must act with a certain degree of freedom according to MAS-DQN. The simulation results of comparative experiments prove that this mechanism improves the efficiency of cooperation in the process of multi-agent simultaneous localization and mapping.
5

Thiele, Veikko. "Task-specific abilities in multi-task principal–agent relationships". Labour Economics 17, no. 4 (August 2010): 690–98. http://dx.doi.org/10.1016/j.labeco.2009.12.003.

6

Nedelmann, Déborah Conforto, Jérôme Lacan, and Caroline P. C. Chanel. "SKATE: Successive Rank-based Task Assignment for Proactive Online Planning". Proceedings of the International Conference on Automated Planning and Scheduling 34 (May 30, 2024): 396–404. http://dx.doi.org/10.1609/icaps.v34i1.31499.

Abstract
The development of online applications for services such as package delivery, crowdsourcing, or taxi dispatching has drawn the attention of the research community to the domain of online multi-agent multi-task allocation. In online service applications, tasks (or requests) to be performed arrive over time and need to be dynamically assigned to agents. Such planning problems are challenging because: (i) little or no information about future tasks is available for long-term reasoning; (ii) the number of agents, as well as the number of tasks, can be very high; and (iii) an efficient solution has to be reached in a limited amount of time. In this paper, we propose SKATE, a successive rank-based task assignment algorithm for online multi-agent planning. SKATE can be seen as a meta-heuristic approach which successively assigns a task to the best-ranked agent until all tasks have been assigned. We assessed the complexity of SKATE and showed it is cubic in the number of agents and tasks. To investigate how multi-agent multi-task assignment algorithms perform under a high number of agents and tasks, we compare three multi-task assignment methods in synthetic and real data benchmark environments: Integer Linear Programming (ILP), Genetic Algorithm (GA), and SKATE. In addition, a proactive approach is nested into all methods to determine near-future available agents (resources) using a receding horizon. Based on the results obtained, we can argue that classical ILP offers better-quality solutions when treating a low number of agents and tasks (i.e., low load), despite the receding-horizon size, while it struggles to respect the time constraint under high load. SKATE performs better than the other methods under high-load conditions, and even better when a variable receding horizon is used.
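The successive rank-based idea (assign each task to the best-ranked agent, one at a time) can be sketched as follows. The ranking metric here is plain Euclidean distance and agents are simply consumed once assigned; both are assumptions of this sketch, not details from the paper.

```python
import math

def skate_style_assign(agents, tasks):
    """Successively assign each task to its best-ranked available agent.
    agents: dict name -> (x, y) position; tasks: list of (x, y) locations."""
    assignment = {}
    available = dict(agents)
    for i, task in enumerate(tasks):
        if not available:
            break  # more tasks than agents in this simplified sketch
        # Rank the available agents for this task; lower distance = better rank.
        best = min(available, key=lambda a: math.dist(available[a], task))
        assignment[i] = best
        del available[best]  # agent is busy until re-released
    return assignment

agents = {"a1": (0.0, 0.0), "a2": (5.0, 5.0)}
tasks = [(4.0, 4.0), (1.0, 0.0)]
print(skate_style_assign(agents, tasks))  # → {0: 'a2', 1: 'a1'}
```

In the paper's online setting, agents are released again as tasks complete and a receding horizon predicts near-future availability; this sketch shows only the core successive-ranking loop.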
7

Rodiah, Iis, Medria Kusuma Dewi Hardhienata, Agus Buono, and Karlisa Priandana. "Ant Colony Optimization Modelling for Task Allocation in Multi-Agent System for Multi-Target". Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 6, no. 6 (December 27, 2022): 911–22. http://dx.doi.org/10.29207/resti.v6i6.4201.

Abstract
Task allocation in a multi-agent system can be defined as the problem of allocating a number of agents to tasks. One of the difficulties in task allocation is optimizing the allocation of heterogeneous agents when there are multiple tasks that require several capabilities. To solve that problem, this research aims to modify the Ant Colony Optimization (ACO) algorithm so that it can be employed for solving task allocation problems with multiple tasks. In this research, we optimize the performance of the algorithm by minimizing the task completion cost as well as the number of overlapping agents. We also maximize the overall system capabilities in order to increase efficiency. Simulation results show that the modified ACO algorithm significantly decreases the overall task completion cost as well as the overlapping-agents factor compared to the benchmark algorithm.
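A minimal ACO loop for task-to-agent assignment illustrates the general technique the paper modifies. The pheromone update rule, cost matrix, ant count, and evaporation rate below are all illustrative assumptions, not the paper's model.

```python
import random

def aco_assign(cost, n_ants=20, n_iter=50, rho=0.1, seed=0):
    """Tiny ACO sketch for assigning tasks (rows) to agents (columns):
    each ant samples an agent per task with probability ~ pheromone / cost,
    then the best assignment found so far deposits pheromone."""
    rng = random.Random(seed)
    n_tasks, n_agents = len(cost), len(cost[0])
    tau = [[1.0] * n_agents for _ in range(n_tasks)]  # pheromone matrix
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            pick = []
            for t in range(n_tasks):
                w = [tau[t][a] / (1e-9 + cost[t][a]) for a in range(n_agents)]
                pick.append(rng.choices(range(n_agents), weights=w)[0])
            c = sum(cost[t][a] for t, a in enumerate(pick))
            if c < best_cost:
                best, best_cost = pick, c
        # Evaporate everywhere, then reinforce the best-so-far assignment.
        for t in range(n_tasks):
            for a in range(n_agents):
                tau[t][a] *= (1 - rho)
            tau[t][best[t]] += 1.0
    return best, best_cost

cost = [[1, 4], [3, 1], [2, 2]]
best, c = aco_assign(cost)
print(best, c)  # best assignment has total cost 4
```

The paper's version additionally penalizes overlapping agents and rewards capability coverage; those terms would enter the cost function `c` above.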
8

Wang, Yijuan, Weijun Pan, and Kaiyuan Liu. "Multi-Agent Aviation Search Task Allocation Method". IOP Conference Series: Materials Science and Engineering 646 (October 17, 2019): 012058. http://dx.doi.org/10.1088/1757-899x/646/1/012058.

9

Pal, Anshika, Ritu Tiwari, and Anupam Shukla. "Communication constraints multi-agent territory exploration task". Applied Intelligence 38, no. 3 (September 15, 2012): 357–83. http://dx.doi.org/10.1007/s10489-012-0376-6.

10

Surynek, Pavel. "Multi-Goal Multi-Agent Path Finding via Decoupled and Integrated Goal Vertex Ordering". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 14 (May 18, 2021): 12409–17. http://dx.doi.org/10.1609/aaai.v35i14.17472.

Abstract
We introduce multi-goal multi-agent path finding (MG-MAPF), which generalizes the standard discrete multi-agent path finding (MAPF) problem. While the task in MAPF is to navigate agents in an undirected graph from their starting vertices to one individual goal vertex per agent, MG-MAPF assigns each agent multiple goal vertices and the task is to visit each of them at least once. Solving MG-MAPF not only requires finding collision-free paths for individual agents but also determining the order of visiting each agent's goal vertices so that common objectives like the sum-of-costs are optimized. We suggest two novel algorithms using different paradigms to address MG-MAPF: a heuristic search-based algorithm called Hamiltonian-CBS (HCBS) and a compilation-based algorithm built using satisfiability modulo theories (SMT), called SMT-Hamiltonian-CBS (SMT-HCBS).
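The goal-ordering subproblem that MG-MAPF adds on top of MAPF can be illustrated for a single agent by brute-forcing visit orders over precomputed shortest-path distances (collisions between agents are ignored; this is a toy illustration, not HCBS or SMT-HCBS).

```python
from itertools import permutations

def best_goal_order(dist, start, goals):
    """Try every order of visiting the goal vertices; return the cheapest.
    dist[u][v] is the precomputed shortest-path distance between vertices."""
    best_order, best_cost = None, float("inf")
    for order in permutations(goals):
        total, cur = 0, start
        for g in order:
            total += dist[cur][g]
            cur = g
        if total < best_cost:
            best_order, best_cost = order, total
    return best_order, best_cost

# Hand-coded all-pairs distances for a 4-vertex path graph 0-1-2-3.
dist = {u: {v: abs(u - v) for v in range(4)} for u in range(4)}
order, cost = best_goal_order(dist, start=1, goals=[3, 0])
print(order, cost)  # → (0, 3) 4
```

This enumeration is factorial in the number of goals, which is why the paper resorts to CBS-style search and SMT compilation rather than brute force.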
11

Dou, Lintao, Zhen Jia, and Jian Huang. "Solving large-scale multi-agent tasks via transfer learning with dynamic state representation". International Journal of Advanced Robotic Systems 20, no. 2 (March 1, 2023): 172988062311624. http://dx.doi.org/10.1177/17298806231162440.

Abstract
Many research results have emerged in the past decade regarding multi-agent reinforcement learning. These include the successful application of asynchronous advantage actor-critic, double deep Q-network, and other algorithms in multi-agent environments, and the more representative multi-agent training method based on the classical centralized-training distributed-execution algorithm QMIX. However, in a large-scale multi-agent environment, training becomes a major challenge due to the exponential growth of the state-action space. In this article, we design a training scheme that scales from small-scale multi-agent training to large-scale multi-agent training. We use transfer learning to enable the training of large-scale agents to reuse the knowledge accumulated by training small-scale agents. We achieve policy transfer between tasks with different numbers of agents by designing a new dynamic state representation network, which uses a self-attention mechanism to capture and represent the local observations of agents. The dynamic state representation network makes it possible to expand the policy model from few-agent tasks (4 agents, 10 agents) to large-scale agent tasks (16 agents, 50 agents). Furthermore, we conducted experiments in the famous real-time strategy game StarCraft II and on the multi-agent research platform MAgent, and also set up unmanned aerial vehicle trajectory planning simulations. Experimental results show that our approach not only reduces the time consumption of large-scale agent training tasks but also improves the final training performance.
12

Shahid, Asad Ali, Jorge Said Vidal Sesin, Damjan Pecioski, Francesco Braghin, Dario Piga, and Loris Roveda. "Decentralized Multi-Agent Control of a Manipulator in Continuous Task Learning". Applied Sciences 11, no. 21 (November 1, 2021): 10227. http://dx.doi.org/10.3390/app112110227.

Abstract
Many real-world tasks require multiple agents to work together. When talking about multiple agents in robotics, this usually refers to multiple manipulators collaborating to solve a given task, where each one is controlled by a single agent. However, due to the increasing development of modular and re-configurable robots, it is also important to investigate the possibility of implementing multi-agent controllers that learn how to manage the manipulator's degrees of freedom (DoF) in separated clusters for the execution of a given application (e.g., being able to face faults or, partially, new kinematic configurations). Within this context, this paper focuses on the decentralization of robot control action learning and (re)execution considering a generic multi-DoF manipulator. Indeed, the proposed framework employs a multi-agent paradigm and investigates how such a framework impacts the control action learning process. Multiple variations of the multi-agent framework have been proposed and tested in this research, comparing the achieved performance w.r.t. a centralized (i.e., single-agent) control action learning framework previously proposed by some of the authors. As a case study, a manipulation task (i.e., grasping and lifting) of an object unknown to the robot controller has been considered for validation, employing a Franka EMIKA panda robot. The MuJoCo environment has been employed to implement and test the proposed multi-agent framework. The achieved results show that the proposed decentralized approach accelerates the learning process at the beginning with respect to the single-agent framework while also reducing the computational effort. In fact, when decentralizing the controller, it is shown that the number of variables involved in the action space can be efficiently separated into several groups and several agents. This simplifies the original complex problem into multiple ones, efficiently improving the task learning process.
13

Lavendelis, Egons, and Janis Grundspenkis. "Design of Multi-Agent Based Intelligent Tutoring Systems". Scientific Journal of Riga Technical University. Computer Sciences 38, no. 38 (January 1, 2009): 48–59. http://dx.doi.org/10.2478/v10143-009-0004-z.

Abstract
Research from two fields, namely agent-oriented software engineering and intelligent tutoring systems, has to be taken into consideration during the design of multi-agent based intelligent tutoring systems (ITS). Thus there is a need for specific approaches for agent-based ITS design which take into consideration the main ideas from both fields. In this paper we propose a top-down design approach for multi-agent based ITSs. The proposed design approach consists of two main stages: external design and internal design of agents. During the external design phase the behaviour of agents and interactions among them are designed. The following steps are done: task modelling and task allocation to agents, use case map creation, agent interaction design, ontology creation, and holon design. During the external design phase agents and holons are defined according to the holonic multi-agent architecture for ITS development. During the internal design stage the internal structure of agents is specified. The internal structure of each agent is represented in a specific diagram, called the internal view of the agent, consisting of the agent's actions and interactions among them, rules for incoming message and perception processing, incoming and outgoing messages, and beliefs of the agent. The proposed approach is intended to be part of a full life cycle methodology for multi-agent based ITS development. The approach is developed using the same concepts as the JADE agent platform and is suitable for agent code generation from the design diagrams.
14

Ma, Ziyuan, and Huajun Gong. "Heterogeneous multi-agent task allocation based on graph neural network ant colony optimization algorithms". Intelligence & Robotics 3, no. 4 (October 31, 2023): 581–95. http://dx.doi.org/10.20517/ir.2023.33.

Abstract
Heterogeneous multi-agent task allocation is a key optimization problem widely used in fields such as drone swarms and multi-robot coordination. This paper proposes a new paradigm that innovatively combines graph neural networks and ant colony optimization algorithms to solve the assignment problem of heterogeneous multi-agents. The paper introduces an innovative Graph-based Heterogeneous Neural Network Ant Colony Optimization (GHNN-ACO) algorithm for heterogeneous multi-agent scenarios. The multi-agent system is composed of unmanned aerial vehicles, unmanned ships, and unmanned vehicles that work together to effectively respond to emergencies. This method uses graph neural networks to learn the relationship between tasks and agents, forming a graph representation, which is then integrated into ant colony optimization algorithms to guide the search process of ants. Firstly, the algorithm in this paper constructs heterogeneous graph data containing different types of agents and their relationships and uses the algorithm to classify and predict linkages for agent nodes. Secondly, the GHNN-ACO algorithm performs effectively in heterogeneous multi-agent scenarios, providing an effective solution for node classification and link prediction tasks in intelligent agent systems. Thirdly, the algorithm achieves an accuracy rate of 95.31% in assigning multiple tasks to multiple agents. It holds potential application prospects in emergency response and provides a new idea for multi-agent system cooperation.
15

Mao, Jianlin, Zhigang He, Dayan Li, Ruiqi Li, Shufan Zhang, and Niya Wang. "Multi-Agent Collaborative Path Planning Algorithm with Multiple Meeting Points". Electronics 13, no. 16 (August 22, 2024): 3347. http://dx.doi.org/10.3390/electronics13163347.

Abstract
Traditional multi-agent path planning algorithms often lead to path overlap and excessive energy consumption when dealing with cooperative tasks due to the single-agent-single-task configuration. For this reason, the “many-to-one” cooperative planning method has been proposed, which, although improved, still faces challenges in the vast search space for meeting points and unreasonable task handover locations. This paper proposes the Cooperative Dynamic Priority Safe Interval Path Planning with a multi-meeting-point and single-meeting-point solving mode switching (Co-DPSIPPms) algorithm to achieve multi-agent path planning with task handovers at multiple or single meeting points. First, the initial priority is set based on the positional relationships among agents within the cooperative group, and the improved Fermat point method is used to locate multiple meeting points quickly. Second, considering that agents must pick up sub-tasks or conduct task handovers midway, a segmented path planning strategy is proposed to ensure that cooperative agents can efficiently and accurately complete task handovers. Finally, an automatic switching strategy between multi-meeting-point and single-meeting-point solving modes is designed to ensure the algorithm’s success rate. Tests show that Co-DPSIPPms outperforms existing algorithms in 1-to-1 and m-to-1 cooperative tasks, demonstrating its efficiency and practicality.
16

Shah, Julie, Patrick Conrad, and Brian Williams. "Fast Distributed Multi-agent Plan Execution with Dynamic Task Assignment and Scheduling". Proceedings of the International Conference on Automated Planning and Scheduling 19 (October 16, 2009): 289–96. http://dx.doi.org/10.1609/icaps.v19i1.13362.

Abstract
An essential quality of a good partner is her responsiveness to other team members. Recent work in dynamic plan execution exhibits elements of this quality through the ability to adapt to the temporal uncertainties of other agents and the environment. However, a good teammate also has the ability to adapt on-the-fly through task assignment. We generalize the framework of dynamic execution to perform plan execution with dynamic task assignment as well as scheduling. This paper introduces Chaski, a multi-agent executive for scheduling temporal plans with online task assignment. Chaski enables an agent to dynamically update its plan in response to disturbances in task assignment and the schedule of other agents. The agent then uses the updated plan to choose, schedule and execute actions that are guaranteed to be temporally consistent and logically valid within the multi-agent plan. Chaski is made efficient through an incremental algorithm that compactly encodes all scheduling policies for all possible task assignments. We apply Chaski to perform multi-manipulator coordination using two Barrett Arms within the authors' hardware testbed. We empirically demonstrate up to one order of magnitude improvements in execution latency and solution compactness compared to prior art.
17

Chen, Yining, Guanghua Song, Zhenhui Ye, and Xiaohong Jiang. "Scalable and Transferable Reinforcement Learning for Multi-Agent Mixed Cooperative–Competitive Environments Based on Hierarchical Graph Attention". Entropy 24, no. 4 (April 18, 2022): 563. http://dx.doi.org/10.3390/e24040563.

Abstract
Most previous studies on multi-agent systems aim to coordinate agents to achieve a common goal, but the lack of scalability and transferability prevents them from being applied to large-scale multi-agent tasks. To deal with these limitations, we propose a deep reinforcement learning (DRL) based multi-agent coordination control method for mixed cooperative–competitive environments. To improve scalability and transferability when applying in large-scale multi-agent systems, we construct inter-agent communication and use hierarchical graph attention networks (HGAT) to process the local observations of agents and received messages from neighbors. We also adopt the gated recurrent units (GRU) to address the partial observability issue by recording historical information. The simulation results based on a cooperative task and a competitive task not only show the superiority of our method, but also indicate the scalability and transferability of our method in various scale tasks.
18

Yang, Li, Xie Dong Cao, and Jie Li. "Research on Minimum-Cost-Based Task Decomposition Model of Multiple Expert Systems for Oil-Gas Reservoir Protection Based on Agent". Applied Mechanics and Materials 275-277 (January 2013): 2650–53. http://dx.doi.org/10.4028/www.scientific.net/amm.275-277.2650.

Abstract
To solve the collaboration problem in multi-expert systems, intelligent agent technology is used. Firstly, by introducing a minimum-cost-based formal description of task decomposition, a new agent task decomposition model is presented. Secondly, a heuristic algorithm for task decomposition is analyzed to handle communication among agents. For illustration, an expert system for oil-gas reservoir protection is utilized to verify the effectiveness of the method. The application of the expert system shows that the minimum-cost-based task decomposition model is valid for agent task decomposition and that the heuristic algorithm for task decomposition can be used for communication among agents. As a result, the proposed agent-based cooperation mechanism can effectively solve the collaboration of experts in multiple expert systems and improve the accuracy of inference.
19

KUSEK, MARIO, KRESIMIR JURASOVIC, and GORDAN JEZIC. "VERIFICATION OF THE MOBILE AGENT NETWORK SIMULATOR — A TOOL FOR SIMULATING MULTI-AGENT SYSTEMS". International Journal of Software Engineering and Knowledge Engineering 18, no. 05 (August 2008): 651–82. http://dx.doi.org/10.1142/s0218194008003854.

Abstract
This paper deals with the verification of a multi-agent system simulator. Agents in the simulator are based on the Mobile Agent Network (MAN) formal model. It describes a shared plan representing a process which allows team formation according to task complexity and the characteristics of the distributed environment where these tasks should be performed. In order to verify the simulation results, we compared them with performance characteristics of a real multi-agent system, called the Multi-Agent Remote Maintenance Shell (MA–RMS). MA–RMS is organized as a team-oriented knowledge based system responsible for distributed software management. The results are compared and analyzed for various testing scenarios which differ with respect to network bandwidth as well as task and network complexity.
20

Miao, Yongfei, Luo Zhong, Yufu Yin, Chengming Zou, and Zhenjun Luo. "Research on dynamic task allocation for multiple unmanned aerial vehicles". Transactions of the Institute of Measurement and Control 39, no. 4 (February 1, 2017): 466–74. http://dx.doi.org/10.1177/0142331217693077.

Abstract
To solve the distributed task allocation problems of search and rescue missions for multiple unmanned aerial vehicles (UAVs), this paper establishes a dynamic task allocation model under three conditions: 1) when new targets are detected, 2) when UAVs break down and 3) when unexpected threats suddenly occur. A distributed immune multi-agent algorithm (DIMAA) based on an immune multi-agent network framework is then proposed. The technologies employed by the proposed algorithm include a multi-agent system (MAS) with immune memory, neighbourhood clonal selection, neighbourhood suppression, neighbourhood crossover and self-learning operators. The DIMAA algorithm simplifies the decision-making process among agents. The simulation results show that this algorithm not only obtains the global optimum solution, but also reduces the communication load between agents.
21

Caballero Testón, J., and Maria D. R-Moreno. "Multi-Agent Temporal Task Solving and Plan Optimization". Proceedings of the International Conference on Automated Planning and Scheduling 34 (May 30, 2024): 50–58. http://dx.doi.org/10.1609/icaps.v34i1.31460.

Abstract
Several multi-agent techniques are utilized to reduce the complexity of classical planning tasks; however, their applicability to temporal planning domains is currently an open line of study in the field of Automated Planning. In this paper, we present MA-LAMA, a factored, centralized, unthreaded, satisficing, multi-agent temporal planner that exploits the 'multi-agent nature' of temporal domains to perform plan optimization. In MA-LAMA, temporal tasks are translated to the constrained snap-actions paradigm, and an automatic agent decomposition, goal assignment, and required-cooperation analysis are carried out to build independent search steps, called Search Phases. These Search Phases are then solved by consecutive agent local searches, using classical heuristics and temporal constraints. Experiments show that MA-LAMA is able to solve a wide range of classical and temporal multi-agent domains, performing significantly better in plan quality than other state-of-the-art temporal planners.
22

Wan, Xiao Ping, and Shu Yu Li. "Dynamic Task Allocation Based on Game Theory". Advanced Materials Research 926-930 (May 2014): 2790–94. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.2790.

Abstract
For the task allocation problem in multi-agent systems, game theory is introduced to model dynamic task allocation, and a game-theory-based dynamic task allocation algorithm for multi-agent systems is proposed. Experimental results show that the proposed algorithm has lower complexity and a smaller amount of calculation, is more robust, and obtains higher-quality task allocation schemes.
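One classical game-theoretic scheme for task allocation is best-response dynamics with a congestion-style cost, sketched below. The abstract does not specify the paper's exact game model, so this is only a generic illustration with invented costs.

```python
def best_response_allocation(cost, max_rounds=20):
    """Each agent repeatedly switches to the task that minimizes its own
    cost plus a congestion penalty (number of other agents on that task),
    until no agent wants to deviate, i.e. a pure Nash equilibrium."""
    n_agents, n_tasks = len(cost), len(cost[0])
    choice = [0] * n_agents  # every agent starts on task 0
    for _ in range(max_rounds):
        changed = False
        for i in range(n_agents):
            def payoff(t):
                load = sum(1 for j, c in enumerate(choice) if j != i and c == t)
                return cost[i][t] + load  # congestion-game style cost
            best = min(range(n_tasks), key=payoff)
            if payoff(best) < payoff(choice[i]):
                choice[i] = best
                changed = True
        if not changed:
            break  # equilibrium reached
    return choice

cost = [[0, 2], [1, 0], [1, 1]]
print(best_response_allocation(cost))  # → [0, 1, 0]
```

Congestion games of this form are potential games, so sequential best responses are guaranteed to converge, which is part of what makes game-theoretic task allocation attractive for distributed settings.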
23

Zhang, Li, Zhi Qi, Hao Cui, Sen Hua Wang, Ya Hui Ning, and Qian Zhu Wang. "Exploring Agent-Based Modeling for Emergency Logistics Collaborative Decision Making". Advanced Materials Research 710 (June 2013): 781–85. http://dx.doi.org/10.4028/www.scientific.net/amr.710.781.

Abstract
Aiming at the requirements of urgency and dynamics in emergency logistics, this paper presents a multi-agent system (MAS) concept model for emergency logistics collaborative decision making. The suggested model includes three kinds of agents, i.e., role agent, function agent, and assistant agent. The role agent executes emergency logistics activities, the function agent achieves the task requirements in every work phase, and the assistant agent helps organize and access data. Two levels of agent views serve as the basic skeleton of the MAS. The top level is the global decision-making view, which describes the task distribution process with multiple agents. The local level is the execution planning view, which simulates the task execution process of the performer. Finally, an extended BDI agent structure model is proposed to help implementation at the application level.
24

Wang, Caroline, Ishan Durugkar, Elad Liebman y Peter Stone. "DM²: Decentralized Multi-Agent Reinforcement Learning via Distribution Matching". Proceedings of the AAAI Conference on Artificial Intelligence 37, n.º 10 (26 de junio de 2023): 11699–707. http://dx.doi.org/10.1609/aaai.v37i10.26382.

Current approaches to multi-agent cooperation rely heavily on centralized mechanisms or explicit communication protocols to ensure convergence. This paper studies the problem of distributed multi-agent learning without resorting to centralized components or explicit communication. It examines the use of distribution matching to facilitate the coordination of independent agents. In the proposed scheme, each agent independently minimizes the distribution mismatch to the corresponding component of a target visitation distribution. The theoretical analysis shows that under certain conditions, each agent minimizing its individual distribution mismatch allows the convergence to the joint policy that generated the target distribution. Further, if the target distribution is from a joint policy that optimizes a cooperative task, the optimal policy for a combination of this task reward and the distribution matching reward is the same joint policy. This insight is used to formulate a practical algorithm (DM^2), in which each individual agent matches a target distribution derived from concurrently sampled trajectories from a joint expert policy. Experimental validation on the StarCraft domain shows that combining (1) a task reward, and (2) a distribution matching reward for expert demonstrations for the same task, allows agents to outperform a naive distributed baseline. Additional experiments probe the conditions under which expert demonstrations need to be sampled to obtain the learning benefits.
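As a toy illustration of the distribution-matching idea — a simplified count-based stand-in, not DM²'s actual divergence estimator — each agent can receive a bonus equal to the negative L1 distance between its empirical state-visitation distribution and the target distribution derived from expert trajectories:

```python
from collections import Counter

def dm_reward(agent_states, target_states):
    """Negative L1 mismatch between two empirical visitation distributions."""
    def empirical(states):
        counts = Counter(states)
        total = len(states)
        return {s: c / total for s, c in counts.items()}
    p, q = empirical(agent_states), empirical(target_states)
    support = set(p) | set(q)
    return -sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

# A perfect match earns the maximal bonus of 0; any mismatch is penalized.
```

In the paper's scheme this matching signal is combined with the task reward; here it simply quantifies how far an agent's visitation pattern is from the expert's.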
25

Jiang, Zhiling, Tiantian Song, Bowei Yang y Guanghua Song. "Fault-Tolerant Control for Multi-UAV Exploration System via Reinforcement Learning Algorithm". Aerospace 11, n.º 5 (8 de mayo de 2024): 372. http://dx.doi.org/10.3390/aerospace11050372.

In the UAV swarm, the degradation in the health status of some UAVs often brings negative effects to the system. To compensate for the negative effect, we present a fault-tolerant Multi-Agent Reinforcement Learning Algorithm that can control an unstable Multiple Unmanned Aerial Vehicle (Multi-UAV) system to perform exploration tasks. Different from traditional multi-agent methods that require the agents to remain healthy during task execution, our approach breaks this limitation and allows the agents to change status during the task. In our algorithm, the agent can accept both the adjacency state matrix about the neighboring agents and a kind of healthy status vector to integrate both and generate the communication topology. During this process, the agents with poor health status are given more attention for returning to normal status. In addition, we integrate a temporal convolution module into our algorithm and enable the agent to capture the temporal information during the task. We introduce a scenario regarding Multi-UAV ground exploration, where the health status of UAVs gradually weakens over time before dropping into a fault status; the UAVs require rescues from time to time. We conduct some experiments in this scenario and verify our algorithm. Our algorithm can increase the drone’s survival rate and make the swarm perform better.
26

Nykyforchyn, I. V. "OPTIMAL CONTRACTS IN A MULTI-PURPOSE TASK". PRECARPATHIAN BULLETIN OF THE SHEVCHENKO SCIENTIFIC SOCIETY Number, n.º 1(59) (28 de enero de 2021): 66–71. http://dx.doi.org/10.31471/2304-7399-2020-1(59)-66-71.

In this paper, a well-known multitask model of principal-agent relations is enhanced with the requirement that a reward is paid only if some minimal threshold in each type of work is attained. We deduce and analyze formulae for the expected utility of an agent and propose a method to find his optimal behavior depending on the reward function parameters.
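The paper's exact formulae are not reproduced here; the threshold mechanism can be illustrated with a hypothetical model in which each task's output is exponential with mean equal to the effort spent, tasks are independent, effort cost is quadratic, and the reward is paid only when every task clears its threshold:

```python
from math import exp

def expected_utility(efforts, thresholds, reward, cost_coef=1.0):
    """Expected utility of an agent paid only if all task thresholds are met.

    P(output_t >= m_t) = exp(-m_t / e_t) for exponential output with mean
    effort e_t; the all-or-nothing reward multiplies these probabilities.
    """
    p_all_met = 1.0
    for effort, threshold in zip(efforts, thresholds):
        p_all_met *= exp(-threshold / effort)
    effort_cost = cost_coef * sum(e * e for e in efforts) / 2
    return reward * p_all_met - effort_cost

# Raising any threshold lowers the probability of payment, hence the utility.
```

The agent's optimal behavior is then the effort vector maximizing this expression for given reward parameters.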
27

Zhang, Mei, Jing Hua Wen y Yong Long Fan. "Modeling of Multi-Agents' Coordination". Applied Mechanics and Materials 437 (octubre de 2013): 222–25. http://dx.doi.org/10.4028/www.scientific.net/amm.437.222.

This work studies cooperation among multiple users in a virtual geographic environment (VGE) based on a centralized Multi-Agent System (MAS). We analyze in detail an algorithm for learning the collective operating behaviour of multiple agents based on a Genetic Algorithm (GA). Finally, we design an example that shows how three evolutionary agents cooperate to complete the task of collectively pushing a cylindrical box.
28

Ji, Sang Hoon, Jeong Sik Choi, No San Kwak y Beom Hee Lee. "OPTIMAL PRIORITY SELECTION FOR MULTI-AGENT TASK EXECUTION". IFAC Proceedings Volumes 38, n.º 1 (2005): 583–88. http://dx.doi.org/10.3182/20050703-6-cz-1902.01367.

29

Kamali, Kaivan, Dan Ventura, Amulya Garga y Soundar R. T. Kumara. "GEOMETRIC TASK DECOMPOSITION IN A MULTI-AGENT ENVIRONMENT". Applied Artificial Intelligence 20, n.º 5 (junio de 2006): 437–56. http://dx.doi.org/10.1080/08839510500313737.

30

Nelke, Sofia Amador, Steven Okamoto y Roie Zivan. "Market Clearing–based Dynamic Multi-agent Task Allocation". ACM Transactions on Intelligent Systems and Technology 11, n.º 1 (11 de febrero de 2020): 1–25. http://dx.doi.org/10.1145/3356467.

31

Maniadakis, Michail, Emmanouil Hourdakis y Panos Trahanias. "Time-informed task planning in multi-agent collaboration". Cognitive Systems Research 43 (junio de 2017): 291–300. http://dx.doi.org/10.1016/j.cogsys.2016.09.004.

32

Atzmon, Dor, Jiaoyang Li, Ariel Felner, Eliran Nachmani, Shahaf Shperberg, Nathan Sturtevant y Sven Koenig. "Multi-Directional Search". Proceedings of the International Symposium on Combinatorial Search 11, n.º 1 (1 de septiembre de 2021): 121–22. http://dx.doi.org/10.1609/socs.v11i1.18518.

In the Multi-Agent Meeting (MAM) problem, the task is to find a meeting location for multiple agents, as well as a path for each agent to that location. In this paper, we introduce MM*, a Multi-Directional Search algorithm that finds the optimal meeting location under different cost functions. MM* generalizes the Meet in the Middle (MM) bidirectional search algorithm to the case of finding optimal meeting locations for multiple agents. A number of admissible heuristics are proposed and experiments demonstrate the benefits of MM*.
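For the sum-of-costs objective on a 4-connected grid — one of the cost functions the paper considers — the optimal meeting cell of a small instance can be found by running one BFS per agent and minimizing the summed distances. This brute-force sketch is illustrative and is not the MM* algorithm itself:

```python
from collections import deque

def bfs_distances(grid, start):
    """Shortest path lengths from start to every reachable open cell (0 = open)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def best_meeting(grid, starts):
    """Meeting cell minimizing the sum of the agents' shortest-path costs."""
    all_dists = [bfs_distances(grid, s) for s in starts]
    candidates = set.intersection(*(set(d) for d in all_dists))
    return min(candidates, key=lambda v: sum(d[v] for d in all_dists))

grid = [[0, 0, 0],
        [0, 1, 0],   # 1 = obstacle
        [0, 0, 0]]
meeting = best_meeting(grid, [(0, 0), (2, 2), (0, 2)])
```

MM* avoids this exhaustive evaluation by searching from all agents simultaneously with admissible heuristics; the brute force merely defines the optimum it must match.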
33

Zhang, Li, Zhi Qi, Qian Zhu Wang, Xing Ping Wang y Xin Shen. "Building a Multi-Agent System for Emergency Logistics Collaborative Decision". Applied Mechanics and Materials 513-517 (febrero de 2014): 2041–44. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.2041.

Currently, the decision making of emergency logistics faces increasing challenges caused by deficient information, uncertain requirements and the shortest response times. Agent-based modeling and multi-agent systems have been proven a promising way in this field. Based on previous work on an emergency logistics decision framework, this paper presents a detailed design of the agent internal structure of the emergency logistics multi-agent system. Some typical agents, such as the logistics entity agent, task distribution agent and ontology visiting agent, are discussed from their constituent function modules to the specific implementation. As illustrative examples, the designs of these primary agents characterize the basic structure of the other agents in the emergency logistics multi-agent system and can serve as an effective reference for system implementation.
34

Qiang, Ning y Feng Ju Kang. "Multi-Task Coalition Generation of Multi-Agent System with Limited Resource". Advanced Materials Research 971-973 (junio de 2014): 1655–58. http://dx.doi.org/10.4028/www.scientific.net/amr.971-973.1655.

A new fitness function is introduced in order to maximize the number of tasks served by a multi-agent system (MAS) with limited resources, while the task information remains unknown until the system finds the tasks one by one. The new fitness function not only seeks to maximize the profit of the system, which in the case of a MAS with limited resources can be seen as maximizing its remaining resources, but also takes the balance of the remaining resources into account, making a compromise between the two. This paper uses an improved discrete particle swarm optimization to optimize the coalitions of the MAS. To improve the performance of the algorithm, we redefine the particle velocity and position update formulas. The simulation results show the effectiveness and superiority of the proposed fitness function and optimization algorithm.
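A minimal sketch of such a compromise fitness — the weighting scheme and variance penalty are illustrative assumptions, not the paper's exact formulation: the total remaining resource captures profit, while a variance term rewards balance.

```python
def coalition_fitness(remaining, alpha=0.5):
    """Trade off total remaining resource against how evenly it is spread.

    remaining: per-agent resource left after a coalition serves a task.
    alpha weights total resource (profit) versus balance (low variance).
    """
    total = sum(remaining)
    mean = total / len(remaining)
    variance = sum((r - mean) ** 2 for r in remaining) / len(remaining)
    return alpha * total - (1 - alpha) * variance

# A balanced outcome scores higher than a lopsided one with the same total.
```

A particle swarm optimizer would then score candidate coalitions with this function when tasks are discovered one by one.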
35

Atzmon, Dor, Roni Stern, Ariel Felner, Glenn Wagner, Roman Bartak y Neng-Fa Zhou. "Robust Multi-Agent Path Finding". Proceedings of the International Symposium on Combinatorial Search 9, n.º 1 (1 de septiembre de 2021): 2–9. http://dx.doi.org/10.1609/socs.v9i1.18445.

In the multi-agent path-finding (MAPF) problem, the task is to find a plan for moving a set of agents from their initial locations to their goals without collisions. Following this plan, however, may not be possible due to unexpected events that delay some of the agents. We explore the notion of k-robust MAPF, where the task is to find a plan that can be followed even if a limited number of such delays occur. k-robust MAPF is especially suitable for agents with a control mechanism that guarantees that each agent is within a limited number of steps away from its pre-defined plan. We propose sufficient and required conditions for finding a k-robust plan, and show how to convert several MAPF solvers to find such plans. Then, we show the benefit of using a k-robust plan during execution, and for finding plans that are likely to succeed.
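A k-robust plan can be validated directly from its definition: no vertex may be occupied by two different agents whose visit times differ by at most k. A small checker sketch (the wait-at-goal convention is an assumption of this sketch):

```python
def is_k_robust(paths, k):
    """Return True if no two agents visit the same vertex within k timesteps.

    paths: one location sequence per agent; an agent that has reached its
    goal is assumed to stay there for the rest of the horizon.
    """
    horizon = max(len(p) for p in paths)

    def at(path, t):
        return path[min(t, len(path) - 1)]

    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            for t1 in range(horizon):
                for t2 in range(max(0, t1 - k), min(horizon, t1 + k + 1)):
                    if at(paths[i], t1) == at(paths[j], t2):
                        return False
    return True
```

With k = 0 this reduces to the classical MAPF collision check; larger k tolerates correspondingly larger delays.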
36

Wu, Zhong Bing, Bing Yao, Yi Sheng Liu y Shi Jie Jiang. "Research on Incentive Equilibrium Mechanism of Agent-Construction Relationship Based on Multitask Principal-Agent Model". Advanced Materials Research 250-253 (mayo de 2011): 2440–45. http://dx.doi.org/10.4028/www.scientific.net/amr.250-253.2440.

Most Chinese scholars have simplified the relationship between the client and the agent-construction enterprise as a single-task principal-agent problem, ignoring the important fact that there are multiple tasks, such as progress, quality and cost. In this paper, a multi-task principal-agent model with three tasks, i.e. progress, quality and cost, is constructed to analyze the optimal incentive contractual conditions and the multi-task incentive equilibrium mechanism of the agent-construction enterprise, which can provide a theoretical basis for the regulatory policy of government investment projects.
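Multi-task principal-agent analyses of this kind typically build on the Holmström-Milgrom framework; a sketch of that standard linear-contract form (generic notation, not necessarily the paper's), with effort vector $t$ over progress, quality and cost:

```latex
% Linear wage on noisy performance measures x of the three tasks:
w(x) = \alpha + \beta^{\top} x, \qquad x = \mu(t) + \varepsilon,
\quad \varepsilon \sim \mathcal{N}(0, \Sigma)

% Agent's certainty equivalent with effort cost C(t) and risk aversion r:
\mathrm{CE} = \alpha + \beta^{\top} \mu(t) - C(t)
  - \tfrac{r}{2}\, \beta^{\top} \Sigma \beta

% Incentive compatibility at an interior optimum (taking \mu(t) = t):
\beta = \nabla C(t)
```

The optimal incentive intensities $\beta$ for the three tasks then follow from maximizing the principal's surplus subject to this incentive-compatibility condition.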
37

Yu, Yuekang, Zhongyi Zhai, Weikun Li y Jianyu Ma. "Target-Oriented Multi-Agent Coordination with Hierarchical Reinforcement Learning". Applied Sciences 14, n.º 16 (12 de agosto de 2024): 7084. http://dx.doi.org/10.3390/app14167084.

In target-oriented multi-agent tasks, agents collaboratively achieve goals defined by specific objects, or targets, in their environment. The key to success is the effective coordination between agents and these targets, especially in dynamic environments where targets may shift. Agents must adeptly adjust to these changes and re-evaluate their target interactions. Inefficient coordination can lead to resource waste, extended task times, and lower overall performance. Addressing this challenge, we introduce the regulatory hierarchical multi-agent coordination (RHMC), a hierarchical reinforcement learning approach. RHMC divides the coordination task into two levels: a high-level policy, assigning targets based on environmental state, and a low-level policy, executing basic actions guided by individual target assignments and observations. Stabilizing RHMC’s high-level policy is crucial for effective learning. This stability is achieved by reward regularization, reducing reliance on the dynamic low-level policy. Such regularization ensures the high-level policy remains focused on broad coordination, not overly dependent on specific agent actions. By minimizing low-level policy dependence, RHMC adapts more seamlessly to environmental changes, boosting learning efficiency. Testing demonstrates RHMC’s superiority over existing methods in global reward and learning efficiency, highlighting its effectiveness in multi-agent coordination.
38

Shi, Wen y Chengpu Yu. "Multi-Agent Task Allocation with Multiple Depots Using Graph Attention Pointer Network". Electronics 12, n.º 16 (8 de agosto de 2023): 3378. http://dx.doi.org/10.3390/electronics12163378.

The study of the multi-agent task allocation problem with multiple depots is crucial for investigating multi-agent collaboration. Although many traditional heuristic algorithms can be adopted to handle the concerned task allocation problem, they are not able to efficiently obtain optimal or suboptimal solutions. To this end, a graph attention pointer network is built in this paper to deal with the multi-agent task allocation problem. Specifically, the multi-head attention mechanism is employed for the feature extraction of nodes, and a pointer network with parallel two-way selection and parallel output is introduced to further improve the performance of multi-agent cooperation and the efficiency of task allocation. Experimental results are provided to show that the presented graph attention pointer network outperforms the traditional heuristic algorithms.
39

Kotsinis, Dimitrios y Charalampos P. Bechlioulis. "Decentralized Navigation with Optimality for Multiple Holonomic Agents in Simply Connected Workspaces". Sensors 24, n.º 10 (15 de mayo de 2024): 3134. http://dx.doi.org/10.3390/s24103134.

Multi-agent systems are utilized more often in the research community and industry, as they can complete tasks faster and more efficiently than single-agent systems. Therefore, in this paper, we are going to present an optimal approach to the multi-agent navigation problem in simply connected workspaces. The task involves each agent reaching its destination starting from an initial position and following an optimal collision-free trajectory. To achieve this, we design a decentralized control protocol, defined by a navigation function, where each agent is equipped with a navigation controller that resolves imminent safety conflicts with the others, as well as the workspace boundary, without requesting knowledge about the goal position of the other agents. Our approach is rendered sub-optimal, since each agent owns a predetermined optimal policy calculated by a novel off-policy iterative method. We use this method because the computational complexity of learning-based methods needed to calculate the global optimal solution becomes unrealistic as the number of agents increases. To achieve our goal, we examine how much the yielded sub-optimal trajectory deviates from the optimal one and how much time the multi-agent system needs to accomplish its task as we increase the number of agents. Finally, we compare our method results with a discrete centralized policy method, also known as a Multi-Agent Poli-RRT* algorithm, to demonstrate the validity of our method when it is attached to other research algorithms.
40

Alexander, Perry. "Task Analysis and Design Plans in Formal Specification Design". International Journal of Software Engineering and Knowledge Engineering 08, n.º 02 (junio de 1998): 223–52. http://dx.doi.org/10.1142/s0218194098000133.

This paper presents BENTON, a prototype system demonstrating task analysis and multi-agent reasoning applied to formal specification synthesis. BENTON transforms specifications written as attribute-value pairs into Larch Modula-3 interface language and Larch Shared Language specifications. BENTON decomposes the software specification design task into synthesis, analysis and evaluation subtasks. Each subtask is assigned a specific design method based on problem and domain characteristics. This task analysis is achieved using blackboard knowledge sources and multi-agent reasoning employing design plans to implement different problem solving methods. Knowledge sources representing different problem solving methodologies monitor blackboard spaces and activate when they are applicable. When executed, design plans send subtasks to agents that select from available problem solving methodologies. BENTON agents and knowledge sources use case-based reasoning, schemata-based reasoning and procedure execution as their fundamental reasoning methods. This paper presents an overview of the BENTON design model, its agent architecture and plan execution capabilities, and two annotated examples of BENTON problem solving activities.
41

Shi, Daming, Junbo Tong, Yi Liu y Wenhui Fan. "Knowledge Reuse of Multi-Agent Reinforcement Learning in Cooperative Tasks". Entropy 24, n.º 4 (28 de marzo de 2022): 470. http://dx.doi.org/10.3390/e24040470.

With the development and application of multi-agent systems, multi-agent cooperation is becoming an important problem in artificial intelligence. Multi-agent reinforcement learning (MARL) is one of the most effective methods for solving multi-agent cooperative tasks. However, the huge sample complexity of traditional reinforcement learning methods results in two kinds of training waste in MARL for cooperative tasks: all homogeneous agents are trained independently and repetitively, and multi-agent systems need training from scratch when adding a new teammate. To tackle these two problems, we propose knowledge reuse methods for MARL. On the one hand, this paper proposes sharing experience and policy within agents to mitigate training waste. On the other hand, this paper proposes reusing the policies learned by original teams to avoid knowledge waste when adding a new agent. Experimentally, the Pursuit task demonstrates how sharing experience and policy can accelerate the training speed and enhance performance simultaneously. Additionally, transferring the policies learned by the N-agent team enables the (N+1)-agent team to immediately perform cooperative tasks successfully, and only minor training resources allow the multi-agent team to reach optimal performance identical to that achieved from scratch.
42

Park, Sankyu, Key-Sun Choi y K. H. (Kane) Kim. "A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments". International Journal of Software Engineering and Knowledge Engineering 07, n.º 03 (septiembre de 1997): 351–69. http://dx.doi.org/10.1142/s0218194097000217.

In current multi-agent systems, the user is typically interacting with a single agent at a time through relatively inflexible and modestly intelligent interfaces. As a consequence, these systems force the users to submit simplistic requests only and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the open agent architecture (OAA) which reduces such problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some cases of complex requests from users, the components of the requests do not directly correspond to the capabilities of various application agents, and therefore, the system is required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating this type of complex requests, the OAA offers an intelligent multi-modal user interface agent which supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment including the intelligent distributed multi-modal interface has been observed in our development of several practical multi-agent systems.
43

Simões, David, Nuno Lau y Luís Paulo Reis. "Exploring communication protocols and centralized critics in multi-agent deep learning". Integrated Computer-Aided Engineering 27, n.º 4 (11 de septiembre de 2020): 333–51. http://dx.doi.org/10.3233/ica-200631.

Tackling multi-agent environments where each agent has a local limited observation of the global state is a non-trivial task that often requires hand-tuned solutions. A team of agents coordinating in such scenarios must handle the complex underlying environment, while each agent only has partial knowledge about the environment. Deep reinforcement learning has been shown to achieve super-human performance in single-agent environments, and has since been adapted to the multi-agent paradigm. This paper proposes A3C3, a multi-agent deep learning algorithm, where agents are evaluated by a centralized referee during the learning phase, but remain independent from each other in actual execution. This referee’s neural network is augmented with a permutation invariance architecture to increase its scalability to large teams. A3C3 also allows agents to learn communication protocols with which agents share relevant information to their team members, allowing them to overcome their limited knowledge, and achieve coordination. A3C3 and its permutation invariant augmentation is evaluated in multiple multi-agent test-beds, which include partially-observable scenarios, swarm environments, and complex 3D soccer simulations.
44

Yuan, Ruiping, Jiangtao Dou, Juntao Li, Wei Wang y Yingfan Jiang. "Multi-robot task allocation in e-commerce RMFS based on deep reinforcement learning". Mathematical Biosciences and Engineering 20, n.º 2 (2022): 1903–18. http://dx.doi.org/10.3934/mbe.2023087.

A Robotic Mobile Fulfillment System (RMFS) is a new type of parts-to-picker order fulfillment system where multiple robots coordinate to complete a large number of order picking tasks. The multi-robot task allocation (MRTA) problem in RMFS is complex and dynamic, and it cannot be well solved by traditional MRTA methods. This paper proposes a task allocation method for multiple mobile robots based on multi-agent deep reinforcement learning, which not only has the advantage of reinforcement learning in dealing with dynamic environments but can also solve task allocation problems with a large state space and high complexity by utilizing deep learning. First, a multi-agent framework based on a cooperative structure is proposed according to the characteristics of RMFS. Then, a multi-agent task allocation model is constructed based on a Markov Decision Process. In order to avoid inconsistent information among agents and improve the convergence speed of the traditional Deep Q-Network (DQN), an improved DQN algorithm based on a shared utilitarian selection mechanism and prioritized empirical sample sampling is proposed to solve the task allocation model. Simulation results show that the task allocation algorithm based on deep reinforcement learning is more efficient than one based on a market mechanism, and the convergence speed of the improved DQN algorithm is much faster than that of the original DQN algorithm.
45

Wang, Zijin, Yibin Wang, Ming Chen, Tingyu Yuan, Haibo Chen y Yingze Yang. "A Task Allocation Strategy based on Hungarian Algorithm in RoboCup Rescue Simulation". Journal of Physics: Conference Series 2456, n.º 1 (1 de marzo de 2023): 012009. http://dx.doi.org/10.1088/1742-6596/2456/1/012009.

In the multi-agent cooperation of the RoboCup Rescue Agent Simulation, the agent tasked with clearing roadblocks, one of the three agent types, plays a significant role. The clearing behavior helps other agents work promptly and provides highly efficient operation for the entire simulation system. However, considering the cost of all agents in a limited time, it is not easy to carry out flexible task allocation; the current strategy is therefore imperfect, which leads to low efficiency of rescue operations. It is thus necessary to ensure the high efficiency of the clearing behavior. In order to maximize the agents' resources, this article puts forward a task allocation strategy based on the Hungarian algorithm, which substantially enhances the agents' efficiency, as verified experimentally.
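The Hungarian algorithm solves the underlying assignment problem in O(n³) time; the brute-force reference below, with a hypothetical clearing-cost matrix, finds the same optimum for tiny instances (in practice, `scipy.optimize.linear_sum_assignment` provides a polynomial-time implementation).

```python
from itertools import permutations

def optimal_assignment(cost):
    """Minimum-cost one-to-one assignment of agents (rows) to tasks (columns).

    Brute force over permutations, usable only for small n; the Hungarian
    algorithm returns the same optimum in polynomial time.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return list(best_perm), best_cost

# Hypothetical costs: cost[i][j] = time for clearing agent i to reach blockade j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total_cost = optimal_assignment(cost)
```

Here `assignment[i]` is the blockade given to agent i, minimizing the total clearing time.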
46

XU, DIANXIANG, RICHARD A. VOLZ, THOMAS R. IOERGER y JOHN YEN. "MODELING AND ANALYZING MULTI-AGENT BEHAVIORS USING PREDICATE/TRANSITION NETS". International Journal of Software Engineering and Knowledge Engineering 13, n.º 01 (febrero de 2003): 103–24. http://dx.doi.org/10.1142/s0218194003001184.

How agents accomplish a goal task in a multi-agent system is usually specified by multi-agent plans built from basic actions (e.g. operators) of which the agents are capable. The plan specification provides the agents with a shared mental model for how they are supposed to collaborate with each other to achieve the common goal. Making sure that the plans are reliable and fit for the purpose for which they are designed is a critical problem with this approach. To address this problem, this paper presents a formal approach to modeling and analyzing multi-agent behaviors using Predicate/Transition (PrT) nets, a high-level formalism of Petri nets. We model a multi-agent problem by representing agent capabilities as transitions in PrT nets. To analyze a multi-agent PrT model, we adapt the planning graphs as a compact structure for reachability analysis, which is coherent with the concurrent semantics. We also demonstrate that one can analyze whether parallel actions specified in multi-agent plans can be executed in parallel and whether the plans can achieve the goal by analyzing the dependency relations among the transitions in the PrT model.
47

Vainshtain, David y Oren Salzman. "Multi-Agent Terraforming: Efficient Multi-Agent Path Finding via Environment Manipulation". Proceedings of the International Symposium on Combinatorial Search 12, n.º 1 (21 de julio de 2021): 239–41. http://dx.doi.org/10.1609/socs.v12i1.18596.

Planning collision-free paths for multiple agents operating in close proximity has a myriad of applications ranging from smart warehouses to route planning for airport taxiways. This problem, known as the Multi-Agent Path-Finding (MAPF) problem, is highly relevant to real-world applications in automation and robotics, and has attracted significant research in recent years. While in many applications, the robots are tasked with transporting objects and thus have the means to move obstacles, common formulations of the problem prohibit agents from moving obstacles en-route to a task. This often causes agents to take long detours to avoid obstacles instead of simply moving them to clear a path. In this work we present multi-agent terraforming, a novel extension of the MAPF problem that can exploit the fact that the system contains movable obstacles. We build upon leading MAPF solvers and propose an efficient method to solve the multi-agent terraforming problem in a manner that is both complete and optimal. We evaluate our method on scenarios inspired by smart warehouses (such as those of Amazon) and demonstrate that, compared to the classical MAPF formulation, the extra flexibility provided by terraforming facilitates a notable improvement of solution quality.
48

Liu, Chenlei y Zhixin Sun. "A Multi-Agent Reinforcement Learning-Based Task-Offloading Strategy in a Blockchain-Enabled Edge Computing Network". Mathematics 12, n.º 14 (19 de julio de 2024): 2264. http://dx.doi.org/10.3390/math12142264.

In recent years, many mobile edge computing network solutions have enhanced data privacy and security and built a trusted network mechanism by introducing blockchain technology. However, this also complicates the task-offloading problem of blockchain-enabled mobile edge computing, and traditional evolutionary learning and single-agent reinforcement learning algorithms are difficult to solve effectively. In this paper, we propose a blockchain-enabled mobile edge computing task-offloading strategy based on multi-agent reinforcement learning. First, we innovatively propose a blockchain-enabled mobile edge computing task-offloading model by comprehensively considering optimization objectives such as task execution energy consumption, processing delay, user privacy metrics, and blockchain incentive rewards. Then, we propose a deep reinforcement learning algorithm based on multiple agents sharing a global memory pool using the actor–critic architecture, which enables each agent to acquire the experience of another agent during the training process to enhance the collaborative capability among agents and overall performance. In addition, we adopt attenuatable Gaussian noise into the action space selection process in the actor network to avoid falling into the local optimum. Finally, experiments show that this scheme’s comprehensive cost calculation performance is enhanced by more than 10% compared with other multi-agent reinforcement learning algorithms. In addition, Gaussian random noise-based action space selection and a global memory pool improve the performance by 38.36% and 43.59%, respectively.
49

Arain, Zulfiqar Ali, Xuesong Qiu, Changqiao Xu, Mu Wang y Mussadiq Abdul Rahim. "Energy-Aware MPTCP Scheduling in Heterogeneous Wireless Networks Using Multi-Agent Deep Reinforcement Learning Techniques". Electronics 12, n.º 21 (1 de noviembre de 2023): 4496. http://dx.doi.org/10.3390/electronics12214496.

This paper proposes an energy-efficient scheduling scheme for multi-path TCP (MPTCP) in heterogeneous wireless networks, aiming to minimize energy consumption while ensuring low latency and high throughput. Each MPTCP sub-flow is controlled by an agent that cooperates with other agents using the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. This approach enables the agents to learn decentralized policies through centralized training and decentralized execution. The scheduling problem is modeled as a multi-agent decision-making task. The proposed energy-efficient scheduling scheme, referred to as EE-MADDPG, demonstrates significant energy savings while maintaining lower latency and higher throughput compared to other state-of-the-art scheduling techniques. By adopting a multi-agent deep reinforcement learning approach, the agents can learn efficient scheduling policies that optimize various performance metrics in heterogeneous wireless networks.
50

Zhang, Yuanhang, Hesheng Wang y Zhongqiang Ren. "A Short Summary of Multi-Agent Combinatorial Path Finding with Heterogeneous Task Duration (Extended Abstract)". Proceedings of the International Symposium on Combinatorial Search 17 (1 de junio de 2024): 301–2. http://dx.doi.org/10.1609/socs.v17i1.31591.

Multi-Agent Combinatorial Path Finding (MCPF) seeks collision-free paths for multiple agents from their initial locations to destinations, visiting a set of intermediate target locations in the middle of the paths, while minimizing the sum of arrival times. While a few approaches have been developed to handle MCPF, most of them simply direct the agent to visit the targets without considering the task duration, i.e., the amount of time needed for an agent to execute the task (such as picking an item) at a target location. MCPF is NP-hard to solve to optimality, and the inclusion of task duration further complicates the problem. To handle task duration, we develop two methods, where the first method post-processes the paths planned by any MCPF planner to include the task duration and has no solution optimality guarantee; and the second method considers task duration during planning and is able to ensure solution optimality. The numerical and simulation results show that our methods can handle up to 20 agents and 50 targets in the presence of task duration, and can execute the paths subject to robot motion disturbance.
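The first (post-processing) method can be sketched as a simple path rewrite: wherever an agent's planned path visits a target, the agent is held in place for that target's task duration. This is a hedged sketch — the vertex names and the one-task-per-visit assumption are illustrative.

```python
def add_task_durations(path, duration):
    """Stretch a planned path so the agent waits out each task's duration.

    path: sequence of vertices as planned by a duration-unaware MCPF solver.
    duration: target vertex -> extra timesteps needed to execute its task.
    Downstream vertices are simply delayed, so optimality is not preserved,
    matching the paper's note that post-processing has no optimality guarantee.
    """
    stretched = []
    for vertex in path:
        stretched.append(vertex)
        stretched.extend([vertex] * duration.get(vertex, 0))
    return stretched
```

After stretching, the per-agent paths would still need a collision re-check, since the added waits shift the timing of every later vertex.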
