
Journal articles on the topic 'Multiagent decision'



Consult the top 50 journal articles for your research on the topic 'Multiagent decision.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kumar, Akshat, Shlomo Zilberstein, and Marc Toussaint. "Probabilistic Inference Techniques for Scalable Multiagent Decision Making." Journal of Artificial Intelligence Research 53 (June 29, 2015): 223–70. http://dx.doi.org/10.1613/jair.4649.

Abstract:
Decentralized POMDPs provide an expressive framework for multiagent sequential decision making. However, the complexity of these models---NEXP-Complete even for two agents---has limited their scalability. We present a promising new class of approximation algorithms by developing novel connections between multiagent planning and machine learning. We show how the multiagent planning problem can be reformulated as inference in a mixture of dynamic Bayesian networks (DBNs). This planning-as-inference approach paves the way for the application of efficient inference techniques in DBNs to multiagent decision making. To further improve scalability, we identify certain conditions that are sufficient to extend the approach to multiagent systems with dozens of agents. Specifically, we show that the necessary inference within the expectation-maximization framework can be decomposed into processes that often involve a small subset of agents, thereby facilitating scalability. We further show that a number of existing multiagent planning models satisfy these conditions. Experiments on large planning benchmarks confirm the benefits of our approach in terms of runtime and scalability with respect to existing techniques.
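To give a flavour of the planning-as-inference idea, the sketch below runs an EM-style loop on a tiny one-shot, two-agent cooperative problem: rewards scaled into [0, 1] play the role of an observation likelihood, the E-step computes a reward-weighted posterior over joint actions, and the M-step sets each agent's policy to its marginal. This is a minimal illustration under assumptions of my own (a single decision stage and a random reward table), not the authors' Dec-POMDP algorithm.

```python
import numpy as np

# Minimal EM-style "planning as inference" sketch for a one-shot, two-agent
# cooperative problem. Illustrative only: the paper handles full Dec-POMDPs;
# the reward table and single decision stage here are assumptions.
rng = np.random.default_rng(0)
R = rng.random((3, 3))                      # joint reward for each (a1, a2) pair
R = (R - R.min()) / (R.max() - R.min())     # scale rewards to [0, 1] so they act like a likelihood

pi1 = np.full(3, 1.0 / 3.0)                 # agent 1's stochastic policy
pi2 = np.full(3, 1.0 / 3.0)                 # agent 2's stochastic policy

for _ in range(100):
    # E-step: posterior over joint actions given the "high reward" event,
    # p(a1, a2 | r) proportional to R[a1, a2] * pi1[a1] * pi2[a2].
    w = R * np.outer(pi1, pi2)
    w /= w.sum()
    # M-step: each agent's new policy is its marginal of that posterior.
    pi1, pi2 = w.sum(axis=1), w.sum(axis=0)

print("agent 1 policy:", np.round(pi1, 3))
print("agent 2 policy:", np.round(pi2, 3))
print("expected joint reward:", round(float(pi1 @ R @ pi2), 3))
```

In this toy setting the loop concentrates both policies on the joint action with the highest scaled reward, which is the behaviour the planning-as-inference reformulation exploits at scale.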
2

Han, Xiaoyu. "Application of Reinforcement Learning in Multiagent Intelligent Decision-Making." Computational Intelligence and Neuroscience 2022 (September 16, 2022): 1–6. http://dx.doi.org/10.1155/2022/8683616.

Abstract:
The combination of deep neural networks and reinforcement learning has received more and more attention in recent years, and the focus of reinforcement learning has slowly shifted from single-agent to multiagent settings. Regret minimization is a newer concept from game theory: in games where the Nash equilibrium is not the optimal solution, regret minimization performs better. Herein, we introduce regret minimization into multiagent reinforcement learning and propose a multiagent regret-minimization algorithm. The paper first introduces the Nash Q-learning algorithm, then brings regret minimization into multiagent reinforcement learning within the overall framework of Nash Q-learning, and finally verifies the effectiveness of the algorithm experimentally.
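As a rough illustration of the regret-minimization ingredient (not the paper's Nash Q-learning hybrid), the sketch below runs plain regret matching for one agent in a repeated 2x2 matrix game against a uniformly random opponent; the payoff matrix and the opponent model are assumptions made purely for the example.

```python
import numpy as np

# Plain regret matching for one agent in a repeated matrix game.
# Illustrative only: the payoff matrix and the uniformly random opponent are
# assumptions; the paper combines regret minimization with Nash Q-learning.
rng = np.random.default_rng(1)
payoff = np.array([[3.0, 0.0],              # my payoff for (my action, opponent action)
                   [5.0, 1.0]])
cum_regret = np.zeros(2)
strategy = np.full(2, 0.5)

for t in range(10000):
    positive = np.maximum(cum_regret, 0.0)
    strategy = positive / positive.sum() if positive.sum() > 0 else np.full(2, 0.5)
    my_action = rng.choice(2, p=strategy)
    opp_action = rng.integers(2)            # stand-in opponent playing uniformly at random
    utilities = payoff[:, opp_action]       # what each of my actions would have earned
    cum_regret += utilities - utilities[my_action]

print("regret-matching strategy:", np.round(strategy, 3))  # converges toward the dominant action
```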
3

Narayanan, Lakshmi Kanthan, Suresh Sankaranarayanan, Joel J. P. C. Rodrigues, and Pascal Lorenz. "Multi-Agent-Based Modeling for Underground Pipe Health and Water Quality Monitoring for Supplying Quality Water." International Journal of Intelligent Information Technologies 16, no. 3 (July 2020): 52–79. http://dx.doi.org/10.4018/ijiit.2020070103.

Abstract:
This article discusses distributed monitoring through the deployment of various multiagents in an IoT-Fog-based water distribution network (WDN). This ensures that the right amount of water is supplied to residents with respect to the forecasted demand. In addition, underground pipe health is monitored by a multiagent based on hydraulic parameters, so that the forecasted water is supplied with minimal losses, minimizing the operational and material cost involved in recovery or repair. Lastly, agents are deployed for leakage monitoring and anti-theft detection of water. The multiagents act upon various hydrological parameters, and the analysis is based on data acquired by the sensors deployed in the water distribution network, enabling partial automation of supply disconnection during extremely critical conditions.
4

Xiang, Yang, and Frank Hanshar. "Multiagent Decision Making in Collaborative Decision Networks by Utility Cluster Based Partial Evaluation." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 23, no. 02 (April 2015): 149–91. http://dx.doi.org/10.1142/s0218488515500075.

Abstract:
We consider optimal multiagent cooperative decision making in stochastic environments. The focus is on simultaneous decision making, during which agents cooperate through limited communication. We model the multiagent system as a collaborative decision network (CDN). Several techniques are developed to improve the efficiency of decision making with CDNs. We present an equivalent transformation of CDN subnets to facilitate model manipulation. We propose partial evaluation, which allows action profiles to be evaluated with reduced computation. We decompose a CDN subnet based on clustering of utility variables. A general simultaneous decision making algorithm suite is developed that embeds these techniques. We show that the new algorithm suite improves efficiency by a combination of a linear factor and an exponential factor.
5

Xiang, Yang, and Frank Hanshar. "Multiagent Expedition with Graphical Models." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.

Abstract:
We investigate a class of multiagent planning problems termed multiagent expedition, where agents move around an open, unknown, partially observable, stochastic, and physical environment, in pursuit of multiple and alternative goals of different utility. Optimal planning in multiagent expedition is highly intractable. We introduce the notion of conditional optimality, decompose the task into a set of semi-independent optimization subtasks, and apply a decision-theoretic multiagent graphical model to solve each subtask optimally. A set of techniques are proposed to enhance modeling so that the resultant graphical model can be practically evaluated. Effectiveness of the framework and its scalability are demonstrated through experiments. Multiagent expedition can be characterized as decentralized partially observable Markov decision processes (Dec-POMDPs). Hence, this work contributes towards practical planning in Dec-POMDPs.
6

Nunes, Ernesto, Julio Godoy, and Maria Gini. "Multiagent Decision Making on Transportation Networks." Journal of Information Processing 22, no. 2 (2014): 307–18. http://dx.doi.org/10.2197/ipsjjip.22.307.

7

Maturo, Antonio, and Aldo G. S. Ventre. "Reaching consensus in multiagent decision making." International Journal of Intelligent Systems 25, no. 3 (March 2010): 266–73. http://dx.doi.org/10.1002/int.20401.

8

He, Liu, Haoning Xi, Tangyi Guo, and Kun Tang. "A Generalized Dynamic Potential Energy Model for Multiagent Path Planning." Journal of Advanced Transportation 2020 (July 24, 2020): 1–14. http://dx.doi.org/10.1155/2020/1360491.

Abstract:
Path planning for the multiagent, which is generally based on the artificial potential energy field, reflects the decision-making process of pedestrian walking and has great importance on the field multiagent system. In this paper, after setting the spatial-temporal simulation environment with large cells and small time segments based on the disaggregation decision theory of the multiagent, we establish a generalized dynamic potential energy model (DPEM) for the multiagent through four steps: (1) construct the space energy field with the improved Dijkstra algorithm, and obtain the fitting functions to reflect the relationship between speed decline rate and space occupancy of the agent through empirical cross experiments. (2) Construct the delay potential energy field based on the judgement and psychological changes of the multiagent in the situations where the other pedestrians have occupied the bottleneck cell. (3) Construct the waiting potential energy field based on the characteristics of the multiagent, such as dissipation and enhancement. (4) Obtain the generalized dynamic potential energy field by superposing the space potential energy field, delay potential energy field, and waiting potential energy field all together. Moreover, a case study is conducted to verify the feasibility and effectiveness of the dynamic potential energy model. The results also indicate that each agent’s path planning decision such as forward, waiting, and detour in the multiagent system is related to their individual characters and environmental factors. Overall, this study could help improve the efficiency of pedestrian traffic, optimize the walking space, and improve the performance of pedestrians in the multiagent system.
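A compressed sketch of the superposition idea is given below: the space field is the Dijkstra distance to the goal on a grid, assumed placeholder delay and waiting fields are added with made-up weights, and an agent greedily steps toward the lowest-potential neighbour. The grid, weights, and field definitions are illustrative assumptions, not the calibrated DPEM of the paper.

```python
import heapq
import numpy as np

# Minimal sketch of a superposed potential field on a 4-connected grid
# (illustrative only; weights and field definitions below are assumptions,
# not the paper's calibrated model).

def space_potential(grid, goal):
    """Shortest-path distance to the goal via Dijkstra (the 'space' field)."""
    rows, cols = grid.shape
    dist = np.full(grid.shape, np.inf)
    dist[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == 0:
                nd = d + 1.0
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

grid = np.zeros((10, 10), dtype=int)          # 0 = walkable, 1 = obstacle
grid[4, 2:8] = 1
space = space_potential(grid, goal=(9, 9))
delay = np.zeros_like(space)                  # placeholder delay field (occupied bottleneck)
delay[5, 7] = 4.0
waiting = np.zeros_like(space)                # placeholder waiting field
total = space + 0.5 * delay + 0.2 * waiting   # superposed generalized potential

# An agent at (0, 0) greedily steps toward the lowest-potential neighbour.
r, c = 0, 0
neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
              if 0 <= r + dr < 10 and 0 <= c + dc < 10 and grid[r + dr, c + dc] == 0]
print("next cell:", min(neighbours, key=lambda p: total[p]))
```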
9

Xu, Yang, Xiang Li, and Ming Liu. "Modeling and Simulation of Complex Network Attributes on Coordinating Large Multiagent System." Scientific World Journal 2014 (2014): 1–15. http://dx.doi.org/10.1155/2014/412479.

Abstract:
With the expansion of distributed multiagent systems, traditional coordination strategies become a severe bottleneck when the system scales up to hundreds of agents. The key challenge is that, in typical large multiagent systems, sparsely distributed agents can only communicate directly with very few others, and the network is typically modeled as an adaptive complex network. In this paper, we present the simulation testbed CoordSim, built to model the coordination of network-centric multiagent systems. Based on the token-based strategy, coordination can be cast as a communication decision problem in which agents decide how to target communications and pass them over to the capable agents who will potentially benefit the team most. We theoretically analyze how the characteristics of the complex network make a significant difference under both random and intelligent coordination strategies, which may contribute to future multiagent algorithm design.
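The token-based strategy can be pictured with the small sketch below, in which a task token is greedily forwarded to the most capable neighbour on a scale-free network. The network generator, the random capability scores, and the greedy forwarding rule are assumptions made for illustration, not the CoordSim strategies themselves.

```python
import random
import networkx as nx

# Minimal sketch of token-based coordination on a scale-free network
# (illustrative only; capability scores and the routing rule are assumptions,
# not the CoordSim testbed's strategies).
random.seed(4)
net = nx.barabasi_albert_graph(n=100, m=2, seed=4)             # stand-in adaptive complex network
capability = {agent: random.random() for agent in net.nodes}   # how well each agent handles the task

def route_token(start, hops=10):
    """Greedily pass the task token toward more capable neighbours."""
    holder, path = start, [start]
    for _ in range(hops):
        best = max(net.neighbors(holder), key=capability.get)
        if capability[best] <= capability[holder]:
            break                                              # local optimum: keep the task here
        holder = best
        path.append(holder)
    return path

print("token path:", route_token(start=0))
```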
10

Szymak, Piotr. "Comparison of Centralized, Dispersed and Hybrid Multiagent Control Systems of Underwater Vehicles Team." Solid State Phenomena 180 (November 2011): 114–21. http://dx.doi.org/10.4028/www.scientific.net/ssp.180.114.

Abstract:
Multiagent systems controlling robots can have different structures, depending on the way decisions are generated in these systems. Decisions can be worked out in a centralized, decentralized, or even hybrid way (a hybrid system combines both centralized and decentralized systems). In the case of controlling a team of underwater vehicles, it is important to examine different structures of multiagent systems in order to choose the best one for a defined underwater task. In the paper, results of the operation of three different structures (centralized, dispersed-decentralized, and hybrid) of multiagent control systems for an underwater vehicle team are presented. The systems were tested on a predator-prey problem, in which a team of three underwater vehicles had to catch another underwater robot escaping at a higher velocity.
11

Pelta, David A., and Ronald R. Yager. "Analyzing the Robustness of Decision Strategies in Multiagent Decision Making." Group Decision and Negotiation 23, no. 6 (October 20, 2013): 1403–16. http://dx.doi.org/10.1007/s10726-013-9376-0.

12

Huang, Zishan. "UAV Intelligent Control Based on Machine Vision and Multiagent Decision-Making." Advances in Multimedia 2022 (May 27, 2022): 1–11. http://dx.doi.org/10.1155/2022/8908122.

Abstract:
In order to improve the effect of UAV intelligent control, this paper will improve machine vision technology. Moreover, this paper adds scale information on the basis of the LSD algorithm, uses the multiline segment standard to merge these candidate line segments for intelligent recognition, and uses the LSD detection algorithm to improve the operating efficiency of the UAV control system and reduce the computational complexity. In addition, this paper combines machine vision technology and multiagent decision-making technology for UAV intelligent control and builds an intelligent control system, which uses intelligent machine vision technology for recognition and multiagent decision-making technology for motion control. The research results show that the UAV intelligent control system based on machine vision and multiagent decision-making proposed in this paper can achieve reliable control of UAVs and improve the work efficiency of UAVs.
13

Singh, Arambam James, Duc Thien Nguyen, Akshat Kumar, and Hoong Chuin Lau. "Multiagent Decision Making For Maritime Traffic Management." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6171–78. http://dx.doi.org/10.1609/aaai.v33i01.33016171.

Abstract:
We address the problem of maritime traffic management in busy waterways to increase the safety of navigation by reducing congestion. We model maritime traffic as a large multiagent system with individual vessels as agents and the VTS authority as the regulatory agent. We develop a maritime traffic simulator based on historical traffic data that incorporates realistic domain constraints such as uncertain and asynchronous movement of vessels. We also develop a traffic coordination approach that provides speed recommendations to vessels in different zones. We exploit the nature of collective interactions among agents to develop a scalable policy gradient approach that can scale up to real-world problems. Empirical results on synthetic and real-world problems show that our approach can significantly reduce congestion while keeping the traffic throughput high.
14

Sokolowski, John A. "Enhanced Decision Modeling Using Multiagent System Simulation." SIMULATION 79, no. 4 (April 2003): 232–42. http://dx.doi.org/10.1177/0037549703038886.

15

Gray, Rebecca, Alessio Franci, Vaibhav Srivastava, and Naomi Ehrich Leonard. "Multiagent Decision-Making Dynamics Inspired by Honeybees." IEEE Transactions on Control of Network Systems 5, no. 2 (June 2018): 793–806. http://dx.doi.org/10.1109/tcns.2018.2796301.

16

Petit, Claude, and François-Xavier Magaud. "Multiagent meta-model for strategic decision support." Knowledge-Based Systems 19, no. 3 (July 2006): 202–11. http://dx.doi.org/10.1016/j.knosys.2005.11.009.

17

Pechoucek, M., J. Vokrinek, and P. Becvar. "ExPlanTech: Multiagent Support for Manufacturing Decision Making." IEEE Intelligent Systems 20, no. 1 (January 2005): 67–74. http://dx.doi.org/10.1109/mis.2005.6.

18

Rizk, Yara, Mariette Awad, and Edward W. Tunstel. "Decision Making in Multiagent Systems: A Survey." IEEE Transactions on Cognitive and Developmental Systems 10, no. 3 (September 2018): 514–29. http://dx.doi.org/10.1109/tcds.2018.2840971.

19

Zeng, Yifeng, and Kim-leng Poh. "Symbolic verification of multiagent graphical decision models." International Journal of Intelligent Systems 23, no. 11 (November 2008): 1177–95. http://dx.doi.org/10.1002/int.20313.

20

González-Briones, Alfonso, José A. Castellanos-Garzón, Yeray Mezquita Martín, Javier Prieto, and Juan M. Corchado. "A Framework for Knowledge Discovery from Wireless Sensor Networks in Rural Environments: A Crop Irrigation Systems Case Study." Wireless Communications and Mobile Computing 2018 (July 17, 2018): 1–14. http://dx.doi.org/10.1155/2018/6089280.

Abstract:
This paper presents the design and development of an innovative multiagent system based on virtual organizations. The multiagent system manages information from wireless sensor networks for knowledge discovery and decision making in rural environments. The multiagent system has been built over the cloud computing paradigm to provide better flexibility and higher scalability for handling both small- and large-scale projects. The development of wireless sensor network technology has allowed for its extension and application to the rural environment, where the lives of the people interacting with the environment can be improved. The use of “smart” technologies can also improve the efficiency and effectiveness of rural systems. The proposed multiagent system allows us to analyse data collected by sensors for decision making in activities carried out in a rural setting, thus, guaranteeing the best performance in the ecosystem. Since water is a scarce natural resource that should not be wasted, a case study was conducted in an agricultural environment to test the proposed system’s performance in optimizing the irrigation system in corn crops. The architecture collects information about the terrain and the climatic conditions through a wireless sensor network deployed in the crops. This way, the architecture can learn about the needs of the crop and make efficient irrigation decisions. The obtained results are very promising when compared to a traditional automatic irrigation system.
21

Xiao, Liang, and Des Greer. "Linked Argumentation Graphs for Multidisciplinary Decision Support." Healthcare 11, no. 4 (February 15, 2023): 585. http://dx.doi.org/10.3390/healthcare11040585.

Abstract:
Multidisciplinary clinical decision-making has become increasingly important for complex diseases, such as cancers, as medicine has become very specialized. Multiagent systems (MASs) provide a suitable framework to support multidisciplinary decisions. In the past years, a number of agent-oriented approaches have been developed on the basis of argumentation models. However, very limited work has focused, thus far, on systematic support for argumentation in communication among multiple agents spanning various decision sites and holding varying beliefs. There is a need for an appropriate argumentation scheme and identification of recurring styles or patterns of multiagent argument linking to enable versatile multidisciplinary decision applications. We propose, in this paper, a method of linked argumentation graphs and three types of patterns corresponding to scenarios of agents changing the minds of others (argumentation) and their own (belief revision): the collaboration pattern, the negotiation pattern, and the persuasion pattern. This approach is demonstrated using a case study of breast cancer and lifelong recommendations, as the survival rates of diagnosed cancer patients are rising and comorbidity is the norm.
22

Li, Zhuo, Xu Zhou, Filip De Turck, Taixin Li, Yongmao Ren, and Yifang Qin. "Feudal Multiagent Reinforcement Learning for Interdomain Collaborative Routing Optimization." Wireless Communications and Mobile Computing 2022 (March 27, 2022): 1–11. http://dx.doi.org/10.1155/2022/1231979.

Abstract:
In view of the inability of traditional interdomain routing schemes to handle sudden network changes and adapt the routing policy accordingly, many optimization schemes, such as modifying Border Gateway Protocol (BGP) parameters and using software-defined networking (SDN) to optimize interdomain routing decisions, have been proposed. However, with the changing and increasing demand for network data transmission, the high latency and limited flexibility of these mechanisms have become increasingly prominent. Recent research has addressed these challenges through multiagent reinforcement learning (MARL), which is capable of dynamically meeting interdomain requirements, and the multiagent Markov Decision Process (MDP) is introduced to formulate this routing optimization problem. Thus, in this paper, an interdomain collaborative routing scheme is proposed within an interdomain collaborative architecture. The proposed Feudal Multiagent Actor-Critic (FMAAC) algorithm is designed based on multiagent actor-critic and feudal reinforcement learning to solve this competitive-cooperative problem. Our multiagent system learns optimal interdomain routing decisions focused on different optimization objectives such as end-to-end delay, throughput, and average delivery rate. Experiments were carried out in an interdomain testbed to verify the convergence and effectiveness of the FMAAC algorithm. Experimental results show that our approach can significantly improve various Quality of Service (QoS) indicators, including reduced end-to-end delay, increased throughput, and a guaranteed average delivery rate of over 90%.
23

Chen, Hongjing, Chunhua Hu, and Zhi Huang. "Optimal Control of Multiagent Decision-Making Based on Competence Evolution." Discrete Dynamics in Nature and Society 2023 (May 29, 2023): 1–22. http://dx.doi.org/10.1155/2023/2179376.

Abstract:
We employ the theory of rarefied gas dynamics and optimal control to investigate the kinetic model of decision-making. The novelty of this paper is that we develop a kinetic model that takes into account both the influence of agents’ competence and managers’ control on decision-making. After each interaction, in addition to the changes in decision directly caused by communication with other agents, the agents’ competence evolves and indirectly influences the degree of decision adjustment through the compromise function. By adding a control term to the model, the behavior of the managers who require the group to establish consensus is also described, and the concrete expression of the control term that minimizes the cost function is obtained by model predictive control. The Boltzmann equation is constructed to characterize the evolution of the density distribution of agents, and the main properties are discussed. The corresponding Fokker–Planck equation is derived by utilizing the asymptotic technique. Lastly, the direct simulation of the Monte Carlo method is used to simulate the evolution of decisions. The results indicate that the agents’ competence and managers’ control facilitate the consistency of collective decisions.
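A stripped-down Monte Carlo version of such pairwise interaction dynamics is sketched below: two agents are drawn at random, each moves toward the other's decision by an amount that shrinks with its competence, and a weak control term nudges both toward a target consensus. The compromise rate, the competence coupling, and the control gain are assumptions made for illustration, not the paper's Boltzmann-type model or its model-predictive control law.

```python
import numpy as np

# Minimal Monte Carlo sketch of pairwise compromise dynamics
# (illustrative only; the compromise rate, competence coupling, and control
# term below are assumptions, not the paper's calibrated kinetic model).
rng = np.random.default_rng(2)
n_agents = 200
decision = rng.uniform(-1.0, 1.0, n_agents)     # each agent's current decision in [-1, 1]
competence = rng.uniform(0.2, 1.0, n_agents)    # higher competence -> smaller adjustment
target = 0.0                                     # consensus value the "manager" steers toward

for step in range(20000):
    i, j = rng.choice(n_agents, size=2, replace=False)
    gap = decision[j] - decision[i]
    # Compromise: less competent agents move further toward the other opinion.
    decision[i] += 0.3 * (1.0 - competence[i]) * gap
    decision[j] -= 0.3 * (1.0 - competence[j]) * gap
    # Weak manager control nudging both interacting agents toward the target.
    decision[i] += 0.01 * (target - decision[i])
    decision[j] += 0.01 * (target - decision[j])

print("mean decision:", round(decision.mean(), 3), "spread:", round(decision.std(), 3))
```

Running the loop shows the spread of decisions shrinking toward the target, which mirrors the qualitative consensus behaviour the kinetic model analyses rigorously.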
24

Pan, Yinghui, Jing Tang, Biyang Ma, Yifeng Zeng, and Zhong Ming. "Toward data-driven solutions to interactive dynamic influence diagrams." Knowledge and Information Systems 63, no. 9 (August 8, 2021): 2431–53. http://dx.doi.org/10.1007/s10115-021-01600-5.

Abstract:
With the availability of a significant amount of data, data-driven decision making becomes an alternative way of solving complex multiagent decision problems. Instead of using domain knowledge to explicitly build decision models, the data-driven approach learns decisions (probably optimal ones) from available data. This removes the knowledge bottleneck in traditional knowledge-driven decision making, which requires strong support from domain experts. In this paper, we study data-driven decision making in the context of interactive dynamic influence diagrams (I-DIDs), a general framework for multiagent sequential decision making under uncertainty. We propose a data-driven framework to solve the I-DID model and focus on learning the behavior of other agents in problem domains. The challenge lies in learning, from limited data, a complete policy tree to be embedded in the I-DID models. We propose two new methods to develop complete policy trees for the other agents in the I-DIDs. The first method uses a simple clustering process, while the second employs sophisticated statistical checks. We analyze the proposed algorithms theoretically and experiment with them over two problem domains.
25

Wei, Xiaojuan, Meng Jia, and Mengke Geng. "A Multiagent Cooperative Decision-Making Method for Adaptive Intersection Complexity Based on Hierarchical RL." Wireless Communications and Mobile Computing 2022 (October 19, 2022): 1–10. http://dx.doi.org/10.1155/2022/9329186.

Abstract:
In this paper, we propose a multiagent collaboration decision-making method for adaptive intersection complexity based on hierarchical reinforcement learning—H-CommNet, which uses a two-level structure for collaboration: the upper-level policy network fuses information from all agents and learns how to set a subtask for each agent, and the lower-level policy network relies on the local observation of the agent to control the action targets of the agents from each subtask in the upper layer. H-CommNet allows multiagents to complete collaboration on different time scales, and the scale is controllable. It also uses the computational intelligence of invehicle intelligence and edge nodes to achieve joint optimization of computing resources and communication resources. Through the simulation experiments in the intersection environment without traffic lights, the experimental results show that H-CommNet can achieve better results than baseline in different complexity scenarios when using as few resources as possible, and the scalability, flexibility, and control effects have been improved.
26

Yang, Yanhua, and Ligang Yao. "Optimization Method of Power Equipment Maintenance Plan Decision-Making Based on Deep Reinforcement Learning." Mathematical Problems in Engineering 2021 (March 15, 2021): 1–8. http://dx.doi.org/10.1155/2021/9372803.

Abstract:
The safe and reliable operation of power grid equipment is the basis for ensuring the safe operation of the power system. At present, the traditional periodical maintenance has exposed the abuses such as deficient maintenance and excess maintenance. Based on a multiagent deep reinforcement learning decision-making optimization algorithm, a method for decision-making and optimization of power grid equipment maintenance plans is proposed. In this paper, an optimization model of power grid equipment maintenance plan that takes into account the reliability and economics of power grid operation is constructed with maintenance constraints and power grid safety constraints as its constraints. The deep distributed recurrent Q-networks multiagent deep reinforcement learning is adopted to solve the optimization model. The deep distributed recurrent Q-networks multiagent deep reinforcement learning uses the high-dimensional feature extraction capabilities of deep learning and decision-making capabilities of reinforcement learning to solve the multiobjective decision-making problem of power grid maintenance planning. Through case analysis, the comparative results show that the proposed algorithm has better optimization and decision-making ability, as well as lower maintenance cost. Accordingly, the algorithm can realize the optimal decision of power grid equipment maintenance plan. The expected value of power shortage and maintenance cost obtained by the proposed method is 71.75 MW·h and 496,000 yuan.
27

Pynadath, D. V., and M. Tambe. "The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models." Journal of Artificial Intelligence Research 16 (June 1, 2002): 389–423. http://dx.doi.org/10.1613/jair.1024.

Abstract:
Despite the significant progress in multiagent teamwork, existing research does not address the optimality of its prescriptions nor the complexity of the teamwork problem. Without a characterization of the optimality-complexity tradeoffs, it is impossible to determine whether the assumptions and approximations made by a particular theory gain enough efficiency to justify the losses in overall performance. To provide a tool for use by multiagent researchers in evaluating this tradeoff, we present a unified framework, the COMmunicative Multiagent Team Decision Problem (COM-MTDP). The COM-MTDP model combines and extends existing multiagent theories, such as decentralized partially observable Markov decision processes and economic team theory. In addition to their generality of representation, COM-MTDPs also support the analysis of both the optimality of team performance and the computational complexity of the agents' decision problem. In analyzing complexity, we present a breakdown of the computational complexity of constructing optimal teams under various classes of problem domains, along the dimensions of observability and communication cost. In analyzing optimality, we exploit the COM-MTDP's ability to encode existing teamwork theories and models to encode two instantiations of joint intentions theory taken from the literature. Furthermore, the COM-MTDP model provides a basis for the development of novel team coordination algorithms. We derive a domain-independent criterion for optimal communication and provide a comparative analysis of the two joint intentions instantiations with respect to this optimal policy. We have implemented a reusable, domain-independent software package based on COM-MTDPs to analyze teamwork coordination strategies, and we demonstrate its use by encoding and evaluating the two joint intentions strategies within an example domain.
28

Brito, Robison Cris, Cesar Augusto Tacla, and Lúcia Valéria Ramos de Arruda. "A multiagent simulator for supporting logistic decisions of unloading petroleum ships in harbors." Pesquisa Operacional 30, no. 3 (December 2010): 729–50. http://dx.doi.org/10.1590/s0101-74382010000300012.

Abstract:
This work presents and evaluates the performance of a simulation model based on multiagent system technology to support logistic decisions in a harbor of the oil supply chain. The main decisions concern pier allocation, oil discharge, storage tank management, and refinery supply by a pipeline. The real elements, such as ships, piers, pipelines, and refineries, are modeled as agents, and they negotiate by auctions to move oil through the system. The simulation results are compared with results obtained with an optimization mathematical model based on mixed integer linear programming (MILP). Both models are able to find optimal or near-optimal solutions depending on the problem size. In problems with many elements, the multiagent model can find solutions in seconds, while the MILP model requires very high computational time to find the optimal solution and, in some situations, runs out of memory. Test scenarios demonstrate the usefulness of the multiagent-based simulator in supporting decisions concerning logistics in harbors.
29

Rao, Ning, Hua Xu, Yue Zhang, Dan Wang, Lei Jiang, and Xiang Peng. "Joint Optimization of Jamming Link and Power Control in Communication Countermeasures: A Multiagent Deep Reinforcement Learning Approach." Wireless Communications and Mobile Computing 2022 (December 29, 2022): 1–18. http://dx.doi.org/10.1155/2022/7962686.

Abstract:
Due to the nonconvexity feature of optimal controlling such as jamming link selection and jamming power allocation issues, obtaining the optimal resource allocation strategy in communication countermeasures scenarios is challenging. Thus, we propose a novel decentralized jamming resource allocation algorithm based on multiagent deep reinforcement learning (MADRL) to improve the efficiency of jamming resource allocation in battlefield communication countermeasures. We first model the communication jamming resource allocation problem as a fully cooperative multiagent task, considering the cooperative interrelationship of jamming equipment (JE). Then, to alleviate the nonstationarity feature and high decision dimensions in the multiagent system, we introduce a centralized training with decentralized execution framework (CTDE), which means all JEs are trained with global information and rely on their local observations only while making decisions. Each JE obtains a decentralized policy after the training process. Subsequently, we develop the multiagent soft actor-critic (MASAC) algorithm to enhance the exploration capability of agents and accelerate the learning of cooperative policies among agents by leveraging the maximum policy entropy criterion. Finally, the simulation results are presented to demonstrate that the proposed MASAC algorithm outperforms the existing centralized allocation benchmark algorithms.
30

Tsui, Kwok Ching, and Jiming Liu. "An Evolutionary Multiagent Diffusion Approach to Optimization." International Journal of Pattern Recognition and Artificial Intelligence 16, no. 06 (September 2002): 715–33. http://dx.doi.org/10.1142/s0218001402001940.

Abstract:
This article proposes a novel multiagent approach to optimization inspired by diffusion in nature called Evolutionary Multiagent Diffusion (EMD). Each agent in EMD makes the decision to diffuse based on the information shared between its parent and its siblings. The behavior of EMD is analyzed and its relation to similar search algorithms is discussed.
31

Ponnambalam, S. G., Mukund Nilakantan Janardhanan, and G. Rishwaraj. "Trust-based decision-making framework for multiagent system." Soft Computing 25, no. 11 (March 20, 2021): 7559–75. http://dx.doi.org/10.1007/s00500-021-05715-3.

32

Vemuri, Ratna kumari, Chinni Bala Vijaya Durga, Syed Abuthahir Syed Ibrahim, Nagaraju Arumalla, Senthilvadivu Subramanian, and Lakshmi Bhukya. "Intelligent-of-things multiagent system for smart home energy monitoring." Indonesian Journal of Electrical Engineering and Computer Science 34, no. 3 (June 1, 2024): 1858. http://dx.doi.org/10.11591/ijeecs.v34.i3.pp1858-1867.

Abstract:
The proliferation of IoT devices has ushered in a new era of smart homes, where efficient energy management is a paramount concern. Multiagent artificial intelligence-of-things (MAIoT) has emerged as a promising approach to address the complex challenges of smart home energy management. This research study examines MAIoT's components, functioning, benefits, and drawbacks. MAIoT systems improve energy efficiency and user comfort by combining multiagent systems and IoT devices. However, privacy, security, interoperability, scalability, and user acceptability must be addressed. As technology advances, MAIoT in smart home energy management will offer more sophisticated and adaptable solutions to cut energy consumption and promote sustainability. This article describes how energy status and internal pricing signals affect group intelligent decision making and the interaction dynamics between consumers or decision makers. In a multiagent configuration based on the new concept of artificial intelligence-of-things, this intelligent home energy management challenge is simulated and illustrated using software and hardware. Based on sufficient experimental simulations, this paper suggested that residential clients can significantly improve their economic benefit and decision-making efficiency.
33

Ling, Jiajing, Kushagra Chandak, and Akshat Kumar. "Integrating Knowledge Compilation with Reinforcement Learning for Routes." Proceedings of the International Conference on Automated Planning and Scheduling 31 (May 17, 2021): 542–50. http://dx.doi.org/10.1609/icaps.v31i1.16002.

Abstract:
Sequential multiagent decision-making under partial observability and uncertainty poses several challenges. Although multiagent reinforcement learning (MARL) approaches have increased the scalability, addressing combinatorial domains is still challenging as random exploration by agents is unlikely to generate useful reward signals. We address cooperative multiagent pathfinding under uncertainty and partial observability where agents move from their respective sources to destinations while also satisfying constraints (e.g., visiting landmarks). Our main contributions include: (1) compiling domain knowledge such as underlying graph connectivity and domain constraints into propositional logic based decision diagrams, (2) developing modular techniques to integrate such knowledge with deep MARL algorithms, and (3) developing fast algorithms to query the compiled knowledge for accelerated episode simulation in RL. Empirically, our approach can tractably represent various types of domain constraints, and outperforms previous MARL approaches significantly both in terms of sample complexity and solution quality on a number of instances.
34

Oliehoek, Frans, Stefan Witwicki, and Leslie Kaelbling. "A Sufficient Statistic for Influence in Structured Multiagent Environments." Journal of Artificial Intelligence Research 70 (February 24, 2021): 789–870. http://dx.doi.org/10.1613/jair.1.12136.

Abstract:
Making decisions in complex environments is a key challenge in artificial intelligence (AI). Situations involving multiple decision makers are particularly complex, leading to computational intractability of principled solution methods. A body of work in AI has tried to mitigate this problem by trying to distill interaction to its essence: how does the policy of one agent influence another agent? If we can find more compact representations of such influence, this can help us deal with the complexity, for instance by searching the space of influences rather than the space of policies. However, so far these notions of influence have been restricted in their applicability to special cases of interaction. In this paper we formalize influence-based abstraction (IBA), which facilitates the elimination of latent state factors without any loss in value, for a very general class of problems described as factored partially observable stochastic games (fPOSGs). On the one hand, this generalizes existing descriptions of influence, and thus can serve as the foundation for improvements in scalability and other insights in decision making in complex multiagent settings. On the other hand, since the presence of other agents can be seen as a generalization of single agent settings, our formulation of IBA also provides a sufficient statistic for decision making under abstraction for a single agent. We also give a detailed discussion of the relations to such previous works, identifying new insights and interpretations of these approaches. In these ways, this paper deepens our understanding of abstraction in a wide range of sequential decision making settings, providing the basis for new approaches and algorithms for a large class of problems.
35

Jiao, Peng, Kai Xu, Shiguang Yue, Xiangyu Wei, and Lin Sun. "A Decentralized Partially Observable Markov Decision Model with Action Duration for Goal Recognition in Real Time Strategy Games." Discrete Dynamics in Nature and Society 2017 (2017): 1–15. http://dx.doi.org/10.1155/2017/4580206.

Abstract:
Multiagent goal recognition is a tough yet important problem in many real time strategy games or simulation systems. Traditional modeling methods either are in great demand of detailed agents’ domain knowledge and training dataset for policy estimation or lack a clear definition of action duration. To solve the above problems, we propose a novel Dec-POMDM-T model, combining the classic Dec-POMDP, an observation model for the recognizer, a joint goal with its termination indicator, and time duration variables for actions with action termination variables. In this paper, a model-free algorithm named cooperative colearning based on Sarsa is used. Considering that Dec-POMDM-T usually encounters multiagent goal recognition problems with different sorts of noises, partially missing data, and unknown action durations, the paper exploits the SIS PF with resampling for inference under the dynamic Bayesian network structure of Dec-POMDM-T. In experiments, a modified predator-prey scenario is adopted to study the multiagent joint goal recognition problem, which is the recognition of the joint target shared among cooperative predators. Experiment results show that (a) Dec-POMDM-T works effectively in multiagent goal recognition and adapts well to dynamically changing goals within the agent group; (b) Dec-POMDM-T outperforms traditional Dec-MDP-based methods in terms of precision, recall, and F-measure.
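The inference step can be pictured with the toy sequential importance sampling filter below, which maintains particles over a discrete hidden goal, reweights them with a made-up observation model, and resamples when the effective sample size collapses. It is a schematic stand-in under assumed observation probabilities, not the paper's Dec-POMDM-T implementation.

```python
import numpy as np

# Minimal sketch of SIS particle filtering with resampling over a discrete
# hidden goal (illustrative only; the observation model here is a hypothetical
# stand-in for the paper's Dec-POMDM-T inference).
rng = np.random.default_rng(3)
goals = np.array([0, 1, 2])
true_goal = 2
n_particles = 500
particles = rng.choice(goals, size=n_particles)          # hypotheses about the pursued goal
weights = np.full(n_particles, 1.0 / n_particles)

def obs_likelihood(obs, goal):
    # Hypothetical sensor: observations tend to match the pursued goal.
    return 0.7 if obs == goal else 0.15

for t in range(20):
    obs = true_goal if rng.random() < 0.7 else rng.choice(goals)
    weights *= np.array([obs_likelihood(obs, g) for g in particles])
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

posterior = [float(np.sum(weights[particles == g])) for g in goals]
print("posterior over goals:", np.round(posterior, 3))
```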
36

Kovařík, Vojtěch, Martin Schmid, Neil Burch, Michael Bowling, and Viliam Lisý. "Rethinking formal models of partially observable multiagent decision making." Artificial Intelligence 303 (February 2022): 103645. http://dx.doi.org/10.1016/j.artint.2021.103645.

37

Person, Patrick, Hadhoum Boukachour, Michel Coletta, Thierry Galinho, and Frédéric Serin. "Data representation layer in a MultiAgent decision support system." Multiagent and Grid Systems 2, no. 3 (September 14, 2006): 223–35. http://dx.doi.org/10.3233/mgs-2006-2302.

38

Dubernet, Thibaut, and Kay W. Axhausen. "Including joint decision mechanisms in a multiagent transport simulation." Transportation Letters 5, no. 4 (October 2013): 175–83. http://dx.doi.org/10.1179/1942787513y.0000000002.

39

Qinghe, Hu, Arun Kumar, and Zhang Shuang. "A bidding decision model in multiagent supply chain planning." International Journal of Production Research 39, no. 15 (January 2001): 3291–301. http://dx.doi.org/10.1080/00207540110060860.

40

Pelta, David A., and Ronald R. Yager. "Decision Strategies in Mediated Multiagent Negotiations: An Optimization Approach." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 40, no. 3 (May 2010): 635–40. http://dx.doi.org/10.1109/tsmca.2009.2036932.

41

Jiang, Yichuan, Jing Hu, and Donghui Lin. "Decision Making of Networked Multiagent Systems for Interaction Structures." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 41, no. 6 (November 2011): 1107–21. http://dx.doi.org/10.1109/tsmca.2011.2114343.

42

Becvar, Petr, Lubo Smidl, and Josef Psutka. "An Intelligent Telephony Interface of Multiagent Decision Support Systems." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 37, no. 4 (July 2007): 553–60. http://dx.doi.org/10.1109/tsmcc.2007.897335.

43

Vidhate, Deepak A., and Parag Kulkarni. "Implementation of Multiagent Learning Algorithms for Improved Decision Making." International Journal of Computer Trends and Technology 35, no. 2 (May 25, 2016): 60–66. http://dx.doi.org/10.14445/22312803/ijctt-v35p111.

44

Dahong, Tang, and Chen Ting. "Multi-criteria Decision Making Problems with Bi-level Multiagent." IFAC Proceedings Volumes 22, no. 10 (August 1989): 275–79. http://dx.doi.org/10.1016/s1474-6670(17)53185-7.

45

Sujit, P. B., and Debasish Ghose. "Self Assessment-Based Decision Making for Multiagent Cooperative Search." IEEE Transactions on Automation Science and Engineering 8, no. 4 (October 2011): 705–19. http://dx.doi.org/10.1109/tase.2011.2155058.

46

Kuroda, Tatsuaki. "A power index for multistage and multiagent decision systems." Behavioral Science 38, no. 4 (1993): 255–72. http://dx.doi.org/10.1002/bs.3830380403.

47

Shi, Jiawei, and Yan Zhou. "Group Decision Making for Product Innovation Based on PZB Model in Fuzzy Environment: A Case from New-Energy Storage Innovation Design." Mathematics 10, no. 19 (October 4, 2022): 3634. http://dx.doi.org/10.3390/math10193634.

Abstract:
According to the World Economic Forum, countries and regions should steer their energy systems toward cheaper, safer, and more sustainable energy sources, and move away from their reliance on traditional energy sources. With this trend, it is important that new-energy battery enterprises not only maintain their current installed product, but also attract more consumers. Due to the differences in customers, there are different requirements for the products. Thus, this paper chooses new-energy storage product innovation design as the object, and proposes a novel multiagent group decision-making method based on QFD and PZB models in a fuzzy environment. Firstly, extensively collected multiagent (consumer and designer) requirements are transformed into specific functions through an extended multiagent QFD with HFLTS, and the relationship coefficients are derived. Afterward, different design schemes for functional components are evaluated according to the concept of the PZB model. Then, the satisfaction degree interval is calculated for each partial design. On the basis of these indicators, a multiagent multi-objective optimization model is established. Afterward, solving the model through NSGA-II quickly generates the most suitable product innovation design scheme. Lastly, the feasibility and superiority of the proposed method are illustrated through innovation design for a new-energy storage battery.
48

Radu, Valentin, Catalin Dumitrescu, Emilia Vasile, Alina Iuliana Tăbîrcă, Maria Cristina Stefan, Liliana Manea, and Florin Radu. "Modeling and Prediction of Sustainable Urban Mobility Using Game Theory Multiagent and the Golden Template Algorithm." Electronics 12, no. 6 (March 8, 2023): 1288. http://dx.doi.org/10.3390/electronics12061288.

Abstract:
The current development of multimodal transport networks focuses on the realization of intelligent transport systems (ITS) to manage the prediction of traffic congestion and urban mobility of vehicles and passengers so that alternative routes can be recommended for transport, especially the use of public passenger transport, to achieve sustainable transport. In the article, we propose an algorithm and a methodology for solving multidimensional traffic congestion objectives, especially for intersections, based on combining machine learning with the templates method—the golden template algorithm with the multiagent game theory. Intersections are modeled as independent players who had to reach an agreement using Nash negotiation. The obtained results showed that the Nash negotiation with multiagents and the golden template modeling have superior results to the model predictive control (MPC) algorithm, improving travel time, the length of traffic queues, the efficiency of travel flows in an unknown and dynamic environment, and the coordination of the agents’ actions and decision making. The proposed algorithm can be used in planning public passenger transport on alternative routes and in ITS management decision making.
49

Wang, Baolai, Shengang Li, Xianzhong Gao, and Tao Xie. "UAV Swarm Confrontation Using Hierarchical Multiagent Reinforcement Learning." International Journal of Aerospace Engineering 2021 (December 21, 2021): 1–12. http://dx.doi.org/10.1155/2021/3360116.

Abstract:
With the development of unmanned aerial vehicle (UAV) technology, UAV swarm confrontation has attracted many researchers’ attention. However, the situation faced by the UAV swarm has substantial uncertainty and dynamic variability. The state space and action space increase exponentially with the number of UAVs, so that autonomous decision-making becomes a difficult problem in the confrontation environment. In this paper, a multiagent reinforcement learning method with macro action and human expertise is proposed for autonomous decision-making of UAVs. In the proposed approach, UAV swarm is modeled as a large multiagent system (MAS) with an individual UAV as an agent, and the sequential decision-making problem in swarm confrontation is modeled as a Markov decision process. Agents in the proposed method are trained based on the macro actions, where sparse and delayed rewards, large state space, and action space are effectively overcome. The key to the success of this method is the generation of the macro actions that allow the high-level policy to find a near-optimal solution. In this paper, we further leverage human expertise to design a set of good macro actions. Extensive empirical experiments in our constructed swarm confrontation environment show that our method performs better than the other algorithms.
50

Panella, Alessandro. "Multiagent Stochastic Planning With Bayesian Policy Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 29, 2013): 1672–73. http://dx.doi.org/10.1609/aaai.v27i1.8506.

Abstract:
When operating in stochastic, partially observable, multiagent settings, it is crucial to accurately predict the actions of other agents. In my thesis work, I propose methodologies for learning the policy of external agents from their observed behavior, in the form of finite state controllers. To perform this task, I adopt Bayesian learning algorithms based on nonparametric prior distributions, that provide the flexibility required to infer models of unknown complexity. These methods are to be embedded in decision making frameworks for autonomous planning in partially observable multiagent systems.
