A selection of scholarly literature on the topic "Single Player Monte Carlo Tree Search"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, theses, conference papers, and other scholarly sources on the topic "Single Player Monte Carlo Tree Search".

Next to every entry in the bibliography you will find the option "Add to bibliography". Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Single Player Monte Carlo Tree Search"

1. Schadd, Maarten P. D., Mark H. M. Winands, Mandy J. W. Tak, and Jos W. H. M. Uiterwijk. "Single-player Monte-Carlo tree search for SameGame". Knowledge-Based Systems 34 (October 2012): 3–11. http://dx.doi.org/10.1016/j.knosys.2011.08.008.

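For orientation, the SP-MCTS selection rule proposed in this line of work is usually cited as UCB1 augmented with a variance-aware third term; the sketch below uses standard notation and may differ in detail from the paper's exact presentation.

```latex
% SP-MCTS selection: UCB1 plus a third term estimating the variance of the
% simulation scores. C and D are empirically tuned constants; D keeps the
% term large for rarely visited children.
k^{*} \in \operatorname*{arg\,max}_{i}
\left(
  \bar{X}_{i}
  + C \sqrt{\frac{\ln n_{p}}{n_{i}}}
  + \sqrt{\frac{\sum_{j=1}^{n_{i}} x_{i,j}^{2} - n_{i}\,\bar{X}_{i}^{2} + D}{n_{i}}}
\right)
```

Here \bar{X}_i is the mean simulation score of child i, n_i its visit count, and n_p the visit count of the parent node.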

2. Xia, Yu-Wei, Chao Yang, and Bing-Qiu Chen. "A Path Planning Method Based on Improved Single Player-Monte Carlo Tree Search". IEEE Access 8 (2020): 163694–702. http://dx.doi.org/10.1109/access.2020.3021748.

3. Furuoka, Ryota, and Shimpei Matsumoto. "Worker's knowledge evaluation with single-player Monte Carlo tree search for a practical reentrant scheduling problem". Artificial Life and Robotics 22, no. 1 (September 23, 2016): 130–38. http://dx.doi.org/10.1007/s10015-016-0325-2.

4. Wang, Mingyan, Hang Ren, Wei Huang, Taiwei Yan, Jiewei Lei, and Jiayang Wang. "An efficient AI-based method to play the Mahjong game with the knowledge and game-tree searching strategy". ICGA Journal 43, no. 1 (May 26, 2021): 2–25. http://dx.doi.org/10.3233/icg-210179.

Abstract:
The Mahjong game has widely been acknowledged to be a difficult problem in the field of imperfect-information games. Because of its asymmetric, serialized, and multi-player game information, conventional methods for perfect-information games are difficult to apply directly to Mahjong. AI (artificial intelligence)-based studies of the Mahjong game therefore remain challenging. In this study, an efficient AI-based method to play the Mahjong game is proposed based on knowledge and a game-tree searching strategy. Technically, we simplify the Mahjong game framework from multi-player to single-player. Based on this intuition, an improved search algorithm is proposed to explore the path to winning. Meanwhile, three node-extension strategies based on heuristic information are proposed to improve search efficiency. Then, an evaluation function is designed to calculate the optimal solution by combining the winning rate, score, and risk-value assessment. In addition, we combine knowledge and Monte Carlo simulation to construct an opponent model that predicts hidden information and translates it into usable relative probabilities. Finally, dozens of experiments are designed to prove the effectiveness of each algorithm module. It is also worth mentioning that the first version of the proposed method, named KF-TREE, won the silver medal in the Mahjong tournament of the 2019 Computer Olympiad.

5. Guo, Jian, Yaoyao Shi, Zhen Chen, Tao Yu, Bijan Shirinzadeh, and Pan Zhao. "Improved SP-MCTS-Based Scheduling for Multi-Constraint Hybrid Flow Shop". Applied Sciences 10, no. 18 (September 8, 2020): 6220. http://dx.doi.org/10.3390/app10186220.

Abstract:
As a typical non-deterministic polynomial (NP)-hard combinatorial optimization problem, the hybrid flow shop scheduling problem (HFSSP) is a very common layout in real-life manufacturing scenarios. Even though many metaheuristic approaches have been presented for the HFSSP with the makespan criterion, the metaheuristic method has limitations in accuracy, efficiency, and adaptability. To address this challenge, an improved SP-MCTS (single-player Monte-Carlo tree search)-based scheduling method is proposed for the hybrid flow shop to minimize the makespan under multiple constraints. Meanwhile, the Markov decision process (MDP) is applied to transform the HFSSP into a shortest-time branch-path problem. The improvements to the algorithm include a selection policy blending the standard deviation, a single-branch expansion strategy, and a 4-Rule policy simulation. Based on this improved algorithm, high-potential branches can be located accurately, computational resources are economized, and the solution is optimized quickly. Then, a parameter combination is introduced to trade off selection and simulation, with the intention of balancing exploitation and exploration in the search process. Finally, through analysis of the computed results, the validity of the improved SP-MCTS (ISP-MCTS) for solving the benchmarks is proven, and the ISP-MCTS performs better than the other algorithms in solving large-scale problems.

6. Fabbri, André, Frédéric Armetta, Éric Duchêne, and Salima Hassas. "A Self-Acquiring Knowledge Process for MCTS". International Journal on Artificial Intelligence Tools 25, no. 01 (February 2016): 1660007. http://dx.doi.org/10.1142/s0218213016600071.

Abstract:
MCTS (Monte Carlo Tree Search) is a well-known and efficient process to cover and evaluate a large range of states for combinatorial problems. We study MCTS for Computer Go, one of the most challenging problems in the field of Artificial Intelligence. For this game, a purely combinatorial approach does not always lead to a reliable evaluation of the game states. To enhance the ability of MCTS to tackle such problems, one can benefit from game-specific knowledge that increases the accuracy of the game-state evaluation. Such knowledge is not easy to acquire: it is the result of a constructivist learning mechanism based on the player's experience. That is why we explore the idea of endowing MCTS with a process inspired by constructivist learning, to self-acquire knowledge from playing experience. In this paper, we propose a complementary process for MCTS called BHRF (Background History Reply Forest), which memorizes efficient patterns in order to promote their use through the MCTS process. Our experiments lead to promising results and underline how self-acquired data can be useful for MCTS-based algorithms.

7. Maes, Francis, David Lupien St-Pierre, and Damien Ernst. "Monte Carlo Search Algorithm Discovery for Single-Player Games". IEEE Transactions on Computational Intelligence and AI in Games 5, no. 3 (September 2013): 201–13. http://dx.doi.org/10.1109/tciaig.2013.2239295.

8. Spoerer, Kristian. "BI-DIRECTIONAL MONTE CARLO TREE SEARCH". Asia-Pacific Journal of Information Technology and Multimedia 10, no. 01 (June 1, 2021): 17–26. http://dx.doi.org/10.17576/apjitm-2021-1001-02.

Abstract:
This paper describes a new algorithm called Bi-Directional Monte Carlo Tree Search. The essential idea is to run an MCTS forwards from the start state and simultaneously run an MCTS backwards from the goal state, stopping when the two searches meet. Bi-Directional MCTS is tested on the 8-Puzzle and the Pancake Problem, two single-agent search problems, which allow control over the optimal solution length d and the average branching factor b, respectively. Preliminary results indicate that making Monte Carlo Tree Search bi-directional speeds up the search. The speedup of Bi-Directional MCTS grows with the problem size, in terms of both the optimal solution length d and the branching factor b. Furthermore, Bi-Directional Search has been applied to a Reinforcement Learning algorithm. It is hoped that the speed enhancement of Bi-Directional Monte Carlo Tree Search will also apply to other planning problems.
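
To make the meeting condition concrete, the toy sketch below runs two randomized searches over the Pancake Problem's state space, one from the start and one from the goal, halting when their explored sets intersect. It is a schematic stand-in for two MCTS instances, not the paper's implementation; all names are illustrative.

```python
import random

def neighbors(state):
    # Pancake Problem moves: reverse a prefix of the permutation.
    return [state[:i][::-1] + state[i:] for i in range(2, len(state) + 1)]

def bidirectional_search(start, goal, iterations=100_000):
    seen_fwd, seen_bwd = {start}, {goal}
    frontier_fwd, frontier_bwd = [start], [goal]
    for _ in range(iterations):
        # Alternate one expansion of the forward and the backward search.
        for seen, frontier, other in ((seen_fwd, frontier_fwd, seen_bwd),
                                      (seen_bwd, frontier_bwd, seen_fwd)):
            nxt = random.choice(neighbors(random.choice(frontier)))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
            if nxt in other:
                return nxt  # the two searches met in this state
    return None

print(bidirectional_search((3, 1, 4, 2), (1, 2, 3, 4)))
```

Because prefix reversals are their own inverses, the backward search can reuse the same move generator; in domains with non-invertible actions, a backward search needs an explicit predecessor function.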

9. Lisy, Viliam. "ALTERNATIVE SELECTION FUNCTIONS FOR INFORMATION SET MONTE CARLO TREE SEARCH". Acta Polytechnica 54, no. 5 (October 31, 2014): 333–40. http://dx.doi.org/10.14311/ap.2014.54.0333.

Abstract:
We evaluate the performance of various selection methods for the Monte Carlo Tree Search algorithm in two-player zero-sum extensive-form games with imperfect information. We compare the standard Upper Confidence Bounds applied to Trees (UCT) along with the less common Exponential Weights for Exploration and Exploitation (Exp3) and the novel Regret Matching (RM) selection in two distinct imperfect-information games: Imperfect Information Goofspiel and Phantom Tic-Tac-Toe. We show that UCT, after initial fast convergence towards a Nash equilibrium, computes increasingly worse strategies after some point in time. This is not the case with Exp3 and RM, which also show superior performance in head-to-head matches.
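
For readers comparing the three selection functions, their textbook single-node forms are sketched below; the paper evaluates variants adapted to imperfect-information game trees, so details may differ.

```latex
% UCT: deterministic argmax of mean reward plus an exploration bonus.
a_t = \operatorname*{arg\,max}_{i}\left(\bar{X}_i + C\sqrt{\frac{\ln N}{n_i}}\right)
% Exp3: sample arm i, mixing exponential weights w_i with uniform exploration.
p_i = (1-\gamma)\,\frac{w_i}{\sum_{j=1}^{K} w_j} + \frac{\gamma}{K}
% Regret matching: play arms in proportion to positive cumulative regret
% (uniform if no regret is positive).
p_i = \frac{\max\{R_i, 0\}}{\sum_{j=1}^{K} \max\{R_j, 0\}}
```

UCT is deterministic given the node statistics, whereas Exp3 and regret matching sample from a distribution; that randomization is what lets them approach mixed equilibrium strategies in the games studied here.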

10. Roberson, Christian, and Katarina Sperduto. "A Monte Carlo Tree Search Player for Birds of a Feather Solitaire". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9700–9705. http://dx.doi.org/10.1609/aaai.v33i01.33019700.

Abstract:
Artificial intelligence in games serves as an excellent platform for facilitating collaborative research with undergraduates. This paper explores several aspects of a research challenge proposed for a newly-developed variant of a solitaire game. We present multiple classes of game states that can be identified as solvable or unsolvable. We present a heuristic for quickly finding goal states in a game state search tree. Finally, we introduce a Monte Carlo Tree Search-based player for the solitaire variant that can win almost any solvable starting deal efficiently.

Dissertations and theses on the topic "Single Player Monte Carlo Tree Search"

1. Žlebek, Petr. "Hra Sokoban a umělá inteligence" [The Sokoban game and artificial intelligence]. Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-442845.

Abstract:
The thesis focuses on solving the Sokoban game using artificial intelligence algorithms. The first part describes the Sokoban game, the state space, and selected state-space search methods. In the second part, the selected methods were implemented and a graphical user interface was created in the Python environment. Comparative experiments were executed in the final part.

2. Paumard, Marie-Morgane. "Résolution automatique de puzzles par apprentissage profond" [Automatic puzzle solving by deep learning]. Thesis, CY Cergy Paris Université, 2020. http://www.theses.fr/2020CYUN1067.

Abstract:
The objective of this thesis is to develop semantic methods of reassembly in the complicated framework of heritage collections, where some blocks are eroded or missing. The reassembly of archaeological remains is an important task for heritage sciences: it improves the understanding and conservation of ancient vestiges and artifacts. However, some sets of fragments cannot be reassembled with techniques using contour information or visual continuities. It is then necessary to extract semantic information from the fragments and to interpret it. These tasks can be performed automatically thanks to deep learning techniques coupled with a solver, i.e., a constrained decision-making algorithm. This thesis proposes two semantic reassembly methods for 2D fragments with erosion, along with a new dataset and evaluation metrics. The first method, Deepzzle, proposes a neural network followed by a solver. The neural network is composed of two Siamese convolutional networks trained to predict the relative position of two fragments: it is a 9-class classification. The solver uses Dijkstra's algorithm to maximize the joint probability. Deepzzle can address the case of missing and supernumerary fragments, is capable of processing about 15 fragments per puzzle, and performs 25% better than the state of the art. The second method, Alphazzle, is based on AlphaZero and single-player Monte Carlo Tree Search (MCTS). It is an iterative method that uses deep reinforcement learning: at each step, a fragment is placed on the current reassembly. Two neural networks guide MCTS: an action predictor, which uses the fragment and the current reassembly to propose a strategy, and an evaluator, which is trained to predict the quality of the future result from the current reassembly. Alphazzle takes into account the relationships between all fragments and adapts to puzzles larger than those solved by Deepzzle. Moreover, Alphazzle is compatible with the constraints imposed by a heritage framework: at the end of reassembly, MCTS does not access the reward, unlike AlphaZero. Indeed, the reward, which indicates whether a puzzle is well solved or not, can only be estimated by the algorithm, because only a conservator can be certain of the quality of a reassembly.

3. Šmejkal, Pavel. "Umělá inteligence pro počítačovou hru Children of the Galaxy" [Artificial intelligence for the computer game Children of the Galaxy]. Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-387346.

Abstract:
Even though artificial intelligence (AI) agents are now able to solve many classical games, in the field of computer strategy games the AI opponents still leave much to be desired. In this work we tackle the problem of combat in strategy video games by adapting existing search approaches: Portfolio Greedy Search (PGS) and Monte Carlo Tree Search (MCTS). We also introduce an improved version of MCTS called MCTS considering hit points (MCTS_HP). These methods are evaluated in the context of the recently released 4X strategy game Children of the Galaxy. We implement a combat simulator for the game and a benchmarking framework in which various AI approaches can be compared. We show that for small to medium combat scenarios, MCTS methods are superior to PGS. In all scenarios MCTS_HP is equal to or better than regular MCTS due to its better search guidance. In smaller scenarios MCTS_HP with only a 100-millisecond time limit outperforms regular MCTS with a 2-second time limit. By combining fast greedy search for large combat scenarios with the more precise MCTS_HP for smaller ones, a universal AI player can be created.

Book chapters on the topic "Single Player Monte Carlo Tree Search"

1. Schadd, Maarten P. D., Mark H. M. Winands, H. Jaap van den Herik, Guillaume M. J. B. Chaslot, and Jos W. H. M. Uiterwijk. "Single-Player Monte-Carlo Tree Search". In Computers and Games, 1–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87608-3_1.

2. Nijssen, J. A. M., and Mark H. M. Winands. "Enhancements for Multi-Player Monte-Carlo Tree Search". In Computers and Games, 238–49. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-17928-0_22.

3. Nijssen, J. A. M., and Mark H. M. Winands. "Playout Search for Monte-Carlo Tree Search in Multi-player Games". In Lecture Notes in Computer Science, 72–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31866-5_7.


Conference papers on the topic "Single Player Monte Carlo Tree Search"

1. Lan, Li-Cheng, Wei Li, Ting-Han Wei, and I-Chen Wu. "Multiple Policy Value Monte Carlo Tree Search". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/653.

Abstract:
Many of the strongest game-playing programs use a combination of Monte Carlo tree search (MCTS) and deep neural networks (DNN), where the DNNs are used as policy or value evaluators. Given a limited budget, such as in online play or during the self-play phase of AlphaZero (AZ) training, a balance needs to be reached between accurate state estimation and more MCTS simulations, both of which are critical for a strong game-playing agent. Typically, larger DNNs are better at generalization and accurate evaluation, while smaller DNNs are less costly and can therefore lead to more MCTS simulations and bigger search trees within the same budget. This paper introduces a new method called multiple policy value MCTS (MPV-MCTS), which combines multiple policy-value neural networks (PV-NNs) of various sizes to retain the advantages of each network; two PV-NNs, f_S and f_L, are used in this paper. We show through experiments on the game NoGo that a combined f_S and f_L MPV-MCTS outperforms a single PV-NN with policy-value MCTS (PV-MCTS). Additionally, MPV-MCTS also outperforms PV-MCTS for AZ training.
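
As a purely hypothetical illustration of blending a small and a large policy-value network at a leaf (not necessarily the paper's scheme, which assigns separate simulation budgets to the two networks): query the cheap f_S everywhere and consult the expensive f_L only for frequently visited nodes, averaging the outputs.

```python
def evaluate_leaf(node, f_small, f_large, visit_threshold=8):
    # f_small / f_large map a state to (prior over moves, scalar value).
    # The threshold and the 50/50 blend are invented for illustration.
    prior, value = f_small(node.state)
    if node.visits >= visit_threshold:  # node deemed important enough
        prior_large, value_large = f_large(node.state)
        prior = [(p + q) / 2 for p, q in zip(prior, prior_large)]
        value = (value + value_large) / 2
    return prior, value
```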

2. Gao, Chao, Martin Müller, and Ryan Hayward. "Three-Head Neural Network Architecture for Monte Carlo Tree Search". In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/523.

Abstract:
AlphaGo Zero pioneered the concept of two-head neural networks in Monte Carlo Tree Search (MCTS), where the policy output is used for the prior action probability and the state-value estimate is used for leaf-node evaluation. We propose a three-head neural network architecture with policy, state-value, and action-value outputs, which can lead to more efficient MCTS since neural leaf estimates can still be back-propagated in the tree with delayed node expansion and evaluation. To effectively train the newly introduced action-value head on the same game dataset as for two-head nets, we exploit the optimal relations between parent and child nodes for data augmentation and regularization. In our experiments for the game of Hex, the action-value head learning achieves an error similar to the state-value prediction of a two-head architecture. The resulting neural-net models are then combined with the same Policy Value MCTS (PV-MCTS) implementation. We show that, due to more efficient use of neural-net evaluations, PV-MCTS with three-head neural nets consistently performs better than with two-head ones, significantly outplaying the state-of-the-art player MoHex-CNN.
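
A minimal PyTorch sketch of such a three-head architecture: a shared convolutional trunk with policy, state-value, and action-value outputs. Layer sizes and board shape are invented for illustration (11x11 loosely evokes Hex) and are not taken from the paper.

```python
import torch
import torch.nn as nn

class ThreeHeadNet(nn.Module):
    """Shared trunk with policy, state-value, and action-value heads."""

    def __init__(self, in_channels=3, board_size=11, n_actions=121):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten())
        hidden = 64 * board_size * board_size
        self.policy = nn.Linear(hidden, n_actions)        # prior move probabilities
        self.state_value = nn.Linear(hidden, 1)           # v(s): leaf evaluation
        self.action_value = nn.Linear(hidden, n_actions)  # q(s,a): usable before
                                                          # a child is expanded

    def forward(self, x):
        h = self.trunk(x)
        return (torch.softmax(self.policy(h), dim=-1),
                torch.tanh(self.state_value(h)).squeeze(-1),
                torch.tanh(self.action_value(h)))
```

The action-value head is what lets the search back up an estimate for a move without expanding and evaluating the corresponding child, which is the efficiency gain the abstract describes.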

3. Sarratt, Trevor, David V. Pynadath, and Arnav Jhala. "Converging to a player model in Monte-Carlo Tree Search". In 2014 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 2014. http://dx.doi.org/10.1109/cig.2014.6932881.

4. Klementev, Egor, Arina Fedorovskaya, Farhad Hakimov, Hamna Aslam, and Joseph Alexander Brown. "Monte Carlo Tree Search player for Mai-Star and Balance Evaluation". In 2020 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2020. http://dx.doi.org/10.1109/ssci47803.2020.9308322.

5. Greenwood, Garrison W., and Daniel Ashlock. "Monte Carlo Tree Search Strategies in 2-Player Iterated Prisoner Dilemma Games". In 2020 IEEE Conference on Games (CoG). IEEE, 2020. http://dx.doi.org/10.1109/cog47356.2020.9231854.

6. Ribeiro, Leonardo F. R., and Daniel R. Figueiredo. "Performance of Monte Carlo Tree Search Algorithms when Playing the Game Ataxx". In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4423.

Abstract:
Monte Carlo Tree Search (MCTS) has recently emerged as a promising technique for playing games with very large state spaces. Ataxx is a simple two-player board game with a large and deep game tree. In this work, we apply different MCTS algorithms to play the game Ataxx and evaluate their performance against different adversaries (e.g., minimax2). Our analysis highlights one key aspect of MCTS: the trade-off between the number of samples (and thus accuracy) and the chances of winning the game, which translates to a trade-off between the delay in making a move and the chances of winning.

7. Sanselone, Maxime, Stéphane Sanchez, Cédric Sanza, David Panzoli, and Yves Duthen. "Control of non player characters in a medical learning game with Monte Carlo tree search". In GECCO '14: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2598394.2598473.

8. Chowdhury, Moinul Morshed Porag, Christopher Kiekintveld, Son Tran, and William Yeoh. "Bidding in Periodic Double Auctions Using Heuristics and Dynamic Monte Carlo Tree Search". In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/23.

Abstract:
In a Periodic Double Auction (PDA), there are multiple discrete trading periods for a single type of good. PDAs are commonly used in real-world energy markets to trade energy in specific time slots to balance demand on the power grid. Strategically, bidding in a PDA is complicated because the bidder must predict and plan for future auctions that may influence the bidding strategy for the current auction. We present a general bidding strategy for PDAs based on forecasting clearing prices and using Monte Carlo Tree Search (MCTS) to plan a bidding strategy across multiple time periods. In addition, we present a fast heuristic strategy that can be used either as a standalone method or as an initial set of bids to seed the MCTS policy. We evaluate our bidding strategies using a PDA simulator based on the wholesale market implemented in the Power Trading Agent Competition (PowerTAC). We demonstrate that our strategies outperform state-of-the-art bidding strategies designed for that competition.

9. Koriche, Frédéric, Sylvain Lagrue, Éric Piette, and Sébastien Tabary. "Constraint-Based Symmetry Detection in General Game Playing". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/40.

Abstract:
Symmetry detection is a promising approach for reducing the search tree of games. In General Game Playing (GGP), where any game is compactly represented by a set of rules in the Game Description Language (GDL), the state-of-the-art methods for symmetry detection rely on a rule graph associated with the GDL description of the game. Though such rule-based symmetry detection methods can be applied to various tree search algorithms, they cover only a limited number of symmetries which are apparent in the GDL description. In this paper, we develop an alternative approach to symmetry detection in stochastic games that exploits constraint programming techniques. The minimax optimization problem in a GDL game is cast as a stochastic constraint satisfaction problem (SCSP), which can be viewed as a sequence of one-stage SCSPs. Minimax symmetries are inferred according to the microstructure complement of these one-stage constraint networks. Based on a theoretical analysis of this approach, we experimentally show on various games that the recent stochastic constraint solver MAC-UCB, coupled with constraint-based symmetry detection, significantly outperforms the standard Monte Carlo Tree Search algorithms coupled with rule-based symmetry detection. This constraint-driven approach is also validated by the excellent results obtained by our player during the last GGP competition.

10. Baier, Hendrik, and Mark H. M. Winands. "MCTS-Minimax Hybrids with State Evaluations (Extended Abstract)". In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/782.

Abstract:
Monte-Carlo Tree Search (MCTS) has been found to show weaker play than minimax-based search in some tactical game domains. In order to combine the tactical strength of minimax and the strategic strength of MCTS, MCTS-minimax hybrids have been proposed in prior work. This article continues this line of research for the case where heuristic state evaluation functions are available. Three different approaches are considered, employing minimax in the rollout phase of MCTS, as a replacement for the rollout phase, and as a node prior to bias move selection. The latter two approaches are newly proposed. Results show that the use of enhanced minimax for computing node priors results in the strongest MCTS-minimax hybrid in the three test domains of Othello, Breakthrough, and Catch the Lion. This hybrid also outperforms enhanced minimax as a standalone player in Breakthrough, demonstrating that, at least in this domain, MCTS and minimax can be combined into an algorithm stronger than its parts.
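
Of the three hybrids, the node-prior variant is the simplest to sketch: when a node is expanded, a shallow heuristic minimax value seeds its statistics as virtual visits, biasing early move selection. The Python illustration below is hedged: the interfaces (state.children(), state.is_terminal(), heuristic, the node fields) are assumptions, not the paper's code.

```python
def minimax(state, depth, heuristic, maximizing=True):
    # Depth-limited minimax over a heuristic evaluation scaled to [-1, 1].
    if depth == 0 or state.is_terminal():
        return heuristic(state)
    values = [minimax(child, depth - 1, heuristic, not maximizing)
              for child in state.children()]
    return max(values) if maximizing else min(values)

def seed_node_prior(node, heuristic, depth=2, virtual_visits=10):
    # Pretend the node was already visited virtual_visits times with the
    # minimax value as its average reward; MCTS then favors moves that a
    # shallow tactical search already considers strong.
    node.visits += virtual_visits
    node.total_value += virtual_visits * minimax(node.state, depth, heuristic)
```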