Reinforcement learning chess engine

Graduate Thesis uoadl:2963137

Unit:
Department of Informatics and Telecommunications
Informatics
Deposit date:
2021-10-20
Year:
2021
Author:
GORGOGIANNIS ORESTIS
ZIORIS ELEFTHERIOS
Supervisors info:
Panagiotis Stamatopoulos, Assistant Professor, Department of Informatics and Telecommunications, National and Kapodistrian University of Athens.
Original Title:
Σκακιστική μηχανή μέσω ενισχυτικής μάθησης
Languages:
Greek
English
Translated title:
Reinforcement learning chess engine
Summary:
In recent years, reinforcement learning has risen to become the state-of-the-art method in neural-network-based game engines. With the creation of AlphaGo, DeepMind achieved results that were thought to be unreachable in the near future. Then, AlphaGo Zero broke even that barrier, surpassing every other game engine while using no prior domain knowledge. Since then, Artificial Intelligence research has steered towards the pure reinforcement learning approach. In this thesis, we attempt to create a model that learns to play chess without prior knowledge, using a generalized training procedure based on AlphaGo Zero. The differences in game rules between chess and Go create the need for a more general-purpose algorithm, free from the limitations that the rules of each particular game impose. Modifications were also necessary to overcome the hardware limitations we faced. Using the bare essentials for machine learning model training, our model achieves a steady learning process, slowly improving and beating its previous versions with each iteration.
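The summary describes an AlphaGo Zero-style cycle of self-play, training, and evaluation against previous versions. The following is a minimal Python sketch of that kind of loop, under stated assumptions: every name (ChessNet, self_play, train, evaluate) and the promotion threshold are hypothetical placeholders for illustration, not code from the thesis.

# Illustrative sketch of an AlphaGo Zero-style training loop, as outlined in the
# summary. All names and values here are hypothetical placeholders, not the
# thesis's actual code.

class ChessNet:
    """Placeholder for a policy/value network initialised with no prior knowledge."""

def self_play(net: ChessNet, num_games: int) -> list:
    """Placeholder: the network plays itself, returning (position, move probabilities, outcome) samples."""
    raise NotImplementedError

def train(net: ChessNet, examples: list) -> ChessNet:
    """Placeholder: fit a candidate network on the self-play data."""
    raise NotImplementedError

def evaluate(candidate: ChessNet, best: ChessNet) -> float:
    """Placeholder: play a match between the two networks and return the candidate's win rate."""
    raise NotImplementedError

def training_loop(iterations: int, games_per_iter: int, win_threshold: float = 0.55) -> ChessNet:
    best_net = ChessNet()                                 # start with no domain knowledge
    for _ in range(iterations):
        examples = self_play(best_net, games_per_iter)    # 1. generate self-play games
        candidate = train(best_net, examples)             # 2. train on the new data
        win_rate = evaluate(candidate, best_net)          # 3. match against the previous best
        if win_rate >= win_threshold:                     # promote only if it beats its
            best_net = candidate                          #    previous version often enough
    return best_net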
Main subject category:
Science
Keywords:
Machine Learning, Artificial Intelligence, Reinforcement Learning, Game Theory
Index:
Yes
Number of index pages:
3
Contains images:
Yes
Number of references:
40
Number of pages:
51
Thesis.pdf (782 KB)