2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]
This thesis concerns reinforcement learning and dynamical systems in finite, discrete problem domains. Within artificial intelligence, reinforcement learning involves developing models and algorithms for scenarios in which an agent interacts with an environment: by taking actions the agent may induce changes in the observed environment, and a modeled reward signal reinforces correct behavior through learning algorithms. Reinforcement learning is applied in a wide variety of domains, such as finance, robotics, games, routing and health care. However, as the subject matures there is an increasing need to rely on advanced concepts from mathematics and deep learning, both to deepen our understanding of existing problems and to find new algorithmic insights. Discrete dynamical systems arise in virtually any setting where a set of elements is subjected to iteration by a defining function; the function may represent the passing of time or define the rules for state transitions. When the set of elements is finite but very large, applications appear in fields such as operations research, cryptography and biology, where understanding the structure and long-term behavior of the system without explicit enumeration is key.
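As a rough, generic illustration of the agent-environment loop described above (not taken from the thesis; the env object with reset(), step() and actions() methods is a hypothetical interface used only for this sketch), tabular Q-learning shows how rewards reinforce behavior over repeated interaction:

    # Generic sketch of the agent-environment loop; env is a hypothetical
    # finite environment with reset()/step()/actions() methods.
    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
        Q = defaultdict(float)                          # tabular state-action values
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                actions = env.actions(state)
                if random.random() < eps:               # epsilon-greedy exploration
                    action = random.choice(actions)
                else:
                    action = max(actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                best_next = max(Q[(next_state, a)] for a in env.actions(next_state))
                # temporal-difference update: the reward reinforces the chosen action
                target = reward if done else reward + gamma * best_next
                Q[(state, action)] += alpha * (target - Q[(state, action)])
                state = next_state
        return Q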
In Paper I we extend the categorical reinforcement learning model with a group-aided training procedure involving multiple agents. By having the agents learn from shared distributional information while acting independently, we argue for an accelerated learning process. We show empirically that the procedure can lead to considerably more robust learning, stronger individual agent performance and good ensemble efficiency.
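As a hypothetical sketch of what sharing distributional information between agents could look like (a simplified stand-in, not the exact procedure from Paper I), each agent can keep a C51-style categorical return distribution over a fixed support, the group's distributions can be mixed into a common learning target, and each agent can still act greedily on its own estimates:

    # Hypothetical mixing of categorical return distributions across agents;
    # the atom-wise averaging rule is an illustrative assumption.
    import numpy as np

    support = np.linspace(-10.0, 10.0, 51)              # fixed atoms z_1..z_51

    def group_target(agent_dists):
        """Mix the agents' categorical distributions atom-wise into one target."""
        mixed = np.mean(agent_dists, axis=0)
        return mixed / mixed.sum()                       # renormalise

    def greedy_action(own_dists_per_action):
        """Each agent acts independently on its own expected returns."""
        q_values = [np.dot(support, p) for p in own_dists_per_action]
        return int(np.argmax(q_values))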
In Paper II we continue to build on distributional reinforcement learning for finite Markov processes. A common approach among algorithms is to apply transformations to agent returns for stability and flexibility across a variety of tasks. We show that one of the most successful such methods may fail for a stochastic process. As a solution we introduce a new distributional operator that handles a large class of transformations with guaranteed theoretical convergence. We also propose an approximating single-actor algorithm based on these insights which, when tested, achieves state-of-the-art performance compared to similar algorithms.
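As a simplified illustration of return transformations in a distributional setting (h below is the squashing function commonly used in the deep reinforcement learning literature, and the target construction is only a sketch of the idea, not the operator introduced in Paper II):

    # Sketch: transform target returns atom-wise before projecting them back
    # onto a fixed categorical support.
    import numpy as np

    def h(x, eps=1e-2):
        return np.sign(x) * (np.sqrt(np.abs(x) + 1.0) - 1.0) + eps * x

    def h_inv(y, eps=1e-2):
        return np.sign(y) * ((((np.sqrt(1.0 + 4.0 * eps * (np.abs(y) + 1.0 + eps))
                                - 1.0) / (2.0 * eps)) ** 2) - 1.0)

    def transformed_target(reward, gamma, next_atoms, next_probs):
        # target atoms h(r + gamma * h^{-1}(z')) for each next-state atom z'
        return h(reward + gamma * h_inv(next_atoms)), next_probs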
In Paper III we focus on efficient exploration in reinforcement learning by studying the regret a learning algorithm incurs relative to an optimal policy. Specifically, we derive a Bayesian regret bound for Thompson sampling on linear multi-armed bandits with Gaussian reward models, where the environment uncertainty is described by a set of multivariate normal-gamma distributions.
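A rough sketch of the kind of algorithm studied, assuming a standard conjugate normal-gamma model for Bayesian linear regression (illustrative only, and not the exact setting or notation of Paper III):

    # Thompson sampling on a linear bandit with a normal-gamma posterior
    # (mu, Lam, a, b) over the parameter vector and the noise precision.
    import numpy as np

    def thompson_round(arms, mu, Lam, a, b, rng):
        """arms: array of feature vectors, one row per arm."""
        tau = rng.gamma(a, 1.0 / b)                          # sample noise precision
        theta = rng.multivariate_normal(mu, np.linalg.inv(tau * Lam))
        return int(np.argmax(arms @ theta))                  # play the sampled-best arm

    def posterior_update(x, r, mu, Lam, a, b):
        """Rank-one conjugate update after observing reward r for features x."""
        Lam_new = Lam + np.outer(x, x)
        mu_new = np.linalg.solve(Lam_new, Lam @ mu + r * x)
        a_new = a + 0.5
        b_new = b + 0.5 * (r * r + mu @ Lam @ mu - mu_new @ Lam_new @ mu_new)
        return mu_new, Lam_new, a_new, b_new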
In Paper IV we derive sharp bounds on the number of iterations needed for any linear finite dynamical system to stabilize on an inner set of cycles. These bounds may be used in cycle analysis and criterion tests to understand the long-term behavior of extremely large systems.
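To make the objects concrete, the following toy sketch iterates a linear map x -> Ax (mod p) from one starting state until it enters a cycle; the bounds in Paper IV concern exactly the quantities that this kind of brute-force enumeration cannot reach for very large systems:

    # Toy linear finite dynamical system over Z_p: find the tail length
    # (iterations before entering a cycle) and the cycle length by enumeration.
    import numpy as np

    def tail_and_cycle(A, x0, p):
        seen = {}
        x, step = tuple(x0), 0
        while x not in seen:
            seen[x] = step
            x = tuple(np.asarray(A, dtype=int) @ np.asarray(x, dtype=int) % p)
            step += 1
        tail = seen[x]                   # iterations before the cycle is entered
        cycle = step - seen[x]           # length of the cycle that is entered
        return tail, cycle

    # Example: a 2x2 system over Z_5 starting from (1, 3)
    print(tail_and_cycle([[1, 1], [0, 2]], [1, 3], 5))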
Place, publisher, year, edition, pages
Växjö: Linnaeus University Press, 2023. p. 38
Series
Linnaeus University Dissertations ; 494
Keywords
artificial intelligence, distributional reinforcement learning, Markov decision processes, Bellman operators, deep learning, multi-armed bandits, Bayesian bandits, conjugate priors, Thompson sampling, linear finite dynamical systems, cycle orbits, fixed-point systems
National Category
Mathematics
Research subject
Natural Science, Mathematics; Computer and Information Sciences, Computer Science
Identifiers
urn:nbn:se:lnu:diva-120715 (URN); 10.15626/LUD.494.2023 (DOI); 9789180820301 (ISBN); 9789180820318 (ISBN)
Public defence
2023-06-09, Weber, Building K, Linnaeus University, Växjö, 14:30 (English)
Available from: 2023-05-16 Created: 2023-05-15 Last updated: 2024-09-10 Bibliographically approved