Thompson Sampling for Linear Bandit Problems with Normal-Gamma Priors
Linnaeus University, Faculty of Technology, Department of Mathematics. ORCID iD: 0000-0003-2756-5186
Linnaeus University, Faculty of Technology, Department of Mathematics. ORCID iD: 0000-0002-7825-4428
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We consider Thompson sampling for linear bandit problems with finitely many independent arms, where rewards are drawn from normal distributions whose means depend linearly on unknown parameter vectors and whose variance is unknown. Specifically, in a Bayesian formulation we use multivariate normal-gamma priors to represent environment uncertainty over all involved parameters. We show that the chosen sampling prior is conjugate to the reward model and derive a Bayesian regret bound for Thompson sampling under the condition that the 5/2-moment of the variance distribution exists.
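To make the setting concrete, the following is a minimal illustrative sketch (with made-up names, priors and a hypothetical `pull` callback, not the manuscript's notation or exact algorithm) of Thompson sampling in which each arm keeps a conjugate normal-gamma posterior over its own parameter vector and noise precision: every round, parameters are sampled from each arm's posterior, the arm with the highest sampled mean reward is played, and the observed reward updates that arm's posterior.

```python
import numpy as np

# Illustrative sketch only: each arm models a reward for feature vector x as
# N(x^T theta, 1/tau) with a conjugate normal-gamma prior on (theta, tau).

class NormalGammaArm:
    def __init__(self, dim, a0=1.0, b0=1.0):
        self.mu = np.zeros(dim)   # posterior mean of theta
        self.Lam = np.eye(dim)    # posterior precision of theta (scaled by tau)
        self.a = a0               # Gamma shape for the noise precision tau
        self.b = b0               # Gamma rate for the noise precision tau

    def sample_theta(self, rng):
        tau = rng.gamma(self.a, 1.0 / self.b)    # tau ~ Gamma(a, b) with rate b
        cov = np.linalg.inv(tau * self.Lam)      # theta | tau ~ N(mu, (tau*Lam)^-1)
        return rng.multivariate_normal(self.mu, cov)

    def update(self, x, r):
        # Standard conjugate normal-gamma update for a single observation (x, r).
        Lam_new = self.Lam + np.outer(x, x)
        mu_new = np.linalg.solve(Lam_new, self.Lam @ self.mu + r * x)
        self.a += 0.5
        self.b += 0.5 * (r ** 2 + self.mu @ self.Lam @ self.mu
                         - mu_new @ Lam_new @ mu_new)
        self.mu, self.Lam = mu_new, Lam_new


def thompson_round(arms, x, pull, rng):
    # Sample parameters per arm, play the arm with the highest sampled mean
    # reward, observe its reward via the (hypothetical) `pull` callback, and
    # update only that arm's posterior.
    scores = [x @ arm.sample_theta(rng) for arm in arms]
    chosen = int(np.argmax(scores))
    reward = pull(chosen)
    arms[chosen].update(x, reward)
    return chosen, reward
```

The per-arm posteriors reflect the "finitely many independent arms" setting described above; a shared-parameter variant would instead maintain a single posterior updated by every observation.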

Keywords [en]
multi-armed bandits, linear bandits, Thompson sampling
National Category
Computer Sciences; Probability Theory and Statistics
Research subject
Mathematics, Applied Mathematics; Computer and Information Sciences, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-120710
DOI: 10.48550/arXiv.2303.03348
OAI: oai:DiVA.org:lnu-120710
DiVA, id: diva2:1756739
Projects
Algoritmer för förstärkningsinlärning (Algorithms for reinforcement learning)
Available from: 2023-05-15 Created: 2023-05-15 Last updated: 2023-05-26 Bibliographically approved
In thesis
1. Reinforcement Learning and Dynamical Systems
2023 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis concerns reinforcement learning and dynamical systems in finite discrete problem domains. The study of artificial intelligence through reinforcement learning involves developing models and algorithms for scenarios in which an agent interacts with an environment. By taking actions the agent may induce changes in the observed environment, where a modeled reward system reinforces correct behavior through learning algorithms. Reinforcement learning can be applied in a wide variety of domains, such as finance, robotics, games, routing and health care. However, as the subject matures there is an increasing need to rely more heavily on advanced concepts in mathematics and deep learning to further our understanding of existing problems and find new algorithmic insights.

Discrete dynamical systems arise in virtually any setting as soon as a set of elements is subjected to iteration by a defining function. The function may be seen as representing the passing of time or as defining the rules for state transitions. If the set of elements is finite but very large, we may find applications in several different fields such as operations research, cryptography and biology, where understanding properties of the structure and long-term behavior without explicit enumeration is key.

In Paper I we extend the model of categorical reinforcement learning with a group-aided training procedure involving multiple agents. By having the agents learn through shared distributional information but act independently, we argue for an accelerated learning process. We empirically show that the procedure may lead to much more robust learning, stronger individual agent performance and good ensemble efficiency.
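As a purely hypothetical illustration of the general idea (not Paper I's actual procedure or notation, and with the categorical projection step omitted), the sketch below lets several agents act independently with their own exploration while all of their experience updates one shared categorical return distribution per state-action pair, in the spirit of categorical distributional reinforcement learning.

```python
import numpy as np

# Hypothetical sketch: agents share one categorical return distribution per
# (state, action) but choose actions independently.

N_ATOMS, V_MIN, V_MAX = 51, 0.0, 10.0
support = np.linspace(V_MIN, V_MAX, N_ATOMS)

class SharedCategoricalTable:
    def __init__(self, n_states, n_actions):
        # One shared probability vector over return atoms per (state, action).
        self.p = np.full((n_states, n_actions, N_ATOMS), 1.0 / N_ATOMS)

    def q_values(self, s):
        return self.p[s] @ support  # expected return per action

    def update(self, s, a, target_probs, lr=0.1):
        # Move the shared distribution toward a (precomputed) projected target.
        self.p[s, a] = (1 - lr) * self.p[s, a] + lr * target_probs
        self.p[s, a] /= self.p[s, a].sum()

def act(shared, s, eps, rng):
    # Each agent explores on its own but reads the same shared estimate.
    if rng.random() < eps:
        return int(rng.integers(shared.p.shape[1]))
    return int(np.argmax(shared.q_values(s)))
```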

In Paper II we continue to build upon distributional reinforcement learning for finite Markov processes. A common approach among algorithms is to apply transformations on agent returns for stability and flexibility over a variety of different tasks. We show that one of the most successful methods may not work for a stochastic process. As a solution we introduce a new distributional operator that handles a large class of transformations with guaranteed theoretical convergence. We also propose an approximating single-actor algorithm based on these novel insights, which, when tested, achieves state-of-the-art performance compared to similar algorithms.
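For context only: one widely used return transformation in the value-based RL literature is h(x) = sign(x)(sqrt(|x|+1) - 1) + eps*x, applied through a transformed one-step target h(r + gamma * h_inv(q')). The sketch below shows this transform purely as an example of the kind of method the paragraph refers to; it is not claimed to be the specific method or the new operator analyzed in Paper II.

```python
import numpy as np

# Example return transform and transformed one-step Bellman target
# (illustration of the general technique only).

EPS = 1e-3  # small linear term keeping h invertible and roughly linear far out

def h(x):
    return np.sign(x) * (np.sqrt(np.abs(x) + 1.0) - 1.0) + EPS * x

def h_inv(y):
    return np.sign(y) * (
        ((np.sqrt(1.0 + 4.0 * EPS * (np.abs(y) + 1.0 + EPS)) - 1.0) / (2.0 * EPS)) ** 2
        - 1.0
    )

def transformed_target(r, gamma, q_next):
    # Bootstrapped values are mapped back with h_inv before adding the reward,
    # then the whole target is re-compressed with h.
    return h(r + gamma * h_inv(q_next))
```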

In Paper III we focus on the issue of efficient exploration in reinforcement learning by studying the regret that a learning algorithm might have versus an optimal policy. Specifically, in the paper we derive a Bayesian regret bound for Thompson sampling on linear multi-armed bandits with Gaussian reward models when we have environment uncertainty described by a set of multivariate normal-gamma distributions.
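As a reminder of the quantity typically being bounded (a standard definition with illustrative notation, not taken from the paper itself), the Bayesian regret over a horizon T compares the reward of the played arms against the best arm under the true parameters, averaged over the prior:

```latex
\mathrm{BayesRegret}(T)
  \;=\;
  \mathbb{E}\!\left[\sum_{t=1}^{T}
    \Bigl(\max_{a}\,\mu_a(\theta^{*}) \;-\; \mu_{A_t}(\theta^{*})\Bigr)\right],
```

where mu_a(theta*) denotes the mean reward of arm a under the true (prior-sampled) parameters, A_t is the arm chosen at round t, and the expectation is taken over the prior, the rewards and the algorithm's randomization.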

In Paper IV we derive sharp bounds on the number of iterations needed for any linear finite dynamical system to stabilize on an inner set of cycles. These bounds may be used in cycle analysis and criterion tests to understand the long-term behavior of extremely large systems.
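As a toy illustration of the stabilization notion (a hypothetical example, not the systems or bounds from Paper IV), the snippet below iterates a linear map x -> Ax (mod p) from one start state and reports how many steps the orbit takes before entering a cycle, together with the cycle's length.

```python
import numpy as np

# Toy example: orbit of a linear finite dynamical system x -> A x (mod p).

def tail_and_cycle(A, x0, p):
    seen = {}
    x = tuple(int(v) % p for v in x0)
    step = 0
    while x not in seen:
        seen[x] = step
        x = tuple(int(v) for v in (np.array(A) @ np.array(x)) % p)
        step += 1
    tail = seen[x]          # steps before the orbit first enters the cycle
    cycle = step - seen[x]  # length of the cycle it stabilizes on
    return tail, cycle

# A 2x2 map modulo 5: this orbit is a pure cycle of length 4 (tail 0).
print(tail_and_cycle([[1, 1], [0, 2]], (1, 3), 5))
```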

Place, publisher, year, edition, pages
Växjö: Linnaeus University Press, 2023. p. 38
Series
Linnaeus University Dissertations ; 494
Keywords
artificial intelligence, distributional reinforcement learning, Markov decision processes, Bellman operators, deep learning, multi-armed bandits, Bayesian bandits, conjugate priors, Thompson sampling, linear finite dynamical systems, cycle orbits, fixed-point systems
National Category
Mathematics
Research subject
Natural Science, Mathematics; Computer and Information Sciences, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-120715
DOI: 10.15626/LUD.494.2023
ISBN: 9789180820301
ISBN: 9789180820318
Public defence
2023-06-09, Weber, Building K, Linnaeus University, Växjö, 14:30 (English)
Opponent
Supervisors
Available from: 2023-05-16 Created: 2023-05-15 Last updated: 2024-09-10 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Lindenberg, Björn; Lindahl, Karl-Olof
