Stabilization bounds for linear finite dynamical systems
Linnaeus University, Faculty of Technology, Department of Mathematics. ORCID iD: 0000-0003-2756-5186
2018 (English). In: Journal of Algebra, ISSN 0021-8693, E-ISSN 1090-266X, Vol. 511, p. 516-534. Article in journal (Refereed). Published.
Abstract [en]

A common problem in all applications of linear finite dynamical systems is analyzing the dynamics without enumerating every possible state transition. Of particular interest is the long-term dynamical behaviour. In this paper, we study the number of iterations needed for a system to settle on a fixed set of elements. As our main result, we present two upper bounds on the number of iterations needed; each may be readily applied to a fixed-point system test or a computation of limit cycles. The bounds are based on submodule properties of iterated images and of reduced systems modulo a prime. We also provide examples where our bounds are optimal. © 2018 Elsevier Inc. All rights reserved.
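The quantity bounded in the paper can be made concrete with a small illustration. The sketch below (not taken from the paper; the matrix and modulus are hypothetical examples) iterates a linear map f(x) = Ax mod p over the finite module (Z_p)^n by brute force and counts how many iterations pass before the iterated images stop shrinking, i.e. before the system has settled on its fixed set of elements. The paper's bounds estimate exactly this count without such enumeration.

```python
from itertools import product

def stabilization_steps(A, p):
    """Brute-force count of iterations until the images of the
    linear map f(x) = A x (mod p) over (Z_p)^n stabilize."""
    n = len(A)
    # Enumerate the full state space (feasible only for tiny n, p).
    image = {tuple(v) for v in product(range(p), repeat=n)}

    def step(v):
        return tuple(sum(A[i][j] * v[j] for j in range(n)) % p
                     for i in range(n))

    k = 0
    while True:
        nxt = {step(v) for v in image}
        if nxt == image:       # images have stabilized: im f^k = im f^(k+1)
            return k
        image, k = nxt, k + 1

# A nilpotent 2x2 map over Z_2 collapses to {0} after two iterations.
A = [[0, 1],
     [0, 0]]
print(stabilization_steps(A, 2))  # prints 2
```

An invertible map (e.g. the identity) stabilizes immediately, since every iterate is a bijection of the state space.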

Place, publisher, year, edition, pages
Elsevier, 2018. Vol. 511, p. 516-534
Keywords [en]
Finite rings, Linear finite dynamical systems, Fixed point systems, Height
National Category
Mathematics
Research subject
Natural Science, Mathematics
Identifiers
URN: urn:nbn:se:lnu:diva-77488
DOI: 10.1016/j.jalgebra.2018.04.009
ISI: 000441495100028
Scopus ID: 2-s2.0-85045314644
OAI: oai:DiVA.org:lnu-77488
DiVA id: diva2:1244258
Available from: 2018-08-31 Created: 2018-08-31 Last updated: 2023-05-15. Bibliographically approved.
In thesis
1. Reinforcement Learning and Dynamical Systems
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

This thesis concerns reinforcement learning and dynamical systems in finite discrete problem domains. The study of artificial intelligence through reinforcement learning involves developing models and algorithms for scenarios in which an agent interacts with an environment. By taking actions the agent may induce changes in the observed environment, and a modeled reward system reinforces correct behavior through learning algorithms. Reinforcement learning can be applied in a wide variety of domains, such as finance, robotics, games, routing and health care. As the subject matures, however, there is an increasing need to rely on advanced concepts in mathematics and deep learning to further our understanding of existing problems and to find new algorithmic insights. Discrete dynamical systems arise in virtually any setting in which a set of elements is subjected to iteration by a defining function. The function may represent the passing of time or define the rules for state transitions. If the set of elements is finite but very large, applications appear in several fields, such as operations research, cryptography and biology, where understanding structural properties and long-term behavior without explicit enumeration is key.

In Paper I we extend the model of categorical reinforcement learning with a group-aided training procedure involving multiple agents. By having the agents learn through shared distributional information but act independently we argue for an accelerated learning process. We empirically show that the procedure may lead to much more robust learning, stronger individual agent performance and good ensemble efficiency.

In Paper II we continue to build upon distributional reinforcement learning for finite Markov processes. A common approach among algorithms is to apply transformations on agent returns for stability and flexibility over a variety of different tasks. We show that one of the most successful methods may not work for a stochastic process. As a solution we introduce a new distributional operator that handles a large class of transformations with guaranteed theoretical convergence. We also propose an approximating single-actor algorithm based on these novel insights, which when tested achieves state-of-the-art performance compared to similar algorithms.

In Paper III we focus on the issue of efficient exploration in reinforcement learning by studying the regret that a learning algorithm might have versus an optimal policy. Specifically, in the paper we derive a Bayesian regret bound for Thompson sampling on linear multi-armed bandits with Gaussian reward models when we have environment uncertainty described by a set of multivariate normal-gamma distributions.
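The sample-then-act loop at the heart of Thompson sampling can be sketched in a deliberately simplified setting. The toy below (an illustration only; Paper III treats the richer linear-bandit setting with normal-gamma environment uncertainty) uses a Gaussian K-armed bandit with known unit reward variance and independent N(0, 1) priors on each arm's mean: at every round the agent samples one mean from each arm's posterior, pulls the arm whose sample is largest, and performs the conjugate posterior update.

```python
import random

def thompson_gaussian(true_means, rounds, rng):
    """Thompson sampling on a Gaussian bandit with unit reward
    variance and N(0, 1) priors; returns per-arm pull counts."""
    k = len(true_means)
    mu = [0.0] * k   # posterior means (start at the prior mean 0)
    n = [0] * k      # observations per arm; posterior variance is 1/(n+1)
    pulls = [0] * k
    for _ in range(rounds):
        # Draw one mean from each arm's posterior N(mu_i, 1/(n_i + 1)).
        samples = [rng.gauss(mu[i], (1.0 / (n[i] + 1)) ** 0.5)
                   for i in range(k)]
        a = max(range(k), key=lambda i: samples[i])
        reward = rng.gauss(true_means[a], 1.0)
        # Conjugate update: posterior mean is (sum of rewards)/(n + 1).
        mu[a] = (mu[a] * (n[a] + 1) + reward) / (n[a] + 2)
        n[a] += 1
        pulls[a] += 1
    return pulls

rng = random.Random(0)
pulls = thompson_gaussian([0.0, 1.0], 2000, rng)
```

As the posteriors concentrate, exploration decays naturally and the better arm absorbs almost all pulls; the regret analysis in Paper III quantifies this behavior in the Bayesian sense.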

In Paper IV we derive sharp bounds on the number of iterations needed for any linear finite dynamical system to stabilize on an inner set of cycles. These bounds may be used in cycle analysis and criterion tests to understand the long-term behavior of extremely large systems.

Place, publisher, year, edition, pages
Växjö: Linnaeus University Press, 2023. p. 38
Series
Linnaeus University Dissertations ; 494
Keywords
artificial intelligence, distributional reinforcement learning, Markov decision processes, Bellman operators, deep learning, multi-armed bandits, Bayesian bandits, conjugate priors, Thompson sampling, linear finite dynamical systems, cycle orbits, fixed-point systems
National Category
Mathematics
Research subject
Natural Science, Mathematics; Computer and Information Sciences, Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-120715
DOI: 10.15626/LUD.494.2023
ISBN: 9789180820301
ISBN: 9789180820318
Public defence
2023-06-09, Weber, hus K, Linnaeus University, Växjö, 14:30 (English)
Opponent
Supervisors
Available from: 2023-05-16 Created: 2023-05-15 Last updated: 2024-09-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Lindenberg, Björn
