Before We Can Find a Model, We Must Forget about Perfection

Authors

  • Dimiter Dobrev, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria

DOI:

https://doi.org/10.55630/sjc.2021.15.85-128

Keywords:

Artificial Intelligence, Reinforcement Learning, Partial Observability, Event-Driven Model, Definition of Object

Abstract

With Reinforcement Learning we assume that a model of the world exists. We further assume that the model in question is perfect, i.e. that it describes the world completely and unambiguously. This article demonstrates that it makes no sense to search for the perfect model, because such a model is too complicated and practically impossible to find. We show that we should abandon the pursuit of perfection and pursue Event-Driven (ED) models instead. These models are a generalization of Markov Decision Process (MDP) models, and this generalization is essential, because without it nothing can be found. Rather than a single MDP, we will aim to find a raft of neat, simple ED models, each describing a simple dependency or property. In other words, we replace the search for a single, complex, perfect model with a search for a large number of simple models.
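To give a rough intuition for the contrast drawn above, the sketch below shows a "raft" of small event-driven automata, each of which updates only when its own trigger event occurs and ignores everything in between. This is an illustrative assumption, not the paper's formal definition of an ED model; the names `EventDrivenModel`, `trigger`, and the example events are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class EventDrivenModel:
    """A tiny automaton that steps only on its trigger event,
    abstracting away all time steps in between (illustrative only;
    not the paper's formal ED-model definition)."""
    name: str
    trigger: str                  # the single event this model reacts to
    transitions: dict             # (state, event) -> next state
    state: str = "s0"

    def observe(self, event: str) -> None:
        # Non-trigger events are simply skipped, so each model stays small.
        if event == self.trigger:
            self.state = self.transitions.get((self.state, event), self.state)

# A raft of simple models, each tracking one dependency of the world:
door = EventDrivenModel("door", "toggle_door",
                        {("s0", "toggle_door"): "s1",
                         ("s1", "toggle_door"): "s0"})
light = EventDrivenModel("light", "flip_switch",
                         {("s0", "flip_switch"): "s1",
                          ("s1", "flip_switch"): "s0"})

# One shared event stream; each model sees it but reacts only to its trigger.
for ev in ["flip_switch", "noise", "toggle_door", "noise", "flip_switch"]:
    door.observe(ev)
    light.observe(ev)

print(door.state, light.state)  # door toggled once, light toggled twice
```

The point of the sketch is structural: instead of one transition table over the full joint state of the world, each small model captures one dependency, and irrelevant events cost it nothing.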

Published

2022-06-21

Section

Articles