Modeling dopamine activity by Reinforcement Learning methods: implications from two recent models
Authors: Patrick Horgan, Fred Cummins
Affiliation: (1) UCD School of Computer Science and Informatics, University College Dublin, Belfield, Dublin 4, Ireland; (2) Neuroscience and Psychiatry Unit, University of Manchester, G.714 Stopford Building, Oxford Road, Manchester, M13 9PT, UK
Abstract: We compare and contrast two recent computational models of dopamine activity in the human central nervous system at the level of single cells. Both models implement reinforcement learning using the method of temporal differences (TD), and both employ internal models to address drawbacks of earlier approaches. The principal difference between the internal models lies in the degree to which they capture the properties of the environment. One employs a partially observable semi-Markov environment; the other applies a form of transition matrix iteratively to generate the sum of future predictions. We show that the internal models rest on fundamentally different assumptions and that these assumptions are problematic in each case. Both models, to differing degrees, leave their biological implementation unspecified. In addition, the model employing the partially observable semi-Markov environment appears to contain redundant features. In contrast, the alternative model appears to lack generalizability.
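The abstract references two core computational ideas: the TD reward-prediction error, and the iterative application of a transition matrix to accumulate future predictions. The following is a minimal sketch of both, not the authors' code; the transition matrix `P`, reward vector `r`, and discount `gamma` are hypothetical stand-ins rather than values from the paper.

```python
import numpy as np

# Tabular TD(0), the temporal-difference rule both dopamine models build on.
# The error term `delta` is the quantity usually compared with phasic
# dopamine responses.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) step: move V[s] toward the target r + gamma * V[s_next]."""
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta
    return delta
# e.g. td0_update(V, s=0, r=0.0, s_next=1)

# The second model, as the abstract describes it, applies a transition
# matrix iteratively to generate the sum of future predictions.
P = np.array([[0.0, 1.0, 0.0],   # state 0 -> state 1
              [0.0, 0.0, 1.0],   # state 1 -> state 2
              [0.0, 0.0, 0.0]])  # state 2 is terminal (no outgoing transitions)
r = np.array([0.0, 0.0, 1.0])    # expected reward in each state
gamma = 0.9

V = np.zeros(3)
for _ in range(100):             # repeated backups converge to V = r + gamma * P @ V
    V = r + gamma * P @ V
print(V)                         # discounted sums of predicted future reward
```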
| |
Keywords: Computational; Dopamine; Learning; Model; Reinforcement