
Prediction problems inspired by animal learning (AL)

2020-12-07 19:17:17 Tian Guanyu

We propose three problems modeled after animal-learning experiments, designed to test online state-construction or representation-learning algorithms. Our test problems require the learning system to construct compact summaries of its past interaction with the world in order to predict the future, updating online and incrementally on each time step without an explicit training-testing split. Most recent work in deep reinforcement learning focuses either on fully observable tasks or on games where stacking a handful of recent frames suffices for good performance. Current benchmarks for evaluating memory and recurrent learning use 3D visual environments (e.g., DeepMind Lab), which require billions of training samples, complex agent architectures, and cloud-scale compute. These domains are thus not well suited to rapid prototyping, hyper-parameter studies, or extensive replication studies. In this paper, we contribute a set of test problems and benchmark results to fill this gap. Our test problems are designed to be the simplest instantiations and tests of learning capabilities that animals readily exhibit, including (1) trace conditioning (remembering a cue in order to predict another far in the future), (2) patterning (a particular combination of cues predicts another), and (3) combinations of both with additional irrelevant distracting signals. We provide baselines for each problem, including heuristics from the early days of neural-network learning and simple ideas inspired by computational models of animal learning. Our results highlight the difficulty of these test problems for online recurrent learning systems, and show that agent performance is often highly sensitive to the choice of key problem and agent parameters.
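The trace-conditioning problem described above can be illustrated with a minimal synthetic stream. This is a hypothetical sketch, not the paper's actual experimental setup: the function name, the inter-stimulus interval `isi`, and the cue probability `p_cue` are illustrative assumptions. The idea is that a cue fires occasionally and the stimulus to be predicted arrives a fixed number of steps later, so an online learner must carry the cue in its state across the gap.

```python
import random

def trace_conditioning_stream(steps, isi=10, p_cue=0.05, seed=0):
    """Yield (cue, stimulus) pairs for a minimal trace-conditioning stream.

    At each time step the cue (CS) appears with probability p_cue; the
    stimulus to be predicted (US) then follows exactly `isi` steps later.
    An online learner must remember the cue across the gap to predict
    the stimulus before it arrives.
    """
    rng = random.Random(seed)
    pending = []  # time steps at which a stimulus is due
    for t in range(steps):
        cue = 0
        if rng.random() < p_cue:
            cue = 1
            pending.append(t + isi)  # schedule the stimulus
        stimulus = 1 if t in pending else 0
        pending = [d for d in pending if d > t]  # drop delivered stimuli
        yield cue, stimulus
```

A learner consuming this stream one pair at a time, with no access to past steps, faces exactly the state-construction demand the paper describes: the cue and the stimulus never co-occur, so reactive (memoryless) predictors cannot solve it.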

Original title: Prediction problems inspired by animal learning

Original text: We present three problems modeled after animal learning experiments designed to test online state construction or representation learning algorithms. Our test problems require the learning system to construct compact summaries of their past interaction with the world in order to predict the future, updating online and incrementally on each time step without an explicit training-testing split. The majority of recent work in Deep Reinforcement Learning focuses on either fully observable tasks, or games where stacking a handful of recent frames is sufficient for good performance. Current benchmarks used for evaluating memory and recurrent learning make use of 3D visual environments (e.g., DeepMind Lab) which require billions of training samples, complex agent architectures, and cloud-scale compute. These domains are thus not well suited for rapid prototyping, hyper-parameter study, or extensive replication study. In this paper, we contribute a set of test problems and benchmark results to fill this gap. Our test problems are designed to be the simplest instantiation and test of learning capabilities which animals readily exhibit, including (1) trace conditioning (remembering a cue in order to predict another far in the future), (2) patterning (a particular combination of cues predict another), (3) and combinations of both with additional non-relevant distracting signals. We provide baselines for each problem including heuristics from the early days of neural network learning and simple ideas inspired by computational models of animal learning. Our results highlight the difficulty of our test problems for online recurrent learning systems and how the agent's performance often exhibits substantial sensitivity to the choice of key problem and agent parameters.

Original author: Banafsheh Rafiee, Sina Ghiassian, Raksha Kumaraswamy, Richard Sutton, Elliot Ludvig, Adam White

Original address: https://arxiv.org/abs/2011.04590

Original statement: This article is published with the author's authorization via the community and may not be reproduced without permission.

If there is any infringement, please contact yunjia_community@tencent.com for deletion.

Copyright notice
This article was created by [Tian Guanyu]. Please include a link to the original when reposting. Thank you.
https://chowdera.com/2020/11/20201113115807778l.html