Benchmarking Symbolic Regression and Local Linear Modelling Methods for Reinforcement Learning


Citation Author(s):
Martin
Brablc
Brno University of Technology
Submitted by:
Martin Brablc
Last updated:
Fri, 07/26/2019 - 03:46
DOI:
10.21227/5v5e-jg39
Abstract: 

Reinforcement Learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, local linear regression, and Gaussian processes. In this article, we focus on a technique that has so far seen little use in RL: symbolic regression based on genetic programming. Using measured data, this approach yields a nonlinear, continuous-time analytic model. We benchmark two state-of-the-art methods, Single Node Genetic Programming (SNGP) and Multi-Gene Genetic Programming (MGGP), against a standard incremental local regression method, Receptive Field Weighted Regression (RFWR). We have introduced slight modifications to the RFWR algorithm to better suit low-dimensional continuous-time systems. The benchmark is a highly nonlinear, dynamic magnetic manipulation system. The results show that, using the RL framework and a proper approximation method, it is possible to design a stable controller for such a complex system without the need for haphazard learning trials. While all of the approximation methods were successful, MGGP achieved the best results at the cost of higher computational complexity.
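The local linear baseline mentioned above, RFWR, predicts by blending many local linear models, each valid near its own centre, through Gaussian receptive-field weights. The following is a minimal illustrative sketch of that prediction step only (not the authors' implementation; the function and parameter names are hypothetical, and the incremental training and receptive-field adaptation of full RFWR are omitted):

```python
import math

def rfwr_predict(x, fields):
    """Blend local linear models with Gaussian receptive-field weights.

    fields: list of (centre, slope, offset, width) tuples, one per
    receptive field. Each field contributes a local linear prediction
    offset + slope * (x - centre), weighted by a Gaussian kernel
    centred at `centre` with the given `width`.
    """
    weighted_sum = 0.0
    weight_total = 0.0
    for centre, slope, offset, width in fields:
        w = math.exp(-0.5 * ((x - centre) / width) ** 2)  # receptive field activation
        y_local = offset + slope * (x - centre)           # local linear model
        weighted_sum += w * y_local
        weight_total += w
    return weighted_sum / weight_total  # normalised weighted average

# With a single receptive field, the prediction reduces exactly to
# that field's local linear model.
single = [(0.0, 2.0, 1.0, 1.0)]

# Two fields roughly approximating y = x^2 near x = 0 and x = 1:
two_fields = [(0.0, 0.0, 0.0, 0.5), (1.0, 2.0, 1.0, 0.5)]
```

Near a field's centre its weight dominates, so the global prediction smoothly follows whichever local model is closest; this is the property that makes such models cheap to update incrementally during learning.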

Instructions: 

The included files can be used to demonstrate an RL control agent for a magnetic manipulator system (a row of coils controlling a steel ball). See the README file.

Dataset Files


Documentation

Attachment: README.txt (394 bytes)

[1] Martin Brablc, "Benchmarking Symbolic Regression and Local Linear Modelling Methods for Reinforcement Learning", IEEE Dataport, 2019. [Online]. Available: http://dx.doi.org/10.21227/5v5e-jg39. Accessed: Oct. 14, 2019.
@data{5v5e-jg39-19,
doi = {10.21227/5v5e-jg39},
url = {http://dx.doi.org/10.21227/5v5e-jg39},
author = {Martin Brablc},
publisher = {IEEE Dataport},
title = {Benchmarking Symbolic Regression and Local Linear Modelling Methods for Reinforcement Learning},
year = {2019}
}