Zaragoza, Julio H. and Morales, Eduardo F. (2010) Relational Reinforcement Learning with Continuous Actions by Combining Behavioural Cloning and Locally Weighted Regression. Journal of Intelligent Learning Systems and Applications, 02 (02). pp. 69-79. ISSN 2150-8402
JILSA20100200002_32687896.pdf - Published Version
Abstract
Reinforcement Learning is a commonly used technique for learning tasks in robotics; however, traditional algorithms are unable to handle the large amounts of data coming from a robot's sensors, require long training times, and use discrete actions. This work introduces TS-RRLCA, a two-stage method to tackle these problems. In the first stage, low-level data coming from the robot's sensors is transformed into a more natural, relational representation based on rooms, walls, corners, doors and obstacles, significantly reducing the state space. We use this representation along with Behavioural Cloning, i.e., traces provided by the user, to learn, in a few iterations, a relational control policy with discrete actions which can be re-used in different environments. In the second stage, we use Locally Weighted Regression to transform the initial policy into a continuous-actions policy. We tested our approach in simulation and with a real service robot in different environments on several navigation and following tasks. Results show how the policies can be used across different domains and produce smoother, faster and shorter paths than the original discrete-actions policies.
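The abstract's second stage relies on Locally Weighted Regression to turn discrete policy outputs into continuous actions. The sketch below is a minimal, generic illustration of Gaussian-kernel locally weighted linear regression, not the authors' implementation; all function and variable names (e.g. `locally_weighted_regression`, `bandwidth`, the toy state/angle data) are hypothetical.

```python
import numpy as np

def locally_weighted_regression(query, X, y, bandwidth=0.5):
    """Predict a continuous value at `query` from examples (X, y)
    using Gaussian-kernel locally weighted linear regression."""
    # Gaussian weights: nearby training states influence the fit more
    dists = np.linalg.norm(X - query, axis=1)
    w = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))

    # Weighted least squares on locally weighted data (with a bias column)
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    theta, *_ = np.linalg.lstsq(Xa.T @ W @ Xa, Xa.T @ W @ y, rcond=None)
    return float(np.append(query, 1.0) @ theta)

# Hypothetical example: smoothing discrete turn angles (degrees) over a 1-D state
states = np.array([[0.0], [1.0], [2.0], [3.0]])
angles = np.array([0.0, 0.0, 45.0, 45.0])   # discrete policy outputs
print(locally_weighted_regression(np.array([1.5]), states, angles))
```

In this toy example the query state 1.5 lies between states mapped to 0 and 45 degrees, so the regression yields an intermediate angle, illustrating how a discrete policy can be interpolated into a continuous one.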
| Item Type | Article |
| --- | --- |
| Subjects | STM Academic > Engineering |
| Depositing User | Unnamed user with email support@stmacademic.com |
| Date Deposited | 24 Jan 2023 08:03 |
| Last Modified | 16 Feb 2024 04:23 |
| URI | http://article.researchpromo.com/id/eprint/119 |