SenseAct™ is a benchmark task suite for developing and evaluating reinforcement learning methods on physical robots, abstracting over the complexity of real-time control of robotic components. SenseAct's guiding principles of minimizing delays and maximizing timing consistency via proactive computation yield responsive learned behavior and reliable learning with state-of-the-art algorithms. All of the task implementations share a core structure that can be reused to implement new robotic tasks benefiting from SenseAct's tight control over system delays.
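The timing principle above can be illustrated with a minimal sketch of a fixed-cycle sense-act loop. This is not SenseAct's actual API; the function and callback names (`run_fixed_cycle`, `read_sensors`, `apply_action`) are hypothetical, chosen only to show how computing the action immediately and sleeping out the remainder of each cycle keeps action timing consistent:

```python
import time

def run_fixed_cycle(policy, read_sensors, apply_action, cycle_time=0.04, steps=5):
    """Illustrative fixed-cycle sense-act loop (hypothetical names, not SenseAct's API).

    Each cycle: read the newest observation, compute the action right away
    (proactive computation), apply it, then sleep until the next cycle
    boundary so actions are issued at consistent intervals.
    """
    timestamps = []
    next_deadline = time.monotonic()
    for _ in range(steps):
        obs = read_sensors()       # newest available sensor reading
        action = policy(obs)       # compute immediately, while the robot keeps moving
        apply_action(action)
        timestamps.append(time.monotonic())
        next_deadline += cycle_time
        # Sleep only for whatever remains of this cycle; never go negative.
        time.sleep(max(0.0, next_deadline - time.monotonic()))
    return timestamps
```

Anchoring each deadline to the previous one (rather than sleeping a fixed duration after the work) is what keeps the cycle period stable even when computation time varies from step to step.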
Reinforcement learning (RL) is a promising approach to solving complex real-world tasks with physical robots, supported by recent successes, e.g., in grasping and object manipulation. In RL, a decision-making agent interacting with the world discovers new behaviours by trial and error, sometimes exploring new ways of doing things and sometimes exploiting what it has already found to work well. Efficient exploration of alternative behaviours is the key to reinforcement learning.
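The explore/exploit trade-off described above can be made concrete with the simplest standard mechanism, epsilon-greedy action selection. This is a generic textbook sketch, not code from SenseAct or any particular algorithm evaluated with it:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Pick an action from estimated values: explore with probability
    epsilon (uniform random action), otherwise exploit the current best.
    """
    if rng.random() < epsilon:
        # Explore: try any action, possibly discovering a better behaviour.
        return rng.randrange(len(q_values))
    # Exploit: take the action with the highest estimated value so far.
    return max(range(len(q_values)), key=q_values.__getitem__)
```

With `epsilon=0.0` the agent always exploits; raising `epsilon` trades immediate performance for the chance of discovering better behaviours.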
Achieving reproducible results in reinforcement learning experiments can be notoriously difficult; with the added hardware artifacts of physical robots, this becomes an even more challenging feat. In this guest blog post, Oliver presents an overview of his experience as an early tester working with SenseAct to reproduce the UR-Reacher-2 experiment.
We introduce six reinforcement learning benchmark tasks based on three commercially available robots. These tasks are developed in SenseAct, a new open-source framework for implementing real-time reinforcement learning tasks. We furthermore provide benchmarking results from our evaluation of several state-of-the-art learning algorithms for continuous control on these tasks.
Reinforcement learning research with physical robots faces substantial resistance due to the lack of benchmark tasks and supporting source code. In this work, we introduce several reinforcement learning tasks with multiple commercially available robots that present varying degrees of learning difficulty, setup complexity, and repeatability.
Today, Kindred announced the launch of SenseAct, the first open-source toolkit for setting up reinforcement learning tasks on physical robots. Kindred's SenseAct was created to provide robotics developers and researchers with a consistent, learnable interface that efficiently controls for time delays, a factor by which simulation environments are not hindered.
From Go and Dota 2 to the operation of commercial HVAC systems, reinforcement learning models and algorithms are changing how engineers view complicated dynamical systems as games and learn strategies to play them well. Kindred is applying the revolutionary technology that powered AlphaGo to the creation of a new generation of intelligent robotics for material handling.
Reinforcement learning on robots is sensitive and hard, but it can be made robust and reproducible with a carefully designed setup. In our latest research paper, our team of researchers provides advice on how to set up reproducible RL experiments with robots.