SenseAct™ is a benchmark task suite for developing and evaluating reinforcement learning methods with physical robots, abstracting over the complexity of real-time control of robotic components. SenseAct's guiding principles of minimizing delays and maximizing timing consistency via proactive computation lead to responsive learned behavior and reliable learning with state-of-the-art algorithms. All the task implementations share a core structure that can be reused to implement new robotic tasks benefiting from SenseAct's tight control over system delays.
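The core idea behind that shared structure, decoupling sensor I/O from a fixed-length action cycle so the agent always acts on the freshest observation, can be sketched as follows. This is a minimal illustrative sketch, not the actual SenseAct API; all class and function names here (`SensorThread`, `run_agent`) are hypothetical.

```python
import threading
import time

class SensorThread(threading.Thread):
    """Continuously polls a (simulated) sensor in the background so the
    newest reading is always available without blocking the agent on I/O.
    Hypothetical name; not part of the real SenseAct API."""

    def __init__(self, read_sensor, period=0.001):
        super().__init__(daemon=True)
        self._read_sensor = read_sensor
        self._period = period
        self._lock = threading.Lock()
        self._latest = None
        self._running = True

    def latest(self):
        # Return the most recent sensor reading (thread-safe).
        with self._lock:
            return self._latest

    def stop(self):
        self._running = False

    def run(self):
        while self._running:
            value = self._read_sensor()
            with self._lock:
                self._latest = value
            time.sleep(self._period)


def run_agent(policy, sensor_thread, cycle_time=0.04, n_steps=10):
    """Fixed action-cycle loop: compute an action from the freshest
    observation, then sleep out the remainder of the cycle so that
    action timing stays consistent from step to step."""
    actions = []
    next_deadline = time.monotonic() + cycle_time
    for _ in range(n_steps):
        obs = sensor_thread.latest()
        actions.append(policy(obs))  # computation happens early in the cycle
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        next_deadline += cycle_time
    return actions
```

Keeping the sensor polling in its own thread and pinning the agent to a fixed cycle deadline is one way to realize the "minimize delays, maximize timing consistency" principle described above.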
Achieving reproducible results in reinforcement learning experiments is notoriously difficult; with the added hardware artifacts of physical robots, it becomes even more challenging. In this guest blog post, Oliver presents an overview of his experience as an early tester working with SenseAct to reproduce the UR-Reacher-2 experiment.
We introduce six reinforcement learning benchmark tasks based on three commercially available robots. These tasks are developed in SenseAct, a new open-source framework for implementing real-time reinforcement learning tasks. We also provide benchmarking results from our evaluation of several state-of-the-art continuous-control learning algorithms on these tasks.
Reinforcement learning research with physical robots is hindered by the lack of benchmark tasks and supporting source code. In this work, we introduce several reinforcement learning tasks with multiple commercially available robots that present varying degrees of learning difficulty, setup complexity, and repeatability.
From Go to Dota 2 to the operation of commercial HVAC systems, reinforcement learning models and algorithms are changing how engineers frame complicated dynamical systems as games and learn strategies to play them well. Kindred is applying the technology that powered AlphaGo to the creation of a new generation of intelligent robotics for material handling.
Reinforcement learning on robots is sensitive and difficult, but it can be made robust and reproducible with a carefully designed setup. In our latest research paper, our team of researchers provides practical advice on how to set up reproducible RL experiments with physical robots.
As the linguistic capabilities of interactive robots advance, it becomes increasingly important to understand how humans will instruct robots through natural language. What is more, with the increased use of teleoperated humanoid robots, it is important to recognize whether any differences between instructions given to humans and robots are due to the physical embodiment or perceived autonomy of the instructee. In this paper, we present the results of a human-subject experiment in which participants interacted in a collaborative, task-based setting with both a human and a suit-based, teleoperated humanoid robot said to be either autonomous or teleoperated.