Perceptive Dynamic Locomotion on Rough Terrain

Project identifier: SNF 188596

Project duration: 1.10.2019 - 30.9.2023

 

Legged robotics, and quadrupedal robotics in particular, has made considerable advances in the past decade. Several groups around the world have realized versatile systems with impressive locomotion skills, and the first legged solutions are currently being deployed in industrial applications. Despite all the advances made by our research community, these machines are still far from reaching their full potential in terms of mobility and autonomy when navigating challenging environments. The primary reason is that, in contrast to wheeled or flying vehicles, legged robots have to discontinuously make and break contact with the terrain in order to balance and move forward.
The objective of this project is to overcome these challenges and realize perceptive dynamic locomotion in challenging environments with legged robots. To this end, we propose a hybrid algorithm, i.e., a combination of online and offline planning methods composed of sampling-based path planning, reinforcement learning, and model-based optimization. This research project builds upon our extensive, world-leading expertise in locomotion planning, control, and environment perception, and combines modern ideas from the machine learning and optimal control communities. Specifically, we use simplified models of the system dynamics and the environment representation to plan a rough initial path. To overcome the highly non-convex problem of finding footholds and gait timings on uneven ground with variable properties, we train neural network policies in simulation and deploy them on local, robot-centric maps. In the final layer, we account for the full system dynamics and terrain knowledge to plan and control the motion and ground reaction forces that move the robot forward. On all layers, we make use of semantic knowledge about the environment gained from experience. The proposed multi-layer approach allows us to explore different combinations of model-free reinforcement learning (RL) and model-based optimization tools to make the toolchain as performant as possible.
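To make the layered structure more concrete, the following is a minimal, illustrative Python sketch of how the three layers could be chained: a coarse path planner on a simplified model, a learned foothold/gait policy acting on a robot-centric elevation map, and a model-based layer producing ground reaction forces. All names, interfaces, and the trivial per-layer computations are placeholder assumptions, not the project's actual implementation.

```python
# Illustrative sketch of a three-layer locomotion planning hierarchy.
# Every class, function, and numeric value here is a stand-in; the real
# layers (sampling-based planner, learned foothold/gait policy,
# model-based whole-body optimizer) are far more involved.

from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class LocomotionPlan:
    base_path: List[Tuple[float, float]]   # coarse 2D waypoints
    footholds: List[np.ndarray]            # per-step foot positions
    contact_forces: List[np.ndarray]       # ground reaction forces


def plan_rough_path(start, goal, steps: int = 10) -> List[Tuple[float, float]]:
    """Layer 1 (hypothetical): sampling-based planning on a simplified model
    and coarse environment map. Here: straight-line interpolation stands in
    for, e.g., an RRT on a traversability map."""
    s, g = np.asarray(start, float), np.asarray(goal, float)
    return [tuple(s + (g - s) * t / steps) for t in range(steps + 1)]


def foothold_policy(local_height_map: np.ndarray, waypoint) -> np.ndarray:
    """Layer 2 (hypothetical): a learned policy mapping a robot-centric
    elevation map to a foothold. Here: pick the locally flattest cell near
    the waypoint as a stand-in for the neural network output."""
    slope = np.abs(np.gradient(local_height_map)[0])
    idx = np.unravel_index(np.argmin(slope), local_height_map.shape)
    return np.array([waypoint[0] + 0.01 * idx[0], waypoint[1] + 0.01 * idx[1]])


def whole_body_layer(foothold: np.ndarray, robot_mass: float) -> np.ndarray:
    """Layer 3 (hypothetical): model-based optimization over the full
    dynamics, returning ground reaction forces. Here: a static vertical
    force that simply supports a quarter of the robot's weight."""
    g = 9.81
    return np.array([0.0, 0.0, robot_mass * g / 4.0])


def hierarchical_plan(start, goal, local_height_map, robot_mass=30.0):
    """Chain the three layers: coarse path -> footholds -> contact forces."""
    path = plan_rough_path(start, goal)
    footholds = [foothold_policy(local_height_map, wp) for wp in path]
    forces = [whole_body_layer(fh, robot_mass) for fh in footholds]
    return LocomotionPlan(path, footholds, forces)


if __name__ == "__main__":
    plan = hierarchical_plan(start=(0.0, 0.0), goal=(2.0, 1.0),
                             local_height_map=np.random.rand(20, 20))
    print(len(plan.base_path), "waypoints,", len(plan.footholds), "footholds")
```

In the actual project, the second layer would be a trained policy queried online, and the third layer a receding-horizon optimization over the full rigid-body dynamics; the sketch only shows how their inputs and outputs could be composed.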
We are convinced that true progress in this area is only possible with real systems and real data, and this will be strongly emphasized throughout the execution of our research. Hence, we will continuously test and validate the proposed methods and algorithms in simulation and in real-world experiments with the existing quadrupedal robot ANYmal. We are convinced that this research will enable new applications of legged systems in terrain that is otherwise hard to access.
 
