9:00-9:10 Welcome and a short introduction
9:10-10:20 Session 1 - Bioinspired Locomotion (BIO)
Guillaume Bellegarda
"Hierarchical Bio-Inspired Learning for Legged Locomotion" (30 min)
Marie Janneke Schwaner
"Musculoskeletal Mechanics and Control in Bipedal Agility" (30 min)
10:20-11:05 Poster/Demos session 1 and Coffee Break
11:05-12:00 Session 2 - Multi-modal Active Perception (MAP)
Mariangela Filosa
"E-skins and sensors for sensorimotor integration in collaborative robotics" (20 min)
Jonas Frey
"Learning Perception and Navigation: Towards autonomous robots in the wild" (20 min)
12:00-13:45 Lunch break
13:45-15:20 Session 3 - Locomotion Learning (LL)
Nicolas Mansard
"From whole-body MPC to constraint-based reinforcement learning: should we optimize or learn the locomotion?" (30 min)
Pulkit Agrawal
"Reactive Intelligence for Legged Robots" (30 min)
Firas Al-Hafez
"Learning Robust Whole-Body Control from Human Motion Capture" (20 min)
15:20-16:10 Poster/Demos session 2 and Coffee Break
16:10-16:45 Session 4 - Insights from industry
Michael Lutter
"Learning Locomotion Policies for Spot" (30 min)
16:45-17:45 Panel Discussion
17:45-18:00 Award Ceremony
The ability to efficiently move in complex environments is a fundamental property both for animals and for robots, and the problem of locomotion and movement control is an area in which neuroscience, biomechanics, and robotics can fruitfully interact. Bio-inspired robots and numerical models can be used to explore the interplay of the four main components underlying animal locomotion, namely central pattern generators (CPGs), reflexes, descending modulation, and the musculoskeletal system. After briefly reviewing different models for animals ranging from lampreys to humans, I will present our recent work on integrating deep reinforcement learning with CPGs to study this interplay for quadruped locomotion.
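For readers unfamiliar with CPGs, the core idea can be sketched as a small network of coupled phase oscillators that synchronize to gait-specific phase offsets. The formulation below (Kuramoto-style coupling, a trot offset pattern, and a sinusoidal foot-height map) is an illustrative assumption, not the speaker's actual model:

```python
import math

# Illustrative CPG sketch (assumed formulation, not the speaker's model):
# four phase oscillators, one per leg, diffusively coupled so their phase
# differences settle onto trot-gait offsets. Phase then maps to a foot
# clearance target that a low-level controller could track.

TROT_OFFSETS = [0.0, math.pi, math.pi, 0.0]  # FL, FR, HL, HR

def step_cpg(phases, freq_hz=2.0, coupling=2.0, dt=0.01):
    """Advance all oscillator phases one Euler step."""
    new = []
    for i, phi in enumerate(phases):
        dphi = 2 * math.pi * freq_hz  # intrinsic frequency
        for j, phj in enumerate(phases):
            # Pull phase i toward its desired offset relative to phase j.
            dphi += coupling * math.sin(
                phj - phi - (TROT_OFFSETS[j] - TROT_OFFSETS[i])
            )
        new.append(phi + dphi * dt)
    return new

def foot_height(phi, swing_height=0.08):
    """Map phase to a foot clearance target: swing on the first half-cycle."""
    s = math.sin(phi % (2 * math.pi))
    return swing_height * s if s > 0 else 0.0

phases = [0.3, 0.1, -0.2, 0.0]  # arbitrary initial phases
for _ in range(2000):            # integrate 20 s; the network synchronizes
    phases = step_cpg(phases)
```

In learning-based variants like the work described above, a policy typically modulates quantities such as the oscillator frequencies, amplitudes, or couplings rather than commanding joints directly.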
We walk over complex terrain, like trails or stairs, seemingly effortlessly, as our neuromuscular system synergistically coordinates multiple mechanisms (muscle mechanics and sensory feedback) to maintain agility. Yet, at the frontiers of biology, muscle physiology, and movement sciences, it is still unclear how animals effectively integrate muscle mechanics – shaped by feedforward control – and sensory feedback to achieve agile locomotion. In my research, I have used unique experimental approaches to explore how guinea fowl, a model for bipedal locomotion, integrate muscle mechanics and sensory feedback to maintain robust locomotion; I currently combine this work with data-informed musculoskeletal modeling to further probe the mechanisms underlying agile locomotion.
The field of collaborative robotics is constantly growing given the central role of safe human-robot interaction in the current era of Industry 4.0. Robotic systems are increasingly being either natively equipped with or adapted to house sensing devices that allow them to perceive and interact with their surroundings. In this scenario, tactile sensors, or e-skins, are fundamental to provide robots with awareness of the environment to detect impacts, accurately manipulate objects, and enable learning-from-demonstration strategies.
In this talk, sensing systems for robotic applications will be presented. In particular, Fiber Bragg Grating-based e-skin will be introduced and discussed in relevant application scenarios.
This talk will present an overview of recent research on integrating locomotion, navigation, and perception for legged robots in challenging environments. We focus on leveraging large-scale simulation and fusing proprioceptive and exteroceptive sensing, and we propose an alternative to the classical hierarchical planning paradigm.
Humanoids have the potential to be the ideal embodiment in environments designed by and for humans. Their structural similarity to the human body makes human motion capture data a rich and appealing source for learning control policies. However, despite their kinematic resemblance to humans, humanoids often differ significantly from humans and are limited in their dynamics, complicating the direct transfer of human motion to robots. This talk sheds light on these challenges and provides insights into developing robust whole-body control for humanoid robots.
Negotiating contacts is a central challenge in legged locomotion. I will discuss recent advances and challenges in applying machine learning to locomotion and whole-body control problems. While vision and language make it possible for robots to make plans, the use of forces and proprioception enables robot controllers to achieve dynamic, robust, and reliable behaviors.
How should we control robotic locomotion—by optimizing precise models or by letting machines learn from their environment? This talk dives into our team’s work, starting with whole-body model predictive control (MPC), where optimization and hard constraints ensure stability and precision in real time. Various approaches to solving trajectory-optimization problems with hard constraints will be discussed. We will then explore the role of motion memory in enhancing robot performance and discuss the shift to reinforcement learning with constraints, which allows robots to learn safe behavior while reducing the difficulty of cost/reward tuning. Finally, we’ll address learning directly from vision, paving the way for adaptable, model-free locomotion in complex environments.
In recent years we have seen many advances in learning robust whole-body controllers for quadruped robots. In this talk, I will present how we at Boston Dynamics explored different combinations of model-based and learning-based controllers for quadruped locomotion, and share the key lessons and challenges from productizing a learned locomotion controller for Spot.