Home
I am an assistant professor at the Université de Montréal, a core academic member of the Mila - Quebec AI Institute, a CIFAR AI Chair, and co-director of the Robotics and Embodied AI Lab (REAL).
Previously, I was a Postdoctoral Researcher with Berkeley Artificial Intelligence Research (BAIR), working with Sergey Levine.
My research focuses on solving sequential decision-making problems for real-world autonomous learning systems (robots).
Specifically, my work covers reinforcement, continual, meta, and hierarchical learning, as well as human-robot collaboration.
I have published at top venues across the disciplines of robotics, machine learning, and computer animation.
Currently, I am teaching a course on robot learning at Université de Montréal and Mila that covers the most recent research on machine learning techniques for creating generalist robots.
For a more formal biography, click here.
- I support Slow Science.
News
- (June 2022) New paper at IROS on quadruped robots learning soccer shooting skills. IROS Best RoboCup Paper Award Finalist!
- (May 2022) New paper at ICML on generalization in RL by training a single policy to control robots with different morphologies.
- (April 2022) UdeM receives $159M to create the Courtois Institute to support fundamental science and apply AI and robotics to scientific experimentation. My lab will lead research on methods for automating scientific experimentation.
- (January 2022) New paper accepted to ICLR on Continual Meta-Reinforcement Learning.
- (September 2021) New paper accepted to NeurIPS on surprise minimization in partially observed environments!
- (September 2021) I will be teaching a course on deep reinforcement learning for robotics in January 2022!
- (September 2021) New paper accepted to CoRL on autonomous robot learning!
- (May 2021) I will be joining Mila and starting as an assistant professor at the Université de Montréal in August 2021!
- (April 2021) Our research paper on RL for bipedal robots, to be presented at ICRA 2021, was featured in MIT Technology Review.
- (March 2021) Associate Editor for IROS 2021.
- (February 2021) Two papers accepted to ICRA 2021!
- (January 2021) SMiRL: Surprise Minimizing RL in Unstable Environments receives an oral presentation at ICLR 2021 (top 1.8% of submitted papers).
Recent Work
Autonomous real-world reinforcement learning

We study how robots can autonomously learn skills that require a combination of navigation and grasping. While reinforcement learning in principle provides for automated robotic skill learning, in practice reinforcement learning in the real world is challenging and often requires extensive instrumentation and supervision. Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in an autonomous way without human intervention, enabling continual learning under realistic assumptions. Our proposed system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation, without human intervention, and without access to privileged information such as maps, object positions, or a global view of the environment. Our method employs a modularized policy with components for manipulation and navigation, where manipulation policy uncertainty drives exploration for the navigation controller, and the manipulation module provides rewards for navigation. We evaluate our method on a room cleanup task, where the robot must navigate to and pick up items scattered on the floor. After a grasp curriculum training phase, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of autonomous real-world training.
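To give a rough flavor of the modular design described above, here is a minimal Python sketch under simplified assumptions; the names (GraspModule, navigation_reward) and the Beta-posterior uncertainty measure are illustrative stand-ins, not the ReLMM implementation:

```python
import numpy as np

class GraspModule:
    """Stand-in grasping policy that tracks its own success statistics."""

    def __init__(self):
        self.successes = 0
        self.attempts = 0

    def attempt_grasp(self, observation):
        # Placeholder: a real system would execute a learned grasping policy.
        success = bool(np.random.rand() < 0.3)
        self.attempts += 1
        self.successes += int(success)
        return success

    def uncertainty(self):
        # Variance of a Beta(1 + successes, 1 + failures) posterior over the
        # grasp success rate, used as an exploration signal for navigation.
        a = 1 + self.successes
        b = 1 + self.attempts - self.successes
        return (a * b) / ((a + b) ** 2 * (a + b + 1))


def navigation_reward(grasp_succeeded, grasp_uncertainty, bonus_scale=1.0):
    # Navigation is rewarded for reaching states from which grasps succeed,
    # plus a bonus for visiting states where the grasp module is uncertain.
    return float(grasp_succeeded) + bonus_scale * grasp_uncertainty
```

The key design point this sketch tries to capture is the coupling between modules: the grasping module both scores the navigation policy (through grasp success) and tells it where to explore (through its own uncertainty).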
General task learning by inferring rewards from example data

Can we use reinforcement learning to learn general-purpose policies that can perform a wide range of different tasks, resulting in flexible and reusable skills? Contextual policies provide this capability in principle, but the representation of the context determines the degree of generalization and expressivity. Categorical contexts preclude generalization to entirely new tasks. Goal-conditioned policies may enable some generalization, but cannot capture all tasks that might be desired. In this paper, we propose goal distributions as a general and broadly applicable task representation suitable for contextual policies. Goal distributions are general in the sense that they can represent any state-based reward function when equipped with an appropriate distribution class, while the particular choice of distribution class allows us to trade off expressivity and learnability. We develop an off-policy algorithm called distribution-conditioned reinforcement learning (DisCo) to efficiently learn these policies. We evaluate DisCo on a variety of robot manipulation tasks and find that it significantly outperforms prior methods on tasks that require generalization to new goal distributions.
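To make the goal-distribution idea concrete, here is a hedged Python sketch assuming a Gaussian distribution class; the function names and this particular reward form are illustrative, not the paper's code:

```python
import numpy as np

def gaussian_log_likelihood(state, mean, cov):
    """Log-density of `state` under a multivariate Gaussian goal distribution."""
    d = state.shape[0]
    diff = state - mean
    _, log_det = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (quad + log_det + d * np.log(2.0 * np.pi))

def goal_distribution_reward(state, goal_mean, goal_cov):
    # A tight covariance recovers ordinary goal-reaching; a broad or
    # structured covariance expresses "any state in this set is acceptable".
    return gaussian_log_likelihood(state, goal_mean, goal_cov)

# Usage sketch: the policy is conditioned on the distribution parameters,
# e.g. action = policy(state, goal_mean, goal_cov), and trained off-policy
# against goal_distribution_reward.
state = np.array([0.2, -0.1])
reward = goal_distribution_reward(state, np.zeros(2), np.eye(2))
```

The trade-off mentioned in the abstract shows up directly here: richer distribution classes can represent more reward functions, but their parameters are harder for the conditioned policy to learn from.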
Entropy minimization for emergent behaviour

All living organisms carve out environmental niches within which they can maintain relative predictability amidst the ever-increasing entropy around them [schneider1994, friston2009]. Humans, for example, go to great lengths to shield themselves from surprise: we band together in millions to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat and cold, wind and storm. The need to discover and maintain such surprise-free equilibria has driven great resourcefulness and skill in organisms across very diverse natural habitats. Motivated by this, we ask: could the motive of preserving order amidst chaos guide the automatic acquisition of useful behaviors in artificial agents?
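This question underlies surprise minimization (see the SMiRL news item above). As a minimal Python sketch of one way such an objective can be instantiated, under assumed simplifications (a running diagonal-Gaussian state model; the names are mine, not the paper's code):

```python
import numpy as np

class StateDensityModel:
    """Running diagonal-Gaussian estimate over visited states (Welford update)."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)  # running sum of squared deviations

    def update(self, state):
        self.n += 1
        delta = state - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (state - self.mean)

    def log_prob(self, state):
        var = self.m2 / max(self.n, 1) + 1e-6  # small floor avoids log(0)
        return -0.5 * np.sum((state - self.mean) ** 2 / var
                             + np.log(2.0 * np.pi * var))


def surprise_minimizing_reward(model, state):
    # Familiar (low-surprise) states score higher, so maximizing this reward
    # pushes the agent to seek out and maintain predictable niches.
    reward = model.log_prob(state)
    model.update(state)
    return reward
```

The agent is rewarded for landing in states its own history model finds likely, which is one computational reading of "preserving order amidst chaos".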