NeurIPS website, Zoom, and Rocket.Chat

Poster session

Submit your questions for our panel session here


Advances in learning-based methods for perception, decision making, and control continue to open up new possibilities for deployment on physical robot platforms. A recent example is the considerable progress in representation learning, which eases the application of supervised and reinforcement learning to domains with image-based data. However, the development and evaluation of algorithmic progress are often constrained to simulation and rigid datasets, leading to overfitting to the specific characteristics of these limited domains.

Experiments on physical platforms benefit from the complexity and variety of real-world data, both for the generality of evaluation and the richness of training data. While direct contact with the real world grounds algorithmic performance, deployment also introduces its own challenges for experimentation and reproducibility: environments, tasks, and platforms have to be standardized, relevant, and broadly accessible. Finding suitable compromises - improving the realism of datasets and simulators while addressing the limits of real-world experiments - will be important to ensure that research insights survive the test of time.

The goal of the workshop is to discuss the challenges of machine learning research in the context of physical systems. This discussion involves presentations of current methods and of the experience gained during algorithm deployment on real-world platforms. Moreover, the workshop aims to further strengthen the ties between the robotics and machine learning communities by discussing how their respective recent directions result in new challenges, requirements, and opportunities for future research.

Rather than merely focusing on applications of machine learning in robotics, as in the previous, successful iterations of the workshop, the new interdisciplinary panel will foster discussion on how real-world applications such as robotics can trigger impactful directions for the development of machine learning, and vice versa. To further this discussion, we aim to improve the interaction and communication across a diverse set of scientists at various stages of their careers. Instead of the common trade-off between attracting a wider audience with well-known speakers and enabling early-stage researchers to voice their opinions, we encourage each of our senior presenters to share their presentation slot with a PhD student or postdoc from their lab. We also ask all our presenters - invited and contributed - to add a “dirty laundry” slide describing the limitations and shortcomings of their work. We expect this will stimulate discussion in the poster and panel sessions, in addition to helping junior researchers avoid similar roadblocks along their path.

Scope of contributions:

Important dates

Invited Speakers

Walking the Boundary of Learning and Interaction: There have been significant advances in the field of robot learning in the past decade. However, many challenges remain when considering how robot learning can advance interactive agents, i.e., robots that collaborate with humans. This includes autonomous vehicles that interact with human-driven vehicles or pedestrians, service robots collaborating with their users at home over short or long periods of time, or assistive robots helping patients with disabilities. This creates an opportunity to develop new robot learning algorithms that advance interactive autonomy. In this talk, we will discuss a formalism for human-robot interaction built upon ideas from representation learning. Specifically, we will first discuss the notion of latent strategies: low-dimensional representations sufficient for capturing non-stationary interactions. We will then talk about the challenges of learning such representations when interacting with humans, and how we can develop data-efficient techniques for actively learning computational models of human behavior from demonstrations and preferences.

Object- and Action-Centric Representational Robot Learning: In this talk we’ll discuss different views on representations for robot learning, in particular towards the goal of precise, generalizable, vision-based manipulation skills that are sample-efficient and scalable to train. Object-centric representations, on the one hand, can exploit rich additional sources of supervision and enable a variety of efficient downstream behaviors. Action-centric representations, on the other hand, can learn high-level planning and do not have to explicitly instantiate objectness. As case studies, we’ll look at two recent papers in these two areas.

State of Robotics @ Google: Robotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics, designed for generalization across diverse environments and instructions. This model focuses on scalable, data-driven learning that is task-agnostic, leverages simulation, learns from past experience, and can be quickly adapted to the real world through limited interactions. In this talk, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.

Learning-based Control of a Legged Robot: Legged robots pose one of the greatest challenges in robotics. The dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots has mainly been limited to simulation, and only a few comparatively simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. Recent algorithmic improvements have made simulation both cheaper and more accurate. Leveraging such tools to obtain control policies is thus a promising direction. However, a few simulation-related issues have to be addressed before utilizing them in practice. The biggest obstacle is the so-called reality gap – discrepancies between the simulated and the real system. Hand-crafted models often fail to achieve reasonable accuracy due to the complexity of the actuation systems of existing robots. This talk will focus on how such obstacles can be overcome. The main approaches are twofold: a fast and accurate algorithm for solving contact dynamics, and a data-driven simulation-augmentation method using deep learning. These methods are applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from falls even in complex configurations.

RL with Sim2Real in the Loop / Online Domain Adaptation for Mapping: We will have two talks describing recent developments by the group. First, we will present a Bayesian solution to the problem of estimating posterior distributions over simulation parameters given real data. The uncertainty captured in the posterior can significantly improve the performance of reinforcement learning algorithms trained in simulation but deployed in the real world. We will also show that sequentially alternating posterior parameter estimation and policy updates leads to further improvements in the convergence rate. In the second part, we will address mapping as an online classification problem. We will show that optimal transport is a valuable theoretical framework for quickly transforming geometric information obtained in a real or simulated environment into a secondary domain, leveraging prior information in an elegant and efficient manner.
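As a rough illustration of the first idea (inferring a posterior over simulation parameters from real observations), the following is a minimal sketch using approximate Bayesian computation. The toy sliding-block simulator, the friction parameter, and all numbers here are invented for illustration and are not taken from the talk:

```python
import random
import statistics

def simulate(friction, v0=2.0, steps=20, dt=0.1):
    """Toy 'simulator': distance traveled by a sliding block decelerating under friction."""
    v, distance = v0, 0.0
    for _ in range(steps):
        distance += v * dt
        v = max(0.0, v - friction * dt)
    return distance

# Pretend a "real-world" rollout was generated with an unknown friction of 1.5.
real_distance = simulate(1.5)

# Approximate Bayesian computation: sample friction from a uniform prior and
# keep the samples whose simulated outcome is close to the real observation.
random.seed(0)
posterior = [
    f for f in (random.uniform(0.5, 3.0) for _ in range(20000))
    if abs(simulate(f) - real_distance) < 0.01
]

posterior_mean = statistics.mean(posterior)
print(f"posterior mean friction: {posterior_mean:.2f}")  # concentrates near 1.5
```

In practice this scales up with learned simulators and likelihood-free inference, but the accept-if-close loop above captures the core mechanism: conditioning simulation parameters on real-world rollouts instead of fixing them by hand.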




In Pacific Time (San Francisco Time)

07:30 - 07:45 Introduction
07:45 - 08:30 Invited talk 1 - “Walking the Boundary of Learning and Interaction” - Dorsa Sadigh and Erdem Biyik
08:30 - 08:45 Contributed talk 1 - “Accelerating Reinforcement Learning with Learned Skill Priors” (Best Paper Runner-Up) - Karl Pertsch
08:45 - 09:45 Poster session 1
09:45 - 10:30 Invited talk 2 - “Object- and Action-Centric Representational Robot Learning” - Pete Florence and Daniel Seita
10:30 - 11:15 Invited talk 3 - “State of Robotics @ Google” - Carolina Parada
11:15 - 15:00 Break
15:00 - 16:00 Panel discussion - Pete Florence, Dorsa Sadigh, Carolina Parada, Jeannette Bohg, Peter Stone, and Fabio Ramos
16:00 - 16:45 Invited talk 4 - “Learning-based Control of a Legged Robot” - Jemin Hwangbo and JooWoong Byun
16:45 - 17:00 Contributed talk 2 - “Multi-Robot Deep Reinforcement Learning via Hierarchically Integrated Models” (Best Paper) - Katie Kang
17:00 - 17:30 Break
17:30 - 18:15 Invited talk 5 - “RL with Sim2Real in the Loop / Online Domain Adaptation for Mapping” - Fabio Ramos and Anthony Tompkins
18:15 - 19:15 Poster session 2
19:15 - 19:30 Closing

Poster Session

Gather.Town link

Program Committee

We would like to thank the program committee for shaping the excellent technical program. In alphabetical order they are: Achin Jain, Adithyavairavan Murali, Akshara Rai, Alex Bewley, Ashvin Nair, Brian Ichter, Caterina Buizza, Coline Devin, Djalel Benbouzid, Dushyant Rao, Edward Johns, Jacob Varley, James Harrison, Jayesh Gupta, Jianwei Yang, Jie Tan, Johannes A. Stork, Jonathan Tompson, Karol Hausman, Kunal Menda, Marcin Andrychowicz, Marco Ewerton, Marko Bjelonic, Misha Denil, Nantas Nardelli, Nemanja Rakicevic, Octavio Antonio Villarreal Magaña, Panpan Cai, Peter Karkus, Raunak Bhattacharyya, Ruohan Wang, Sasha Salter, Siddharth Reddy, Spencer Richards, Takayuki Osa, Tomi Silander, Tuomas Haarnoja, Vikas Sindhwani, Walter Goodwin, Yevgen Chebotar, Yizhe Wu, Yunzhu Li

Manuscript Submission Instructions

Submissions should use the NeurIPS Workshop template available here and be 4 pages long (plus as many pages as necessary for references). The reviewing process will be double-blind, so please submit anonymously by using ‘\usepackage{neurips_wrl2020}’ in your main tex file.
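For reference, a minimal main file following these instructions might look as follows. This is a sketch that assumes the workshop style file mirrors the standard NeurIPS style, where the anonymous submission mode is the default and a [final] option would reveal author names; the title and section are placeholders:

```latex
\documentclass{article}
% Workshop style file from the template; loading it without the
% [final] option keeps the manuscript anonymous for double-blind review.
\usepackage{neurips_wrl2020}

\title{Anonymized Paper Title}
% No \author block: author names are suppressed during review.

\begin{document}
\maketitle

\section{Introduction}
% Main text is limited to 4 pages; references do not count toward the limit.

\end{document}
```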

Accepted papers and eventual supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be made available, meaning contributors are free to publish their work in archival journals or conferences.

Submissions can be made at

Poster and Camera-Ready Submission Instructions

Poster deadline (Nov 24, 2020 AOE)

Camera-ready paper deadline (Dec 4, 2020 AOE)


  1. Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?

    Yes, you may include additional supplementary material, but we ask that it be limited to a reasonable amount (max 10 pages in addition to the main submission) and that it follow the same NeurIPS format as the paper. References do not count towards the limit of 4 pages.

  2. Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?

    We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.

  3. Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?

    We will not be accepting such submissions unless they have been adapted to contain significantly new results (novelty is one of the qualities reviewers will be asked to evaluate). However, we will accept submissions that are under review at the time of submission to our workshop (i.e. before Oct 9). For instance, papers that have been submitted to the Conference on Robot Learning (CoRL) 2020 can be submitted to our workshop.

  4. My real-robot experiments are affected by Covid-19. Can I include simulation results instead?

    If your paper requires experiments on physical robots and access to the experimental platform is limited due to Covid-19 workplace access restrictions, you may validate your methods through simulation instead.


For any further questions, you can contact us at


We are very thankful to our corporate sponsors, Naver Labs Europe and Google Brain, for enabling us to provide best paper awards and cover student registration fees.