Workshop Theme
The year 2024 has seen an explosion of interest in humanoid robots. However, recent systems for drone racing, playing table tennis, and other tasks clearly demonstrate that the humanoid form factor is not a requirement for human-level performance. In the 7th Robot Learning workshop, to be held at ICLR 2025, we will look beyond the humanoid embodiment and ask: how far are we from robots with human-level abilities? What do we need to improve about embodied learning, decision-making, perception, and data collection to train generally physically capable robots to robustly perform a wide range of activities such as cooking or tidying up a house – activities that people do without much thought?
We believe many of the weaknesses of current robotic systems reflect the shortcomings of general AI methods and models. As such, we seek diverse perspectives on the workshop theme from robotics-focused and robotics-orthogonal parts of the ICLR community alike, scientific contributions from academia and industry, and participants from a variety of backgrounds and career stages.
Invited Speakers
Call for Papers
Key details
- Submission page: Robot Learning Workshop on OpenReview
- Submission deadline: February 10, 2025 (Anywhere on Earth)
We welcome submissions of original research papers as well as systems papers accompanied by videos (see the submission format below) focusing on algorithmic innovations, theoretical advancements, system design, or practical applications relevant to the workshop theme.
Specific areas of interest include but are not limited to:
- Novel ML algorithms and model architectures for robot control: techniques integrating large multi-modal models, sim-to-real bridging, safe policy optimization, and data efficiency.
- Human-robot interaction and collaboration: socially aware motion planning, adaptive interfaces, and trust-building strategies for seamless teamwork.
- Hardware innovations and system integration: advanced sensing and actuation, high-degree-of-freedom controllers, energy-efficient designs, and cohesive robotics architectures.
- Simulation, benchmarking, and evaluation methodologies: realistic simulation environments, standardized task suites, robust metrics, and cross-domain validation protocols.
- Applications in unstructured and dynamic environments: household assistance, mobile manipulation, industrial automation, healthcare, disaster response, and other real-world domains.
Submission format and review process
We welcome submissions in three formats:
- Full Papers
- Recommended length: 4–10 pages (no strict upper limit) using the ICLR-2025 template.
- Expected to meet standards typical of workshop papers, including technical depth and novelty.
- Tiny Papers
- Adhering to the format described in the ICLR Call for Tiny Papers, focusing on concise and impactful ideas.
- Should align closely with the workshop theme, offering preliminary insights or novel perspectives.
- Systems Papers
- Recommended length: 4–10 pages (no strict upper limit) using the ICLR-2025 template.
- Expected to be about a system, at least one of whose key components critically relies on AI/ML.
- Must be submitted with a supplementary video showing the system operation.
- All accepted systems papers will be guaranteed an oral spotlight presentation.
IMPORTANT: For the camera-ready version of your workshop paper, please use the updated template. The header of your camera-ready paper should read “Accepted as a workshop paper to the 7th Robot Learning Workshop at ICLR 2025”.
Accepted submissions will be non-archival, though Tiny Papers will be subject to the non-workshop-specific rules in the ICLR Call for Tiny Papers.
Important dates
- Submission deadline: February 10, 2025 (Anywhere on Earth)
- Notification: February 27, 2025 (Anywhere on Earth)
- Camera-ready due: April 11, 2025 (Anywhere on Earth)
- Workshop: April 27, 2025
Schedule
08:55 - 09:00 | Opening Remarks |
09:00 - 09:15 | Best Paper Award talk |
09:15 - 09:40 | Spotlight paper talks session #1 |
09:40 - 10:40 | Poster & robot demo session #1 + coffee |
10:40 - 11:15 | Invited talk by Chris Paxton (Hello Robot): Towards Home Robots: Open Vocabulary Mobile Manipulation in Unstructured Environments |
11:15 - 11:50 | Invited talk by Davide Scaramuzza and Jiaxu Xing (University of Zurich): Learning Superhuman Agile Flight |
11:50 - 12:50 | Lunch |
12:50 - 13:25 | Invited talk by Chelsea Finn (Stanford/Physical Intelligence): Data-Driven Pre-Training and Post-Training for Robot Foundation Models |
13:25 - 14:00 | Invited talk by Sandy Huang (Google DeepMind): From agility to language understanding: Using diverse simulation to unlock real-world robot abilities |
14:00 - 14:35 | Spotlight paper talks session #2 |
14:35 - 15:35 | Poster & robot demo session #2 + coffee |
15:35 - 16:10 | Invited talk by David Hsu (National University of Singapore): Towards Compositional Generalization for Robot Learning |
16:10 - 16:45 | Panel on the Future of Humanoid Robotics: Sandy Huang (Google DeepMind), Chaoyi Li (Booster Robotics), Niresh Dravin (FrodoBots), Animesh Garg (Georgia Tech), Edward Johns (Imperial College London) |
16:45 - 17:20 | Invited talk by Animesh Garg (Georgia Tech): Generalizable Autonomy: Representations for Embodied Foundation Models |
17:20 - 17:55 | Invited talk by Coline Devin (Google DeepMind): Gemini Robotics: Bringing AI into the Physical World |
17:55 - 18:00 | Closing Remarks |
Accepted Papers
Awards
- Best paper: Instant Policy: In-Context Imitation Learning via Graph Diffusion
- Best paper runner-up: Policy-Agnostic RL: Offline RL and Online RL Fine-Tuning of Any Class and Backbone
- Best presentation: RL Zero: Zero-Shot Language to Behaviors without any Supervision
Orals
- Value-Based Deep RL Scales Predictably
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World
- PP-Tac: Paper Picking Using Omnidirectional Tactile Feedback in Dexterous Robotic Hands
- SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation
- RecFlow Policy: Fast and Accurate Visuomotor Policy Learning via Rectified Action Flow
- Policy-Agnostic RL: Offline RL and Online RL Fine-Tuning of Any Class and Backbone
- Environment as Policy: Generative Curriculum Learning for Autonomous Racing
- DemoGen: Synthetic Demonstration Generation for Data-Efficient Visuomotor Policy Learning
- Instant Policy: In-Context Imitation Learning via Graph Diffusion
- ManiSkill3: GPU Parallelized Robot Simulation and Rendering for Generalizable Embodied AI
- X-IL: Exploring the Design Space of Imitation Learning Policies
- AirExo-2: Scaling up Generalizable Robotic Imitation Learning with Low-Cost Exoskeletons
- Learning a Thousand Tasks in a Day
- Learning the RoPEs: Better 2D and 3D Position Encodings with STRING
- Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation
Posters
- Continuous Scene Graph Generation for Imitation Learning of Everyday Tasks
- World Models as Reference Trajectories for Rapid Motor Adaptation
- RL Zero: Zero-Shot Language to Behaviors without any Supervision
- Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning
- Student-Informed Teacher Training
- A New Perspective on Transformers in Online Reinforcement Learning for Continuous Control
- Universal Actions for Enhanced Embodied Foundation Models
- Stress-Testing Offline Reward-Free Reinforcement Learning: A Case for Planning with Latent Dynamics Models
- FLOWER: Democratizing Generalist Robot Policies with Efficient Vision-Language-Action Flow Policies
- AnyDexGrasp: Learning General Dexterous Grasping for Any Hands with Human-Level Learning Efficiency
- Self-supervised Visual State Representation Learning for robotics from Dynamic Scenes
- Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning
- TOP-ERL: Transformer-based Off-Policy Episodic Reinforcement Learning
- Optimism via Intrinsic Rewards: Scalable and Principled Exploration for Model-based Reinforcement Learning
- RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics
- Diffusion-Based Maximum Entropy Reinforcement Learning
- Accelerating Goal-Conditioned RL Algorithms and Research
- KineSoft: Learning Proprioceptive Manipulation Policies with Soft Robot Hands
- Conformalized Interactive Imitation Learning: Handling Expert Shift and Intermittent Feedback
- Accelerating Transformers in Online RL
- Learning Long-Context Robot Policies via Past-Token Prediction
- PEAR: Primitive Enabled Adaptive Relabeling for Boosting Hierarchical Reinforcement Learning
- PartInstruct: Part-level Instruction Following for Fine-grained Robot Manipulation
- Object-Centric Latent Action Learning
- Navigation with QPHIL: Quantizing Planner for Hierarchical Implicit Q-Learning
- From Tabula Rasa to Emergent Abilities: Discovering Robot Skills via Reset-Free Unsupervised Quality-Diversity
- RILe: Reinforced Imitation Learning
- Teaching Visual Language Models to Navigate using Maps
- Efficient Robotic Policy Learning via Latent Space Backward Planning
- Learning Composable Diffusion Guidance for Motion Priors
- Towards Fusing Point Cloud and Visual Representations for Imitation Learning
- Small features matter: Robust representation for world models
- Improving Efficiency of Sampling-based Motion Planning via Message-Passing Monte Carlo
- ControlManip: Few-Shot Manipulation Fine-tuning via Object-centric Conditional Control
Organizers
- Andrey Kolobov (Microsoft Research, Redmond, USA)
- Hamidreza Kasaei (University of Groningen, Netherlands)
- Alex Bewley (Google DeepMind, Zurich, Switzerland)
- Anqi Li (NVIDIA, Seattle, USA)
- Dhruv Shah (UC Berkeley)
- Georgia Chalvatzaki (TU Darmstadt, Germany)
- Feras Dayoub (University of Adelaide, Australia)
- Roberto Calandra (TU Dresden, Germany)
- Ted Xiao (Google DeepMind, Mountain View, USA)
- Rika Antonova (University of Cambridge, UK and Stanford University, USA)
- Nur Muhammad “Mahi” Shafiullah (New York University, USA)
- Masha Itkina (Toyota Research Institute, Los Altos, USA)
Advisors
- Markus Wulfmeier (Google DeepMind, London, UK)
- Jonathan Tompson (Google DeepMind, Mountain View, USA)
Reviewers
We would like to thank the reviewers for their time and effort in reviewing the submitted papers. They are:
- Haoran Li - Mohit Shridhar - Zarif Ikram - Yifan Yin - Max Sobol Mark - Jun Lv - Afraz Khan - Uksang Yoo - Shan Luo - Chenxi Xiao - Qianwei Han - Lekan P Molu - Wilbert Pumacay - Mingtong Zhang - Hongze Wang - Kibum Kim - Vlad Sobal - Shashank Hegde - Tigran Galstyan - Makram Chahine - Rémy Portelas - Carsten Marr - Yuejiang Liu - Anmol Dubey - Rhea Malhotra - Jyothish Pari - Aleksandr Panov - Omkar Patil - Xuan Zhao - Archit Kalra - Max Rudolph - Tugce Temel - Shao-Hua Sun - Jiang Bian - Moritz Reuss - Markus Grotz - Jiageng Mao - Xianyuan Zhan - Xi Huang - Yucheng Xu - Luca Grillotti - Lionel Ott - Nico Messikommer - Yen-Ru Chen - Mathieu Petitbois - Seungyong Yang - Haoyi Niu - Hongjie Fang - Gabriel B. Margolis - Hehui Zheng - Han A. Wang - Tian Gao - Zhonghong Ou - Alexi Canesse - Nolan Fey - Mikael Norrlof - Ayush Bheemaiah - Xueyi Liu - Yixuan Wang - Pei Lin - Vitalis Vosylius - Marcel Torne Villasevil - Dongxiu Liu - Sandy Huang - Zhitian Zhang - Chenxi Wang - Preston Fu - Puhao Li - Nan Xiao - Zhihao Wang - Edward Johns - Yulin Liu - Pietro Vitiello - Chan Hee Song - Jinliang Zheng - Paweł Budzianowski - Jiansong Wan - Bhavya Sukhija - Arda Inceoglu - Vinay P. Namboodiri - Ömer Erdinç Yağmurlu - Jiaxu Xing - Zhenyang Chen - Jiafei Duan - Alexey Kovalev - Taegeun Yang - Onur Celik - Mert Albaba - Pranav Atreya - Jianibieke Adalibieke - Jonathan Tremblay - You Liang Tan - Ismail Geles - Thomas Langerak - Daniil Zelezetsky - Georgia Gabriela Sampaio - Yinghan Chen - Gabriel Nakajima An - Michelle D Zhao - Nikita Kachaev - Tengyu Liu - Jianxin Wang - Kamil Dreczkowski - Wenjie Zi - Egor Cherepanov - Maëlic Neau - Caleb Chuck - Ziyuan Jiao - Jeongeun Park - Zhengrong Xue - Sung-eui Yoon - Shivam Aarya - Zihao He - Taekyung Kim - Stone Tao - Rong Xue - Haoquan Fang - Samyak Parajuli - Yucheng Yang - Basavasagar Patil - Kamalesh Kalirathinam - Maximilian Xiling Li
Contacts
For any further questions, you can contact us at iclr2025@robot-learning.ml