Overview
Foundation models (FMs) for robotics promise broad, general-purpose behavior, including out-of-the-box generalization and increasingly complex skills such as dexterous manipulation (e.g., laundry folding, barista-style tasks) and in-the-wild navigation and planning. However, this breadth also brings new safety challenges. FM-based agents can misinterpret or hallucinate task-relevant details, fail to properly ground language in the physical scene, and struggle with real-world execution (e.g., contact dynamics, occlusions). They are also susceptible to adversarial jailbreak-style prompting that can induce unsafe behaviors. In embodied settings, these failures can translate into unsafe motion, damaged property, and risky interactions with people, leading to a loss of trust and reliability.
With several startups promising general-purpose household robots as early as next year, it remains unclear how FMs can be made robustly deployable in real-world settings. This makes our workshop timely in consolidating and expanding insights from recent works (2024–2026) on robust robotic policies. More importantly, one goal of this workshop is to start a community discussion around a shared taxonomy and toolbox for trustworthy embodied foundation models, organized around two complementary perspectives: (1) Safety by design, focusing on reducing risk at the source (e.g., training data, architectures, and objectives), and (2) Safety by practice, focusing on deployment-time evaluation and mitigation (e.g., runtime monitoring, red-teaming, shielding, and robust failure recovery systems).
The workshop targets researchers and practitioners working on learned robot autonomy and its safe deployment, spanning robot learning, controls, verification, and HRI. Presenters and panelists will be drawn from across these communities to ensure diverse perspectives on evaluation, assurance, and real-world accountability.
We want the RSS community to actively engage with the safety implications of FM-based robots before such systems become mainstream in homes, hospitals, and workplaces. This workshop hopes to push towards consensus on evaluation standards and foster collaborations that address the key safety bottlenecks, helping shape the next generation of embodied AI on a safe, trustworthy foundation.
Call for Papers
We invite submissions on methods, evaluations, systems, and perspectives that advance trustworthy embodied foundation models for robotics, spanning both safety by design and safety by practice.
Contributions may include empirical studies, benchmarks, negative results, formal analyses, deployment lessons, and tooling that improves robustness, evaluation, and accountability of FM-based robotic autonomy.
We encourage researchers to submit work in the following areas:
- Continual learning for enhanced failure detection and recovery
- Lessons from red-teaming and jailbreaking of VLA models
- Robust training regimens using real or simulated data (e.g., adversarial training, domain randomization)
- Benchmarking for embodied safety evaluation
- Runtime assurance: monitoring, uncertainty/OOD detection, shielding, and safe fallback policies
- Formal methods and guarantees
- Human-in-the-loop systems for intervention, recovery, and incorporating failure feedback through intuitive interfaces
- Standardization for safety deployments in real-world and industrial applications
- Explainability for building trust in human–robot collaborations
Submission Guidelines
- Submission Portal: Submit Here
- Paper Submission Guidelines:
  - Paper Length: Submissions should be 4-page short papers, excluding references, acknowledgements, and appendices.
  - Format:
    - Submissions must be a single PDF following the RSS 2026 main conference format.
    - Please include references and the appendix in the same PDF as the main paper.
    - Optional supplementary material (e.g., additional results, videos) may be uploaded as a single zip file.
  - Dual Submission / Previously Published Work:
    - Dual submissions to other venues are allowed.
    - Previously published work is welcome, as long as this is explicitly stated at the time of submission.
- Review & Presentation:
  - All accepted papers will be presented as posters at the RSS 2026 workshop.
  - A subset of accepted papers may be selected for short spotlight talks.
- Visibility: Submissions and reviews will not be public. Only accepted papers will be made public.
Important Dates
| Milestone | Date |
| --- | --- |
| Paper Submission Deadline | TBA |
| Author Notification | TBA |
| Camera-ready Version Due | TBA |
| Workshop | TBA |
Schedule
| Morning Schedule | Evening Schedule | Activity |
| --- | --- | --- |
| 9:00 - 9:05 | 14:00 - 14:05 | Welcome by organizers |
| 9:05 - 9:25 | 14:05 - 14:25 | Invited Talk I (15 min + 3 min Q&A) |
| 9:25 - 9:55 | 14:25 - 14:55 | Invited Talk II (15 min + 3 min Q&A) |
| 10:00 - 10:30 | 15:00 - 15:30 | Coffee break + poster session (overlap) |
| 10:30 - 10:50 | 15:30 - 15:50 | Invited Talk III (15 min + 3 min Q&A) |
| 10:50 - 11:10 | 15:50 - 16:10 | Invited Talk IV (15 min + 3 min Q&A) |
| 11:10 - 11:35 | 16:10 - 16:35 | Interactive Breakout Session |
| 11:35 - 11:50 | 16:35 - 16:50 | Oral presentations |
| 11:50 - 12:30 | 16:50 - 17:30 | Panel discussion |
| 12:30 - 12:55 | 17:30 - 17:55 | Poster session + networking |
| 12:55 - 13:00 | 17:55 - 18:00 | Closing remarks |
Invited Speakers
- Carnegie Mellon University
- Physical Intelligence
- TU Darmstadt
- PhD Student, TU Munich
Organizers
- Cornell
- KIT
- Adobe Research
- UCLA