Embodied AI

OpenEQA: Embodied Question Answering in the Era of Foundation Models

We present a new embodied question answering (EQA) dataset with open vocabulary questions.

HomeRobot: Open-Vocabulary Mobile Manipulation

We propose a combined simulation and real-world benchmark for the problem of Open-Vocabulary Mobile Manipulation (OVMM).

Navigating to Objects Specified by Images

We present a modular system that can perform well on the Instance ImageNav task in both simulation and the real world.

Habitat-Matterport 3D Semantics Dataset

We present Habitat-Matterport 3D Semantics (HM3DSEM), the largest dataset of 3D real-world spaces with densely annotated semantics.

Last-Mile Embodied Visual Navigation

We propose a last-mile navigation module that plugs into prior policies, improving image-goal navigation results in both simulation and real-robot experiments.

OVRL: Offline Visual Representation Learning for Embodied Navigation

In this work, we propose OVRL, a two-stage representation learning strategy for visual navigation tasks in Embodied AI.