Embodied BERT: A transformer model for embodied, language-guided visual task completion
2021
Language-guided robots performing home and office tasks must navigate in and interact with the world. Grounding language instructions against visual observations and actions to take in an environment is an open challenge. We present Embodied BERT (EmBERT), a transformer-based model that can attend to high-dimensional, multi-modal inputs across long temporal horizons for language-conditioned task completion. Additionally, we bridge the gap between successful object-centric navigation models used for non-interactive agents and the language-guided visual task completion benchmark, ALFRED, by introducing object navigation targets for EmBERT training. EmBERT achieves competitive performance on the ALFRED benchmark and is the first model to use a full, pretrained BERT stack while handling the long-horizon, dense, multi-modal histories of ALFRED. Model code is available on GitHub.
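The abstract describes a full pretrained BERT stack attending jointly over language and object-centric visual inputs. As a rough illustration only (not the EmBERT implementation; see the released code for that), the sketch below shows one common way to feed such a joint sequence to a pretrained BERT encoder: instruction tokens are embedded with BERT's word embeddings, per-object visual features are linearly projected into the same space, and the concatenated sequence is passed through the encoder. The class name, feature dimension (2048), and use of the Hugging Face `transformers` library are assumptions for illustration.

```python
# Illustrative sketch, NOT the authors' released code: joint language +
# object-feature encoding with a pretrained BERT stack.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class MultimodalBertSketch(nn.Module):
    def __init__(self, visual_dim: int = 2048, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Project per-object visual features (e.g., region embeddings) into
        # BERT's embedding space so they can be attended to alongside words.
        self.visual_proj = nn.Linear(visual_dim, hidden)

    def forward(self, input_ids, attention_mask, object_feats, object_mask):
        # Word embeddings for the instruction tokens.
        word_embeds = self.bert.embeddings.word_embeddings(input_ids)
        # Projected embeddings for objects detected in the current observation.
        obj_embeds = self.visual_proj(object_feats)
        # Concatenate language and visual tokens into one multimodal sequence.
        inputs_embeds = torch.cat([word_embeds, obj_embeds], dim=1)
        joint_mask = torch.cat([attention_mask, object_mask], dim=1)
        # The full BERT encoder attends over the joint sequence.
        return self.bert(inputs_embeds=inputs_embeds, attention_mask=joint_mask)


if __name__ == "__main__":
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    enc = tok("put the mug in the microwave", return_tensors="pt")
    model = MultimodalBertSketch()
    obj_feats = torch.randn(1, 8, 2048)           # 8 detected objects (dummy features)
    obj_mask = torch.ones(1, 8, dtype=torch.long)
    out = model(enc["input_ids"], enc["attention_mask"], obj_feats, obj_mask)
    print(out.last_hidden_state.shape)            # (1, num_text_tokens + 8, 768)
```

This sketch omits everything specific to EmBERT, such as temporal history handling and the object navigation targets mentioned above; it only shows the generic pattern of routing multimodal embeddings through a pretrained BERT encoder.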