Ten university teams have been selected to participate in the live interactions phase of the Alexa Prize SimBot Challenge. The competition, launched in October 2021, is focused on advancing the development of next-generation virtual assistants that help humans complete real-world tasks by harnessing generalizable AI methodologies such as continuous learning, teachable AI, multimodal understanding, and reasoning.
The SimBot Challenge is split into two phases: a public benchmark phase and a live interactions phase. Participants in both phases will build machine-learning models for natural language understanding, human-robot interaction, and robotic task completion. Unlike previous Alexa Prize competitions, the public benchmark phase is open not only to teams of university students but also to individuals in academia and industry who are interested in advancing the science of AI, engaging top researchers from around the globe.
The 10 university teams selected to participate in the challenge are:
| Team name | University | Student team leader | Faculty advisor |
| --- | --- | --- | --- |
| Symbiote | Carnegie Mellon University | Nikolaos G. | Katerina Fragkiadaki |
| GauchoAI | University of California, Santa Barbara | Jiachen L. | Xifeng Yan |
| KingFisher | University of Illinois | Abhinav A. | Julia Hockenmaier |
| KnowledgeBot | Virginia Tech | Minqian L. | Lifu Huang |
| SalsaBot | Ohio State University | Chan Hee S. | Yu Su |
| SEAGULL | University of Michigan | Yichi Z. | Joyce Chai |
| SlugJARVIS | University of California, Santa Cruz | Jing G. | Xin Wang |
| ScottyBot | Carnegie Mellon University | Jonathan F. | Yonatan Bisk |
| Team EMMA | Heriot-Watt University | Amit P. | Alessandro Suglia |
| UMD-PRG | University of Maryland | David S. | Yiannis Aloimonos |
Participants in the public benchmark phase will be ranked based on their models' performance in the challenge. Leading university teams not currently sponsored for the SimBot Challenge may receive Amazon sponsorship to build a SimBot and compete from July 2022 to September 2023, during which they will receive real-time ratings and feedback from Alexa customers.
During the live interactions phase, the university teams will compete to develop the bot that best responds to customer commands and multimodal sensor inputs within a virtual world. As in previous SocialBot challenges, Alexa customers will participate: they will play the game on their Amazon Echo Show devices, seeking to solve mysteries and progressively harder tasks within the virtual environment.
TEACh dataset
In conjunction with the announcement of the SimBot Challenge in October 2021, Amazon publicly released TEACh, a new dataset of more than 3,000 human-to-human dialogues between a simulated user and a simulated robot communicating with each other to complete household tasks. In TEACh, the simulated user cannot interact with objects in the environment, and the simulated robot does not know the task to be completed, requiring the two to communicate and collaborate to successfully complete tasks. The public benchmark phase of the SimBot Challenge will be based on the TEACh dataset's Execution from Dialog History (EDH) benchmark, which evaluates a model's ability to predict subsequent simulated-robot actions given the dialogue history between the user and the robot, along with past robot actions and observations.
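The shape of an EDH-style prediction problem can be illustrated with a small sketch. This is an illustrative data structure only: the field names and action labels below are assumptions made for exposition, not the exact TEACh schema, and `predict_next_actions` stands in for a learned model.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EDHInstance:
    """Hypothetical EDH-style instance: field names are illustrative."""
    dialogue_history: List[Tuple[str, str]]  # (speaker, utterance) pairs so far
    past_actions: List[str]                  # actions the robot has already taken
    past_observations: List[str]             # e.g., references to egocentric image frames
    future_actions: List[str]                # ground-truth actions the model must predict

def predict_next_actions(instance: EDHInstance) -> List[str]:
    """Placeholder policy: a real model would condition on the dialogue,
    past actions, and observations to predict the robot's next actions."""
    return ["Forward"]  # trivial constant baseline for illustration

# Toy instance: the user directs the robot toward a household task.
example = EDHInstance(
    dialogue_history=[("User", "Please make a cup of coffee."),
                      ("Robot", "Where is the mug?")],
    past_actions=["Turn Left", "Forward"],
    past_observations=["frame_000.png", "frame_001.png"],
    future_actions=["Pickup Mug", "Place Mug"],
)
predicted = predict_next_actions(example)
# The benchmark would then score `predicted` against `example.future_actions`.
```

The key point the sketch captures is the split between conditioning context (dialogue, past actions, past observations) and the held-out future actions a model is scored on.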
Due to the unconstrained dialogue interface used in collecting the dataset, TEACh dialogue sessions demonstrate a wide range of interesting dialogue phenomena, including variation in instruction granularity, completeness, relevance, and repetition; coreference to previously mentioned entities, past actions, and locations; and language-guided backtracking and correction of mistakes, all of which will be important aspects of the SimBot Challenge.