Five finalists selected for inaugural Alexa Prize SimBot Challenge
University teams are competing to help advance the science of conversational embodied AI and robust human-AI interaction.
Five university teams have been selected to participate in the final live interactions phase of the Alexa Prize SimBot Challenge. The teams were selected based on, among other things, customer feedback and scientific merit of the technical papers produced by each team.
The five university teams selected to participate in the live interactions phase of the challenge are:
University of California, Santa Barbara
Carnegie Mellon University
University of Michigan
University of California, Santa Cruz
Alexa customers can interact with the finalists’ SimBots by saying "Alexa, play with robot" on Echo Show or Fire TV devices. Customers’ ratings and feedback help the student teams improve their bots as they compete for the top awards, totaling $650,000 in cash prizes.
The Alexa Prize is a unique industry-academia partnership program that provides an agile, real-world experimentation framework and tools for accelerating scientific discovery. University students have the opportunity to launch innovations online and rapidly adapt them based on feedback from Alexa customers.
The SimBot Challenge is focused on helping advance the science of embodied AI agents that can engage effectively with humans to understand, learn, and collaborate to achieve their given missions.
“Next-generation autonomous AI assistants will need to be robust and versatile, capable of learning and solving challenging tasks that require multimodal interactions with humans, other agents, and the environment,” said Reza Ghanadan, a senior principal scientist in Alexa AI and head of the Alexa Prize. “It’s wonderful to see the tremendous talent, creativity, passion, and contributions of all Alexa Prize SimBot teams during this inaugural challenge toward accelerating the science of robust human-AI interaction and conversational embodied AI.”
As part of the Alexa Prize SimBot program, Amazon provided data, software tools, ML models, a visual and conversational AI pipeline, and the embodied-AI framework Alexa Arena to help teams innovate, launch, and experiment with their new AI ideas online and improve their research throughout the competition.
During the finals phase, university teams are competing to develop a bot that best responds to commands and multimodal sensor inputs from within a virtual world. Similar to previous Alexa Prize challenges, Alexa customers participate in this phase as well.
In this case, customers interact with virtual robots powered by universities’ AI models on their Echo Show or Fire TV devices, seeking to solve progressively harder tasks within the virtual environment. After the interaction, they may provide feedback and ratings for the university bots. That feedback is shared with university teams to help advance their research.
The winning teams will be determined during the SimBot Challenge finals event scheduled for the first week of May. Publications from all ten semifinalist teams will be featured on the Amazon Science website later this year.
In conjunction with the SimBot Challenge, Amazon publicly released TEACh, a new dataset of more than 3,000 human-to-human dialogues in which annotators play the roles of a simulated user and a simulated robot communicating with each other to complete household tasks.
In TEACh, the simulated user cannot interact with objects in the environment, and the simulated robot does not know which task is to be completed, so the two must communicate and collaborate to complete tasks successfully. The public benchmark phase of the SimBot Challenge, which ended in June 2022, was based on the TEACh Execution from Dialog History (EDH) benchmark, which evaluates a model’s ability to predict the simulated robot’s subsequent actions given the dialogue history between the user and the robot, along with the robot’s past actions and observations.
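The EDH setup described above can be sketched in a few lines of Python. This is a minimal illustration, not the official TEACh API: the class, field names, action strings, and the exact-match metric below are all illustrative assumptions (TEACh's actual evaluation replays predicted actions in a simulator and measures task success).

```python
# Hypothetical sketch of a TEACh EDH-style instance. Field names, action
# strings, and the metric are illustrative assumptions, not the TEACh API.
from dataclasses import dataclass
from typing import List


@dataclass
class EDHInstance:
    """One Execution from Dialog History (EDH) instance: a model sees the
    dialogue so far plus the robot's past actions, and must predict the
    robot's subsequent actions."""
    dialogue_history: List[str]  # utterances exchanged so far
    past_actions: List[str]      # robot actions already taken
    future_actions: List[str]    # ground-truth actions to be predicted


def exact_match_success(predicted: List[str], instance: EDHInstance) -> bool:
    """Toy success metric: the predicted action sequence matches the
    ground truth exactly. (TEACh's real metric is simulator-based.)"""
    return predicted == instance.future_actions


# Example instance for a "make coffee"-style household task.
example = EDHInstance(
    dialogue_history=[
        "User: Please make me a cup of coffee.",
        "Robot: Where can I find a mug?",
    ],
    past_actions=["Navigate(Kitchen)"],
    future_actions=[
        "Pickup(Mug)",
        "Place(Mug, CoffeeMachine)",
        "ToggleOn(CoffeeMachine)",
    ],
)

predicted = ["Pickup(Mug)", "Place(Mug, CoffeeMachine)", "ToggleOn(CoffeeMachine)"]
print(exact_match_success(predicted, example))  # True
```

The key structural point this captures is the asymmetry in the benchmark: the ground-truth actions are hidden from the model at prediction time, so success depends on recovering intent from the dialogue history and the actions taken so far.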