Alexa Prize faculty advisors provide insights on the competition
Teams' research papers that outline their approaches to development and deployment are now available.
Earlier today, the Alquist team from Czech Technical University learned it had been awarded the $500,000 first-prize purse in the Alexa Prize SocialBot Grand Challenge 4. Teams from Stanford University and the University at Buffalo placed second and third, respectively.
Each Alexa Prize challenge team has a faculty advisor. Below are some perspectives on the competition from the advisors to each of the finalists in the recently completed challenge.
Jan Sedivy, Czech Technical University
The CTU team was excited to be part of the Alexa Prize competition. It is very beneficial for an academic team to have a challenging project with many cooperating students. Creating a socialbot is an excellent target requiring innovative and concentrated thinking, but we also had a lot of fun designing catchy and attractive dialogs. Thank you, Amazon, for organizing the competition; we look forward to joining again.
Christopher Manning, Stanford University
We had a great group of students for the Chirpy Cardinal team’s second attempt at the Alexa Prize. I was impressed by the work they took on to almost entirely remake our codebase and to add major new features using neural network generation to more seamlessly blend in information from news articles or Wikipedia, and to improve the experience when discussing food and sports. Producing a human-like conversation is surprisingly subtle and tricky: you need to be able to maintain a natural and consistent conversational arc; you need to correctly pick up on people, places, or products that are mentioned; you need to be able to respond to curveball topics the other speaker may introduce; and you need to contribute novel directions so the conversation doesn’t become boring. There are still many times when Chirpy’s conversations become unnatural because we fail at one or another of these subtasks, but we made noticeable progress. Our conversations in the finals this year averaged more than twice as long as last year's — a sign of success! — and sometimes everything came together, like when one conversant said that their favorite song was “Chocolate” — really “Gimme chocolate!!” — by BabyMetal, and the system recognized that correctly, said it was a great group, and then proceeded to ask them what they thought about another BabyMetal song.
Rohini Srihari, University of Buffalo
Through our participation in the Alexa Prize Grand Challenge, Team Proto from the University at Buffalo has gained invaluable hands-on experience and insights into human-bot communication as well as neural models for NLP. Conversational AI has the potential to make a positive impact on people’s lives, and we look forward to furthering our research in this area.
Marilyn Walker, University of California, Santa Cruz
We had a great time this year; it’s been really fun. We started off with a strong system that had many novel components from last year, and we doubled down on some of those. I myself worked on developing some new modules to explore particular research ideas of my own, and that was also amazingly fun and kept me really engaged. It was great seeing some of our ideas from last year come to full fruition, like our idea of creating a dialogue manager that could flexibly interleave response generators for a particular topic and thus create an infinite number of novel dialogue interactions for any topic. We made that component stronger and developed it to cover more topics. We put together a dynamic team led by Omkar Patil, a computer science engineering master’s student, with four seasoned PhD students from last year’s team. Then we added a great group of five NLP master’s students, who worked on Athena’s discourse model for their NLP capstone project.
We like the idea of end-to-end dialogue systems, but we think they need more structure and control. So what we’ve created is a hybrid of neural and structured knowledge-informed modules. Many of Athena’s functionalities are an ensemble of classic rule-based components with trained neural models. For example, our dialogue manager recognizes topics and then calls on response generators, but once a pool of responses has been created, we use a response ranker we’ve repeatedly retrained to select the best response in context.
The NLP MS team’s new discourse model is a hybrid ensemble of a rule-based coreference engine and a trained neural engine. We also created a novel user model component that controls the dialogue strategy by remembering the user and their interests and preferences, both within a conversation and across multiple conversations.
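The pipeline Walker describes — a dialogue manager that routes the detected topic to response generators, pools their candidate responses, and hands the pool to a trained ranker — can be sketched as follows. This is a minimal illustration with hypothetical names; the team's actual topic classifier and neural ranker are stand-ins here (a keyword check and a simple word-overlap score).

```python
# Minimal sketch (hypothetical names) of a topic-routed response pipeline:
# generators produce candidates for the detected topic, and a ranker
# selects the best candidate in context.
from typing import Callable, Dict, List

def movies_rg(context: str) -> List[str]:
    # A rule-based response generator for the "movies" topic.
    return ["Have you seen any good movies lately?"]

def news_rg(context: str) -> List[str]:
    # A rule-based response generator for the "news" topic.
    return ["I read an interesting news story today."]

GENERATORS: Dict[str, List[Callable[[str], List[str]]]] = {
    "movies": [movies_rg],
    "news": [news_rg],
}

def detect_topic(utterance: str) -> str:
    # Stand-in for a trained topic classifier.
    return "movies" if "movie" in utterance.lower() else "news"

def rank(context: str, candidates: List[str]) -> str:
    # Stand-in for a retrained neural response ranker:
    # here, score each candidate by word overlap with the context.
    def score(candidate: str) -> int:
        return len(set(context.lower().split()) & set(candidate.lower().split()))
    return max(candidates, key=score)

def respond(utterance: str) -> str:
    topic = detect_topic(utterance)
    pool = [r for rg in GENERATORS[topic] for r in rg(utterance)]
    return rank(utterance, pool)

print(respond("I love movie nights"))
# → Have you seen any good movies lately?
```

The design point is the separation of concerns: generators can be added or swapped per topic without touching routing or ranking, and the ranker can be retrained independently as conversation logs accumulate.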
Jinho D. Choi, Emory University
This is an exciting time for conversational AI research, as it is getting more attention than ever. We are grateful to have been given an opportunity to interact with thousands of people every day through our chatbot, Emora. This year, we focused on developing a logic-based dialogue management framework that aims to mimic the inference process humans use to understand context and to derive multiple branches of implications in order to conduct engaging conversations. We believe that the Alexa Prize has successfully demonstrated the true potential of conversational AI in daily applications, advancing a new level of human-computer interaction that our generation has dreamed of for a long time.
Research papers from each of the teams participating in Alexa Prize Grand Challenge 4 are now available on the Alexa Prize website.