Who’s on First? How Alexa Is Learning to Resolve Referring Terms

This year, at the Association for Computational Linguistics’ Workshop on Natural-Language Processing for Conversational AI, my colleagues and I won one of two best-paper awards for our work on slot carryover.

Slot carryover is a method for solving the reference resolution problem that arises in the context of conversations with AI systems. For instance, if an Alexa customer asks, “When is Lion King playing at the Bijou?” and then follows up with the question “Is there a good Mexican restaurant near there?”, Alexa needs to know that “there” refers to the Bijou.

One of the things that makes reference resolution especially complicated for a large AI system like Alexa is that different Alexa services use different names — or slots — for the same data. A movie-finding service, for instance, might tag location data with the slot name Theater_Location, while a restaurant-finding service might use the slot name Landmark_Address. Over the course of a conversation, Alexa has to determine which slots used by one service should inherit data from which slots used by another.
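As a minimal illustration of the problem, the sketch below maps service-specific slot names onto a shared canonical type, so that a value tagged by one service can be considered for carryover into a differently named slot in another. The service names, slot names, and the mapping itself are invented for illustration; they are not Alexa's actual schema.

```python
# Hypothetical mapping from (service, slot name) to a canonical type.
# Two slots are carryover candidates only if their types match.
CANONICAL_TYPES = {
    ("MovieService", "Theater_Location"): "place",
    ("RestaurantService", "Landmark_Address"): "place",
    ("MusicService", "Song_Name"): "song",
}

def compatible(src, dst):
    """True if a value in slot `src` could be carried over into slot `dst`."""
    t = CANONICAL_TYPES.get(src)
    return t is not None and t == CANONICAL_TYPES.get(dst)

# "there" in the restaurant request can inherit the theater's location:
print(compatible(("MovieService", "Theater_Location"),
                 ("RestaurantService", "Landmark_Address")))  # True
```

A real system learns such compatibilities from data rather than hard-coding them, but the lookup captures the basic constraint.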

In large AI systems like Alexa, tracking conversational references is difficult because different services (Weather, Directions) use different “slot names” (WeatherCity, Town) for the same data (San Francisco).

Last year at Interspeech, we presented a machine learning system that learned to carry over slots from previous turns of dialogue to the current turn. That system made independent judgments about whether to carry over each slot value from one turn to the next. Even though it significantly outperformed a rule-based baseline system, its independent decision-making was a limitation.

In many Alexa services, slot values are highly correlated, and a strong likelihood of carrying over one slot value implies a strong likelihood of carrying over another. To take a simple example, some U.S. services have slots for both city and state; if one of those slots is carried over from one dialogue turn to another, it’s very likely that the other should be as well.
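The toy example below (not the production model) shows how independent thresholding can split a correlated pair, while even a crude pairwise rule keeps it together. The slot names, probabilities, and the 0.3 rescue threshold are all made up for illustration.

```python
# Slots whose carryover decisions should reinforce each other:
CORRELATED = {("City", "State"), ("State", "City")}

def independent_decisions(probs, threshold=0.5):
    """Carry over each slot on its own merits."""
    return {s for s, p in probs.items() if p >= threshold}

def joint_decisions(probs, threshold=0.5):
    """Let a confident slot pull in a correlated, borderline partner."""
    carried = independent_decisions(probs, threshold)
    for a, b in CORRELATED:
        if a in carried and b not in carried and probs.get(b, 0) >= 0.3:
            carried.add(b)
    return carried

probs = {"City": 0.9, "State": 0.45, "Song_Name": 0.1}
print(sorted(independent_decisions(probs)))  # ['City']
print(sorted(joint_decisions(probs)))        # ['City', 'State']
```

The architectures described below learn these interactions end to end instead of relying on hand-written pair rules.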

Exploiting these correlations should improve the accuracy of a slot carryover system. The decision about whether to carry over a given slot value should reinforce decisions about carrying over its correlates, and vice versa. In our new work, which was spearheaded by Tongfei Chen, a Johns Hopkins graduate student who was interning with our group, we evaluate two different machine learning architectures designed to explicitly model such slot correlations. We find that both outperform the system we reported last year.

Also at the Workshop on Natural-Language Processing for Conversational AI, Ruhi Sarikaya, director of applied science for the Alexa AI team, delivered a keynote talk on enabling scalable, natural, self-learning, contextual conversational systems. Although Sarikaya's talk ranged across the whole history of conversational AI, he concentrated on three recent innovations from the Alexa team: self-learning, contextual carryover, and skill arbitration. Contextual carryover is the topic of the main post on this page, and skill arbitration is a topic that Y. B. Kim has discussed at length on this blog, but self-learning has received comparatively little attention.

Self-learning is a process whereby Alexa improves performance with no human beings in the loop. It depends on implicit signals that a response is unsatisfactory, such as a "barge-in", in which a customer cuts off Alexa's response with a new request, or a rephrase of an earlier request. As Sarikaya explained, Alexa uses five different machine learning models to gauge whether customer reactions indicate dissatisfaction and, if so, whether they suggest what Alexa's responses should have been.

From such data, Sarikaya explained, the self-learning system produces a graphical model – an absorbing Markov chain – that spans multiple interactions between Alexa and multiple customers. This reinterprets the problem of correcting misunderstood queries as one of collaborative filtering, the technology underlying Amazon.com's recommendation engine. Rules learned by Alexa's self-learning system currently correct millions of misinterpretations every week, in Alexa's music, general-media, books, and video services. This year, Sarikaya explained, his team will introduce more efficient algorithms for exploring the graphical model of customer interactions, to increase the likelihood of finding the optimal correction for a misinterpreted request.
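To make the absorbing-Markov-chain idea concrete, here is a toy walk over query states: non-absorbing states are (mis)recognized queries, absorbing states are interpretations that ended in success signals, and the transition probabilities reflect how often customers' follow-ups moved between them. The states, probabilities, and rewrite below are entirely invented; they are not Alexa's actual graph.

```python
import random

# Toy chain: a misrecognized query usually transitions to the rephrase
# that customers actually wanted, which is absorbed by a success state.
TRANSITIONS = {
    "play maj and dragons": {"play imagine dragons": 0.8,
                             "play maj and dragons": 0.2},
    "play imagine dragons": {"SUCCESS: Imagine Dragons": 1.0},
}
ABSORBING = {"SUCCESS: Imagine Dragons"}

def absorb(state, rng, max_steps=10):
    """Walk the chain from a query until an absorbing state is reached."""
    for _ in range(max_steps):
        if state in ABSORBING:
            return state
        r, acc = rng.random(), 0.0
        for candidate, p in TRANSITIONS[state].items():
            acc += p
            if r <= acc:
                state = candidate
                break
    return state

print(absorb("play maj and dragons", random.Random(0)))
```

In the real system, the chain spans many customers' interactions, and the correction rule is read off the chain's absorption structure rather than sampled one walk at a time.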

The first architecture we used to model slot interdependencies was a pointer network based on the long short-term memory (LSTM) architecture. Both the inputs and the outputs of LSTMs are sequences of data. A network configuration that uses pairs of LSTMs is common in natural-language understanding (NLU), automatic speech recognition, and machine translation. The first LSTM — the encoder — produces a vector representation of the input sequence, and the second LSTM — the decoder — converts it back into a data sequence. In machine translation, for instance, the vector representation would capture the semantic content of a sentence, regardless of what language it’s expressed in.

In our architecture, we used a bidirectional LSTM (bi-LSTM) encoder, which processes the input data sequence both forward and backward. The decoder was a pointer network, which outputs a subset of slots that should be carried over from previous turns of dialogue.
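The sketch below shows only the pointer-style selection step, assuming the bi-LSTM encoder has already produced one vector per candidate slot: the decoder state is scored against each encoding, and a softmax turns the scores into a distribution over which input slot to emit next. The encodings and decoder state are toy values; the real model learns them.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pointer_step(decoder_state, slot_encodings):
    """Return a distribution over input slots (which slot to point at next)."""
    scores = [sum(d * h for d, h in zip(decoder_state, enc))
              for enc in slot_encodings]
    return softmax(scores)

# Three candidate slots from the dialogue context (toy 3-d encodings):
slots = ["Theater_Location", "Song_Name", "Artist_Name"]
encodings = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.7]]
dist = pointer_step([1.0, 0.0, 0.1], encodings)
print(slots[dist.index(max(dist))])  # points at "Theater_Location"
```

Because the decoder points back into its own input, the output vocabulary is exactly the set of candidate slots, which is what makes the pointer formulation a natural fit for carryover.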

The other architecture we considered uses the same encoder as the first one, but it replaces the pointer-network decoder with a self-attention decoder based on the transformer architecture. The transformer has recently become very popular for large-scale natural-language processing because of its efficient training and high accuracy. Its self-attention mechanism enables it to learn what additional data to emphasize when deciding how to handle a given input.

Our transformer-based network explicitly compares each input slot to all the other slots that have been identified in several preceding turns of dialogue, which are referred to as the dialogue context. During training, it learns which slot types from the context are most relevant when deciding what to do with any given type of input slot.
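A toy version of that comparison is single-head self-attention over slot encodings, where each slot's query, key, and value are simply its own encoding: each slot attends over every slot in the context, so correlated slots (City and State in this made-up example) receive high mutual attention weight. The trained model uses learned projections and multiple heads; this is only the bare mechanism.

```python
import math

def self_attention(encodings):
    """Single-head scaled dot-product attention where each slot's query,
    key, and value are its own encoding; returns outputs and weights."""
    scale = math.sqrt(len(encodings[0]))
    outputs, all_weights = [], []
    for q in encodings:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in encodings]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]
        outputs.append([sum(wj * v[i] for wj, v in zip(w, encodings))
                        for i in range(len(q))])
        all_weights.append(w)
    return outputs, all_weights

slots = ["City", "State", "Song_Name"]
# City and State encodings are deliberately similar:
enc = [[1.0, 0.9, 0.0], [0.9, 1.0, 0.0], [0.0, 0.0, 1.0]]
_, weights = self_attention(enc)
# City attends to State far more strongly than to Song_Name:
print(weights[0][1] > weights[0][2])  # True
```

During training, the learned projections sharpen exactly these weights, so the model discovers which context slot types matter for each input slot type.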

We tested our system using two different data sets, one a standard public benchmark and the other an internal data set. Each dialogue in the data sets consisted of several turns, alternating between customer utterances and system responses. A single turn might involve values for several slots. On both data sets, the new architectures, which are capable of modeling slot interdependencies, outperformed the system we presented last year, which makes its decisions independently.

Overall, the transformer system performed better than the pointer network, but the pointer network did exhibit some advantages in recognizing slot interdependencies across longer spans of dialogue. With the pointer-network architecture, we also found that ordering its slot inputs by turn improved performance relative to a random ordering, but further ordering slots within each turn lowered performance.

Suppose, for instance, that a given turn of dialogue consisted of the customer instruction “Play ‘Misty’ by Erroll Garner”, which the NLU system interpreted as having two slots, Song_Name (“Misty”) and Artist_Name (“Erroll Garner”). The bi-LSTM fared better if we didn’t consistently follow the order imposed by the utterance (Song_Name first, then Artist_Name) but instead varied the order randomly. This may be because random variation helped the system generalize better to alternate phrasings of the same content (“Play the Erroll Garner song ‘Misty’”).
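The input ordering that worked best can be sketched as follows: keep slots grouped by dialogue turn, oldest turn first, but shuffle the slots within each turn. The turns and slot names here are illustrative.

```python
import random

def order_slots(turns, rng):
    """turns: list of per-turn slot lists, oldest turn first.
    Preserves turn order but randomizes slot order within each turn."""
    ordered = []
    for turn_slots in turns:
        shuffled = list(turn_slots)
        rng.shuffle(shuffled)     # random order *within* a turn
        ordered.extend(shuffled)  # turn order itself is preserved
    return ordered

turns = [["Theater_Location"],
         ["Song_Name", "Artist_Name"]]
print(order_slots(turns, random.Random(7)))
```

Turn-level grouping gives the encoder a consistent notion of recency, while the within-turn shuffle discourages it from latching onto utterance-specific word order.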

Going forward, we will investigate further means of improving our slot carryover methodology, such as transfer learning and the addition of more data from the dialogue context, in order to improve Alexa’s ability to resolve references and deliver a better experience to our customers.

Acknowledgments: Tongfei Chen, Hua He, Lambert Mathias
