Science innovations power Alexa Conversations dialogue management

A dialogue simulator and a conversations-first modeling architecture enable customers to interact with Alexa in a natural, conversational manner.

Today we announced the public beta launch of Alexa Conversations dialogue management. Alexa developers can now leverage a state-of-the-art dialogue manager powered by deep learning to create complex, nonlinear experiences — conversations that go well beyond today's typical one-shot interactions, such as "Alexa, what's the weather forecast for today?" or "Alexa, set a ten-minute pasta timer".

Alexa’s natural-language-understanding models classify requests according to domain, or the particular service that should handle the intent that the customer wants executed. The models also identify the slot types of the entities named in the requests, or the roles those entities play in fulfilling the request. In the request “Play ‘Rise Up’ by Andra Day”, the domain is Music, the intent is PlayMusic, and the names “Rise Up” and “Andra Day” fill the slots SongName and ArtistName.


Natural conversations rarely follow such predetermined dialogue paths. They often include anaphoric references (such as referring to a previously mentioned song by saying “play it”), contextual carryover of entities, customer revisions of requests, and many other types of interactions.

Alexa Conversations enables customers to interact with Alexa in a natural and conversational manner. At the same time, it relieves developers of the effort they would typically need to expend in authoring complex dialogue management rules, which are hard to maintain and often result in brittle customer experiences. Our dialogue augmentation algorithms and deep-learning models address the challenge of designing flexible and robust conversational experiences.

Dialogue management for Alexa Conversations is powered by two major science innovations: a dialogue simulator for data augmentation that generalizes a small number of sample dialogues provided by a developer into tens of thousands of annotated dialogues, and a conversations-first modeling architecture that leverages the generated dialogues to train deep-learning-based models to support dialogues beyond just the happy paths provided by the sample dialogues.

The Alexa Conversations dialogue simulator

Building high-performing deep-learning models requires large and diverse data sets, which are costly to acquire. With Alexa Conversations, the dialogue simulator automatically generates diverse dialogues from a few developer-provided sample dialogues that cover the skill's functionality, including difficult or uncommon exchanges that could occur.

The inputs to the dialogue simulator include developer application programming interfaces (APIs), slots and associated catalogues for slot values (e.g. city, state), and response templates (Alexa’s responses in different situations, such as requesting a slot value from the customer). These inputs together with their input arguments and output values define the skill-specific schema of actions and slots that the dialogue manager will predict.
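As an illustrative sketch, these build-time inputs can be thought of as a skill-specific schema. The representation below is hypothetical (the names and structure are ours, not the actual Alexa Conversations build-time format), but it captures the three kinds of assets the article describes:

```python
from dataclasses import dataclass, field

# Hypothetical representation of developer-provided build-time assets;
# the real Alexa Conversations format differs.

@dataclass
class SlotType:
    name: str              # e.g. "city"
    catalogue: list[str]   # sample values the simulator can draw from

@dataclass
class Api:
    name: str                # e.g. "SearchFlight"
    required_args: list[str] # slot names the API needs as input
    returns: list[str]       # output values the API produces

@dataclass
class ResponseTemplate:
    name: str       # e.g. "RequestDate"
    args: list[str] # slots interpolated into the prompt
    text: str       # Alexa's utterance pattern

@dataclass
class SkillSchema:
    slots: list[SlotType] = field(default_factory=list)
    apis: list[Api] = field(default_factory=list)
    templates: list[ResponseTemplate] = field(default_factory=list)

schema = SkillSchema(
    slots=[SlotType("city", ["Seattle", "Boston"]),
           SlotType("date", ["tomorrow", "next Friday"])],
    apis=[Api("SearchFlight", required_args=["city", "date"], returns=["flightId"])],
    templates=[ResponseTemplate("RequestDate", ["city"],
                                "What day would you like to fly to {city}?")],
)
```

Together, the slots, APIs, and response templates define the action space the dialogue manager will later predict over.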

The Alexa Conversations dialogue simulator generates tens of thousands of annotated dialogue examples that are used to train conversational models.

The dialogue simulator uses these inputs to generate additional sample dialogues in two steps.

In the first step, the simulator generates dialogue variations that represent different paths a conversation can take, such as different sequences of slot values and divergent paths that arise when a customer changes her mind.

More specifically, we conceive a conversation as a collaborative, goal-oriented interaction between two agents, a customer and Alexa. In this setting, the customer has a goal she wants to achieve, such as booking an airplane flight, and Alexa has access to resources, such as APIs for searching flight information or booking flights, that can help the customer reach her goal.

The simulated dialogues are generated through the interaction of two agent simulators, one for the customer, the other for Alexa. From the sample dialogues provided by the developer, the simulator first samples several plausible goals that customers interacting with the skill may want to achieve.

Conditioned on a sample goal, we generate synthetic interactions between the two simulator agents. The customer agent progressively reveals its goal to the Alexa agent, while the Alexa agent gathers the customer agent’s information, confirms information, and asks follow-up questions about missing information, guiding the interaction toward goal completion.
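A minimal sketch of this two-agent rollout, assuming a simple slot-filling goal (this toy function is ours, not the production simulator, which handles far richer interaction patterns):

```python
import random

def simulate_dialogue(goal, seed=0):
    """Toy two-agent rollout: the customer agent reveals one slot per turn
    (in a random order), and the Alexa agent asks for whichever required
    slot is still missing, then 'calls the API' once the goal is complete.
    `goal` maps slot names to values, e.g. {"city": "Seattle"}."""
    rng = random.Random(seed)
    pending = list(goal)   # slots the customer has not revealed yet
    rng.shuffle(pending)   # different seeds -> different dialogue paths
    known, turns = {}, []
    while pending:
        slot = pending.pop()
        known[slot] = goal[slot]  # customer progressively reveals the goal
        turns.append(("Customer", f"{slot}={goal[slot]}"))
        missing = [s for s in goal if s not in known]
        if missing:               # Alexa elicits the next missing slot
            turns.append(("Alexa", f"request({missing[0]})"))
    turns.append(("Alexa", f"call SearchFlight({known})"))
    return turns

dialogue = simulate_dialogue({"city": "Seattle", "date": "tomorrow"}, seed=1)
```

Varying the seed (and, in the real simulator, the sampled goal) yields the different conversation paths described above.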

In the second step, the simulator injects language variations into the dialogue paths. The variations include alternate expressions of the same customer intention, such as “recommend me a movie” versus “I want to watch a movie”. Some of these alternatives are provided by the sample conversations and Alexa response templates, while others are generated through paraphrasing.

The variations also include alternate slot values (such as “Andra Day” or “Alicia Keys” for the slot ArtistName), which are sampled from slot catalogues provided by the developer. Through these two steps, the simulator generates tens of thousands of annotated dialogue examples that are used for training the conversational models.
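The second step can be sketched as crossing alternate phrasings of the same intent with slot values sampled from the catalogues. This is a simplified stand-in for the paraphrasing and sampling machinery described above:

```python
import random

def inject_variations(paraphrases, catalogues, n=6, seed=0):
    """Toy language-variation step: pair alternate expressions of the same
    customer intention with slot values sampled from developer-provided
    catalogues, returning (utterance, slot annotations) pairs."""
    rng = random.Random(seed)
    variations = []
    for _ in range(n):
        template = rng.choice(paraphrases)  # alternate carrier phrase
        values = {slot: rng.choice(cat)     # alternate slot values
                  for slot, cat in catalogues.items()}
        variations.append((template.format(**values), values))
    return variations

examples = inject_variations(
    paraphrases=["play {song} by {artist}", "put on {song} from {artist}"],
    catalogues={"song": ["Rise Up"], "artist": ["Andra Day", "Alicia Keys"]},
)
```

Because each generated utterance carries its slot annotations, the output doubles as labeled training data for the conversational models.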

The Alexa Conversations modeling architecture

A natural conversational experience could follow any one of a wide range of nonlinear dialogue patterns. Our conversations-first modeling architecture leverages dialogue-simulator and conversational-modeling components to support dialogue patterns that include carryover of entities, anaphora, confirmation of slots and APIs, and proactively offering related functionality, as well as robust support for a customer changing her mind midway through a conversation.

We follow an end-to-end dialogue-modeling approach, where the models take into account the current customer utterance and context from the entire conversation history to predict the optimal next actions for Alexa. Those actions might include calling a developer-provided API to retrieve information and relaying that information to the customer; asking for more information from the customer; or any number of other possibilities.

The modeling architecture is built using state-of-the-art deep-learning technology and consists of three models: a named-entity-recognition (NER) model, an action prediction (AP) model, and an argument-filling (AF) model. The models are built by combining supervised training techniques on the annotated synthetic dialogues generated by the dialogue simulator and unsupervised pretraining of large Transformer-based components on text corpora.


First, the NER model identifies slots in each customer utterance, selecting from the slot types the developer defined as part of the build-time assets (date, city, etc.). For example, for the request “search for flights to Seattle tomorrow”, the NER model will identify “Seattle” as a city slot and “tomorrow” as a date slot.

The NER model is a sequence-tagging model built using a bidirectional LSTM layer on top of a Transformer-based pretrained sentence encoder. In addition to the current sentence, NER also takes dialogue context as input, which is encoded through a hierarchical LSTM architecture that captures the conversational history, including past slots and Alexa actions.
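Architecture aside, the NER model's contract is standard sequence tagging: tokens in, B-/I-/O slot labels out. The lookup-based tagger below is only a toy stand-in to illustrate that contract, not the LSTM model itself:

```python
def bio_tag(tokens, gazetteer):
    """Toy stand-in for the NER model: tag tokens with B-/I-/O labels via
    exact phrase lookup. The real model is a BiLSTM over a pretrained
    Transformer sentence encoder, conditioned on dialogue context."""
    tags = ["O"] * len(tokens)
    for slot, phrases in gazetteer.items():
        for phrase in phrases:
            words = phrase.lower().split()
            for i in range(len(tokens) - len(words) + 1):
                if [t.lower() for t in tokens[i:i + len(words)]] == words:
                    tags[i] = f"B-{slot}"                  # entity start
                    for j in range(i + 1, i + len(words)):
                        tags[j] = f"I-{slot}"              # entity continuation
    return tags

tokens = "search for flights to Seattle tomorrow".split()
tags = bio_tag(tokens, {"city": ["Seattle"], "date": ["tomorrow"]})
# tags -> ["O", "O", "O", "O", "B-city", "B-date"]
```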

Next, the AP model predicts the optimal next action for Alexa to take, such as calling an API or responding to the customer to either elicit more information or complete a request. The action space is defined by the APIs and Alexa response templates that the developer provides during the skill-authoring process.

The AP model is a classification model that, like the NER model, uses a hierarchical LSTM architecture to encode the current utterance and past dialogue context; its output ultimately passes to a feed-forward network that generates the action prediction.

Finally, the AF model fills in the argument values for the API and response templates by looking at the entire dialogue for context. Using an attention-based pointing mechanism over the dialogue context, the AF model selects compatible slots from all slot values that the NER model recognized earlier.

For example, suppose slot values “Seattle” and “tomorrow” exist in the dialogue context for city and date slots respectively, and the AP model predicted the SearchFlight API as the optimal next action. The AF model will fill in the API arguments with the appropriate values, generating a complete API call: SearchFlight(city="Seattle", date="tomorrow").
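A minimal stand-in for this selection step, assuming a most-recent-mention heuristic in place of the learned attention-based pointer:

```python
def fill_arguments(api_args, recognized_slots):
    """Toy argument filler: for each API argument, pick the most recently
    recognized slot value of a compatible type from the dialogue history.
    The real AF model points into the context with learned attention."""
    call = {}
    for arg in api_args:
        candidates = [value for slot, value in recognized_slots if slot == arg]
        if candidates:
            call[arg] = candidates[-1]  # most recent mention wins
    return call

# Slot values recognized by NER earlier in the dialogue, in order of mention.
history = [("city", "Seattle"), ("date", "tomorrow")]
call = fill_arguments(["city", "date"], history)
# call -> {"city": "Seattle", "date": "tomorrow"}
```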

The AP and AF models may also predict and generate more than one action after a customer utterance. For example, they may decide to first call an API to retrieve flight information and then call an Alexa response template to communicate this information to the customer. Therefore, the AP and AF models can make sequential predictions of actions, including the decision to stop predicting more actions and wait for the next customer request.
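This sequential prediction can be sketched as a loop that keeps executing predicted actions until a stop action is emitted (the policy below is a hypothetical scripted stand-in for the AP model):

```python
def run_turn(predict_action, fill_args, context, max_actions=5):
    """Toy action loop: predict and execute actions until the model emits
    STOP (i.e., wait for the next customer request)."""
    executed = []
    for _ in range(max_actions):
        action = predict_action(context)
        if action == "STOP":
            break
        executed.append((action, fill_args(action, context)))
        context = context + [action]  # executed actions feed back as context
    return executed

# Hypothetical policy: call the API, relay the result, then stop.
script = iter(["SearchFlight", "OfferFlightResponse", "STOP"])
actions = run_turn(lambda ctx: next(script), lambda a, ctx: {}, context=[])
# actions -> [("SearchFlight", {}), ("OfferFlightResponse", {})]
```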

The finer points

Consistency-check logic ensures that the resulting predictions are all valid actions, consistent with developer-provided information about their APIs. For example, the system will not generate an API call with an empty input argument if the developer has marked that argument as required.
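In its simplest form, such a check just validates a predicted call against the developer's API definition before execution (a toy sketch, not the production logic):

```python
def is_valid_call(api_name, args, required):
    """Toy consistency check: reject a predicted API call if any argument
    the developer marked as required is missing or empty."""
    return all(args.get(arg) not in (None, "")
               for arg in required.get(api_name, []))

required = {"SearchFlight": ["city", "date"]}
ok = is_valid_call("SearchFlight", {"city": "Seattle", "date": "tomorrow"}, required)
bad = is_valid_call("SearchFlight", {"city": "Seattle"}, required)
# ok -> True, bad -> False
```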

The inputs include the entire dialogue history, as well as the latest customer request, and the resulting model predictions are contextual, relevant, and not repetitive. For example, if a customer has already provided the date of a trip while searching for a flight, Alexa will not ask for the date when booking the flight. Instead, the date provided earlier will contextually carry over and pass to the appropriate API.

We leveraged large pretrained Transformer components (BERT) that encode current and past requests in the conversation. To ensure state-of-the-art model build-time and runtime latency, we performed inference architecture optimizations such as accelerating embedding computation on GPUs, implementing efficient caching, and leveraging both data- and model-level parallelism.

We are excited about the advances that enable Alexa developers to build flexible and robust conversational experiences that allow customers to have natural interactions with their devices. Developers interested in learning more about the "how" of building these conversational experiences should read our accompanying developer blog.

For more information about the technical advances behind Alexa Conversations, see our related publications on dialogue systems, dialogue state tracking, and data augmentation.

Acknowledgments: The entire Alexa Conversations team for making the innovations highlighted here possible.
