Leveraging unannotated data to bootstrap Alexa functions more quickly

Developing a new natural-language-understanding system usually requires training it on thousands of sample utterances, which can be costly and time-consuming to collect and annotate. That’s particularly burdensome for small developers, like many who have contributed to the library of more than 70,000 third-party skills now available for Alexa.

One way to make training more efficient is transfer learning, in which a neural network trained on huge collections of previously annotated data is then retrained on the comparatively sparse data in a new area. Last year, my colleagues and I showed that, for low volumes of training data, transfer learning could reduce the error rate of natural-language-understanding (NLU) systems by an average of 14%.

This year, at the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019), we will present a method for reducing the error rate by an additional 8% — again, for low volumes of training data — by leveraging millions of unannotated interactions with Alexa.

Using those interactions, we train a neural network to produce “embeddings”, which represent words as points in a high-dimensional space, such that words with similar functions are grouped together. In our earlier paper, we used off-the-shelf embeddings; replacing them with these new embeddings but following the same basic transfer-learning procedure reduced error rates on two NLU tasks.

Our embeddings are based on an embedding scheme called ELMo, or Embeddings from Language Models. But we simplify the network that produces the embeddings, speeding it up by 60%, which makes it efficient enough for deployment in a real-time system like Alexa. We call our embedding ELMoL, for ELMo Light.

Embeddings typically group words together on the basis of their co-occurrence with other words. The more co-occurring words two words have in common, the closer they are in the embedding space. Embeddings thus capture information about words’ semantic similarities without requiring human annotation of training data.
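As a toy illustration of that idea, co-occurrence-based training on even a handful of Alexa-style utterances tends to pull interchangeable words toward one another. The tiny corpus, the gensim library, and the hyperparameters below are our own assumptions for the sketch, not part of the published system:

```python
# Toy sketch only: co-occurrence-based word embeddings trained with gensim's
# Word2Vec on a handful of made-up, Alexa-style utterances. The corpus and
# hyperparameters are illustrative, not those of the production system.
from gensim.models import Word2Vec

utterances = [
    "play some jazz music".split(),
    "play relaxing piano music".split(),
    "start some jazz music".split(),
    "set a timer for ten minutes".split(),
    "set an alarm for six in the morning".split(),
]

model = Word2Vec(utterances, vector_size=25, window=2, min_count=1,
                 epochs=200, seed=0)

# With enough co-occurrence data, words used in similar contexts
# ("play"/"start") should score higher than unrelated pairs ("play"/"alarm").
print(model.wv.similarity("play", "start"))
print(model.wv.similarity("play", "alarm"))
```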

Most popular embedding networks are “pretrained” on huge bodies of textual data. The volume of data ensures that the measures of semantic similarity are fairly reliable, and many NLU systems simply use the pretrained embeddings. That’s what we did in our earlier paper, using an embedding scheme called FastText.

We reasoned, however, that requests to Alexa, even across a wide range of tasks, exhibit more linguistic regularities than the more varied texts used to pretrain embeddings. An embedding network trained on those requests might be better able to exploit their regularities. We also knew that we had enough training data to yield reliable embeddings.

ELMo differs from embeddings like FastText in that it is context sensitive: with ELMo, the word “bark”, for instance, should receive different embeddings in the sentences “the dog’s bark is loud” and “the tree’s bark is hard”.
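The mechanism is easy to see even with an untrained toy model; everything below, including the vocabulary and layer sizes, is a hypothetical sketch rather than ELMo itself. A static lookup table returns the same vector for “bark” in both sentences, while a bidirectional recurrent encoder returns different vectors, because its output at each position depends on the surrounding words:

```python
# Toy sketch of context sensitivity with a randomly initialized encoder
# (illustrative only; this is not ELMo, and the weights are untrained).
import torch
import torch.nn as nn

vocab = {w: i for i, w in enumerate("the dog's bark is loud tree's hard".split())}

torch.manual_seed(0)
embed = nn.Embedding(len(vocab), 16)                         # static: one vector per word
encoder = nn.LSTM(16, 16, bidirectional=True, batch_first=True)

def encode(sentence):
    ids = torch.tensor([[vocab[w] for w in sentence.split()]])
    static = embed(ids)                                      # context-independent vectors
    contextual, _ = encoder(static)                          # depends on the whole sentence
    return static[0], contextual[0]

s1, c1 = encode("the dog's bark is loud")
s2, c2 = encode("the tree's bark is hard")

# "bark" is the third token (index 2) in both sentences.
print(torch.allclose(s1[2], s2[2]))   # True: identical static embeddings
print(torch.allclose(c1[2], c2[2]))   # False: different contextual embeddings
```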

To track context, the ELMo network uses bidirectional long short-term memories, or bi-LSTMs. An LSTM is a network that processes inputs in order, and the output corresponding to a given input factors in previous inputs and outputs. A bidirectional LSTM is one that runs through the data both forward and backward.

ELMo uses a stack of bi-LSTMs, each of which processes the output of the one beneath it. But again, because Alexa transactions display more linguistic uniformity than generic texts, we believed we could extract adequate performance from a single bi-LSTM layer, while gaining significant speedups.
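As a rough sketch of that idea, here is one way a single-layer, ELMoL-style encoder could be written in PyTorch; the layer sizes, class names, and loss bookkeeping are our own assumptions, not the production implementation. The forward half of the bi-LSTM is trained to predict the next word, the backward half the previous word, and the concatenated states serve as the contextual embeddings:

```python
# Minimal sketch (not the production model): a word embedding layer feeding a
# single bidirectional LSTM, trained as a forward and backward language model.
import torch
import torch.nn as nn

class ELMoLSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=1,
                              bidirectional=True, batch_first=True)
        # Separate softmax heads for the forward and backward language models.
        self.fwd_head = nn.Linear(hidden_dim, vocab_size)
        self.bwd_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        states, _ = self.bilstm(self.embed(token_ids))      # (batch, seq, 2*hidden)
        fwd, bwd = states.chunk(2, dim=-1)                  # split the two directions
        return states, self.fwd_head(fwd), self.bwd_head(bwd)

def lm_loss(model, token_ids):
    """Predict token t+1 from the forward state at t and token t-1 from the
    backward state at t; the concatenated states are the contextual embeddings."""
    _, fwd_logits, bwd_logits = model(token_ids)
    ce = nn.CrossEntropyLoss()
    fwd = ce(fwd_logits[:, :-1].reshape(-1, fwd_logits.size(-1)),
             token_ids[:, 1:].reshape(-1))
    bwd = ce(bwd_logits[:, 1:].reshape(-1, bwd_logits.size(-1)),
             token_ids[:, :-1].reshape(-1))
    return fwd + bwd
```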

Our ELMo Light (ELMoL) embedding network uses a bi-directional long short-term memory (grey parallelograms) to predict the next word in an utterance from those that preceded it. "Bi-directional" means that it runs both forward (right arrows) and backward (left arrows), so it also predicts the previous word in an utterance from those that follow it.

In our experiments, we compared NLU networks that used three different embedding schemes: FastText (as in our previous paper), ELMo, and ELMoL. As a baseline, we also compared their performance with that of a network that used no embedding scheme at all.

With the FastText network, the embedding layers were pretrained; with the ELMo and ELMoL networks, we trained the embedding layers on 250 million unannotated requests to Alexa. Once all the embeddings were trained, we used another four million annotated requests to existing Alexa services to train each network on two standard NLU tasks. The first task was intent classification, or determining what action a customer wished Alexa to perform, and the second was slot tagging, or determining what entities the action should apply to.
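One common way to put contextual embeddings to work on those two tasks, sketched below with hypothetical names and sizes rather than the paper’s exact architecture, is to add a sentence-level head for intent classification and a token-level head for slot tagging on top of the encoder’s states. For the request “play some jazz,” for example, the intent head would predict something like PlayMusic while the slot tagger labels “jazz” as a genre.

```python
# Hedged sketch (names and sizes are assumptions): a sentence-level intent
# classifier and a token-level slot tagger sharing one contextual encoder,
# e.g. the ELMoLSketch defined earlier.
import torch
import torch.nn as nn

class NLUModel(nn.Module):
    def __init__(self, encoder, hidden_dim, num_intents, num_slot_labels):
        super().__init__()
        self.encoder = encoder                                  # yields (batch, seq, 2*hidden)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slot_labels)

    def forward(self, token_ids):
        states, _, _ = self.encoder(token_ids)
        intent_logits = self.intent_head(states.mean(dim=1))    # one prediction per utterance
        slot_logits = self.slot_head(states)                    # one prediction per token
        return intent_logits, slot_logits
```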

Initially, we allowed training on the NLU tasks to adjust the settings of the ELMoL embedding layers, too. But we found that this degraded performance: the early stages of training, when the network’s internal settings were swinging wildly, undid much of the prior training that the embedding layers had undergone.

So we adopted a new training strategy, in which the embedding layers started out fixed. Only after the network as a whole began to converge toward a solution did we allow slow modification of the embedding layers’ internal settings.
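In PyTorch-style pseudocode, that schedule might look like the sketch below; the attribute name `encoder` and the learning rates are assumptions for illustration. The pretrained embedding layers start frozen, and once training stabilizes they are added back to the optimizer with a much smaller learning rate:

```python
# Illustrative sketch of the freeze-then-slowly-unfreeze schedule; the
# attribute name "encoder" and the learning rates are assumptions.
import torch

def make_optimizer(model, head_lr=1e-3):
    # Phase 1: keep the pretrained embedding/encoder weights fixed and train
    # only the task-specific layers.
    for p in model.encoder.parameters():
        p.requires_grad = False
    return torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=head_lr)

def unfreeze_encoder(model, optimizer, encoder_lr=1e-5):
    # Phase 2: once the network has begun to converge, let the embedding
    # layers move again, but only slowly, so their pretraining is preserved.
    for p in model.encoder.parameters():
        p.requires_grad = True
    optimizer.add_param_group(
        {"params": list(model.encoder.parameters()), "lr": encoder_lr})
```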

Once the networks had been trained as general-purpose intent classifiers and slot taggers, we retrained them on limited data to perform new tasks. This was the transfer learning step.
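Concretely, that step could look something like the sketch below, under the assumption that the new task gets fresh output heads while the pretrained encoder is kept and fine-tuned on the small annotated set; the `NLUModel` and the data loader here are the hypothetical pieces from the earlier sketches, not the actual training code.

```python
# Hypothetical transfer-learning step: reuse the pretrained encoder, replace
# the output heads for the new skill, and fine-tune on a small labeled set.
import torch
import torch.nn as nn

def transfer_to_new_skill(model, hidden_dim, num_new_intents, num_new_slots,
                          loader, epochs=5, lr=1e-4):
    model.intent_head = nn.Linear(2 * hidden_dim, num_new_intents)
    model.slot_head = nn.Linear(2 * hidden_dim, num_new_slots)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for token_ids, intent_labels, slot_labels in loader:
            intent_logits, slot_logits = model(token_ids)
            loss = ce(intent_logits, intent_labels) + \
                   ce(slot_logits.reshape(-1, slot_logits.size(-1)),
                      slot_labels.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```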

As we expected, the network that used ELMo embeddings performed best, but the ELMoL network was close behind, and both showed significant improvements over the network that used FastText. Those improvements were greatest when the volume of data for the final retraining — the transfer learning step — was small. But that is precisely the context in which transfer learning is most useful.

When the number of training examples ranged from 100 to 500, the error rate improvement over the FastText network hovered around 8%. At those levels, the ELMo network was usually good for another 1-2% reduction, but it was too slow to be practical in a real-time system.

Acknowledgments: Angeliki Metallinou, Aditya Siddhant
