Tools for Generating Synthetic Data Helped Bootstrap Alexa’s New-Language Releases

In the past few weeks, Amazon announced versions of Alexa in three new languages: Hindi, U.S. Spanish, and Brazilian Portuguese.

Like all new-language launches, these releases faced the problem of bootstrapping the machine learning models that interpret customer requests before there are any customer interactions to learn from. At a high level, the solution is to use synthetic data. These three locales were the first to benefit from two new in-house tools, developed by the Alexa AI team, that produce higher-quality synthetic data more efficiently.

Each new locale has its own speech recognition model, which converts an acoustic speech signal into text. But interpreting that text — determining what the customer wants Alexa to do — is the job of Alexa’s natural-language-understanding (NLU) systems.

When a new-language version of Alexa is under development, training data for its NLU systems is scarce. Alexa feature teams will propose some canonical examples of customer requests in the new language, which we refer to as “golden utterances”; training data from existing locales can be translated by machine translation systems; crowd workers may be recruited to generate sample texts; and some data may come from Cleo, an Alexa skill that allows multilingual customers to help train new-language models by responding to voice prompts with open-form utterances.

Even when data from all these sources is available, however, it’s sometimes not enough to train a reliable NLU model. The new bootstrapping tools, from Alexa AI’s Applied Modeling and Data Science group, treat the available sample utterances as templates and generate new data by combining and varying those templates.

One of the tools, which uses a technique called grammar induction, analyzes a handful of golden utterances to learn general syntactic and semantic patterns. From those patterns, it produces a series of rewrite expressions that can generate thousands of new, similar sentences. The other tool, guided resampling, generates new sentences by recombining words and phrases from examples in the available data. Guided resampling concentrates on optimizing the volume and distribution of sentence types, to maximize the accuracy of the resulting NLU models.

Rules of Grammar

Grammars have been a tool in Alexa’s NLU toolkit since well before the first Echo device shipped. A grammar is a set of rewrite rules for varying basic template sentences through word insertions, deletions, and substitutions.

Below is a very simple grammar, which models requests to play either pop or rock music, with or without the modifiers “more” and “some”. Below the rules of the grammar is a diagram of a computational system (a finite-state transducer, or FST) that implements them.

A toy grammar, which can model requests to play pop or rock music, with or without the modifiers “some” or “more”, and a diagram of the resulting finite-state transducer. The question mark indicates that the some_more variable is optional.
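For concreteness, here is a minimal sketch of what such a toy grammar might look like and how it expands into training sentences. The rule format, symbol names, and expansion code are illustrative assumptions, not Alexa’s internal grammar representation.

```python
# Illustrative toy grammar (assumed format, not Alexa's internal representation).
# "SOME_MORE?" marks an optional symbol, as in the figure caption above.
import itertools

grammar = {
    "REQUEST":   [["play", "SOME_MORE?", "GENRE", "music"]],
    "SOME_MORE": [["some"], ["more"]],
    "GENRE":     [["pop"], ["rock"]],
}

def expand(symbol):
    """Recursively expand a symbol into every surface word sequence it covers."""
    optional = symbol.endswith("?")
    name = symbol.rstrip("?")
    if name not in grammar:                 # terminal word
        return [[name]]
    results = []
    for production in grammar[name]:
        parts = [expand(s) for s in production]
        for combo in itertools.product(*parts):
            results.append([w for part in combo for w in part])
    if optional:
        results.append([])                  # "?" means the symbol may be omitted
    return results

for words in expand("REQUEST"):
    print(" ".join(words))
# play some pop music, play some rock music, play more pop music,
# play more rock music, play pop music, play rock music
```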

Given a list of, say, 50 golden utterances, a computational linguist could probably generate a representative grammar in a day, and it could be operationalized by the end of the following day. With the Applied Modeling and Data Science (AMDS) group’s grammar induction tool, that whole process takes seconds.

AMDS research scientists Ge Yu and Chris Hench and language engineer Zac Smith experimented with a neural network that learned to produce grammars from golden utterances. But they found that an alternative approach, called Bayesian model merging, offered similar performance with advantages in reproducibility and iteration speed.

The resulting system identifies linguistic patterns in lists of golden utterances and uses them to generate candidate rules for varying sentence templates. For instance, if two words (say, “pop” and “rock”) consistently occur in similar syntactic positions, but the phrasing around them varies, then one candidate rule will be that (in some defined contexts) “pop” and “rock” are interchangeable.
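A minimal sketch of that idea, assuming a very simple notion of “similar syntactic position” (identical surrounding words), might look like the following; the actual induction procedure is more sophisticated.

```python
# Sketch: propose two words as interchangeable if they occur in the same
# surrounding contexts across the golden utterances. Illustrative only.
from collections import defaultdict
from itertools import combinations

utterances = [
    "play some pop music",
    "play some rock music",
    "play more pop music",
    "play more rock music",
]

contexts = defaultdict(set)                 # word -> set of (left, right) contexts
for utt in utterances:
    words = utt.split()
    for i, w in enumerate(words):
        contexts[w].add((tuple(words[:i]), tuple(words[i + 1:])))

for w1, w2 in combinations(contexts, 2):
    shared = contexts[w1] & contexts[w2]
    if shared:                              # candidate substitution rule
        print(f"candidate rule: {w1!r} <-> {w2!r} ({len(shared)} shared contexts)")
# candidate rule: 'some' <-> 'more' (2 shared contexts)
# candidate rule: 'pop' <-> 'rock' (2 shared contexts)
```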

After exhaustively listing candidate rules, the system uses Bayesian probability to calculate which rule accounts for the most variance in the sample data, without overgeneralizing or introducing inconsistencies. That rule becomes an eligible variable in further iterations of the process, which recursively repeats until the grammar is optimized.

Crucially, the tool’s method for creating substitution rules allows it to take advantage of existing catalogues of frequently occurring terms or phrases. If, for instance, the golden utterances were sports related, and the grammar induction tool determined that the words “Celtics” and “Lakers” were interchangeable, it would also conclude that they were interchangeable with “Warriors”, “Spurs”, “Knicks”, and all the other names of NBA teams in a standard catalogue used by a variety of Alexa services.
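As a rough sketch of that catalogue lookup (the catalogue name, contents, and matching logic here are assumptions for illustration):

```python
# If two values found to be interchangeable both belong to a known catalogue,
# generalize the substitution rule to every entry in that catalogue.
catalogues = {
    "NBA_TEAM": {"Celtics", "Lakers", "Warriors", "Spurs", "Knicks"},  # toy subset
}

def generalize(value_a, value_b):
    for name, entries in catalogues.items():
        if value_a in entries and value_b in entries:
            return name, entries            # substitute any catalogue member
    return None, {value_a, value_b}         # otherwise keep only the observed pair

print(generalize("Celtics", "Lakers"))
```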

From a list of 50 or 60 golden utterances, the grammar induction tool might extract 100-odd rules that can generate several thousand sentences of training data, all in a matter of seconds.

Safe Swaps

The guided-resampling tool also uses catalogues and existing examples to augment training data. Suppose that the available data contains the sentences “play Camila Cabello” and “can you play a song by Justin Bieber?”, which have been annotated to indicate that “Camila Cabello” and “Justin Bieber” are of the type ArtistName. In NLU parlance, ArtistName is a slot type, and “Camila Cabello” and “Justin Bieber” are slot values.

The guided-resampling tool generates additional training examples by swapping out slot values — producing, for instance, “play Justin Bieber” and “can you play a song by Camila Cabello?” Adding the vast Amazon Music databases of artist names and song titles to the mix produces many additional thousands of training sentences.
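In code, the core of this augmentation step can be sketched roughly as follows; the carrier-phrase and slot representations are simplified assumptions, not the tool’s actual data format.

```python
# Sketch of slot-value resampling: swap annotated slot values between carrier
# phrases to produce new training utterances. Representation is illustrative.
carriers = [
    "play {ArtistName}",
    "can you play a song by {ArtistName}?",
]

# Observed values plus (in practice) entries from the Amazon Music catalogues.
artist_names = {"Camila Cabello", "Justin Bieber"}

augmented = {c.format(ArtistName=a) for c in carriers for a in artist_names}
for utterance in sorted(augmented):
    print(utterance)
```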

Blindly swapping slot values can lead to unintended consequences, so which slot values can be safely swapped? For example, in the sentences “play jazz music” and “read detective books”, both “jazz” and “detective” would be labeled with the slot type GenreName. But customers are unlikely to ask Alexa to play “detective music”, and unnatural training data would degrade the performance of the resulting NLU model.

AMDS’s Olga Golovneva, a research scientist, and Christopher DiPersio, a language engineer, used the Jaccard index — which measures the overlap between two sets — to evaluate pairwise similarity between slot contents in different types of requests. On that basis, they defined a threshold for valid slot mixing.
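A minimal sketch of that check might look like the following, where the two sets are the GenreName values observed in each request type and the threshold value is an assumption chosen for illustration.

```python
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: overlap between two sets of slot values."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Toy sets of GenreName values seen in music requests vs. book requests.
music_genres = {"jazz", "rock", "pop", "classical"}
book_genres = {"detective", "romance", "science fiction", "classical"}

THRESHOLD = 0.5                              # assumed cutoff; tuned empirically in practice
similarity = jaccard(music_genres, book_genres)
print(f"Jaccard = {similarity:.2f}, swap allowed: {similarity >= THRESHOLD}")
```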

Quantifying Complexity

As there are many different ways to request music, another vital question is how many variations of each template to generate in order to produce realistic training data. One answer is simply to follow the data distributions from languages that Alexa already supports.

Comparing distributions of sentence types across languages requires representing customer requests in a more abstract form. We can encode a sentence like “play Camila Cabello” according to the word pattern other + ArtistName, where other represents the verb “play”, and ArtistName represents “Camila Cabello”. For “play ‘Havana’ by Camila Cabello”, the pattern would be other + SongName + other + ArtistName. To abstract away from syntactic differences between languages, we can condense this pattern further to other + ArtistName + SongName, which represents only the semantic concepts included in the request.
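A small sketch of that abstraction step, assuming a simple token-level annotation and alphabetical ordering of slot names in the condensed pattern:

```python
def to_patterns(annotated_tokens):
    """Return (word pattern, condensed semantic pattern) for an annotated utterance."""
    word_pattern = []
    for _, slot in annotated_tokens:
        label = slot or "other"
        if not word_pattern or word_pattern[-1] != label:
            word_pattern.append(label)       # merge adjacent tokens with the same label
    semantic = ["other"] + sorted(l for l in word_pattern if l != "other")
    return " + ".join(word_pattern), " + ".join(semantic)

tokens = [("play", None), ("Havana", "SongName"), ("by", None),
          ("Camila", "ArtistName"), ("Cabello", "ArtistName")]
print(to_patterns(tokens))
# ('other + SongName + other + ArtistName', 'other + ArtistName + SongName')
```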

Given this level of abstraction, Golovneva and DiPersio investigated several alternative techniques for determining the semantic distributions of synthetic data.

Using Shannon entropy, which is a measure of uncertainty, Golovneva and DiPersio calculated the complexity of semantic sentence patterns, focusing on slots and their combinations. Entropy for semantic slots takes into consideration how many different values each slot might have, as well as how frequent each slot is in the data set overall. For example, the slot SongName occurs very frequently in music requests, and its potential values (different song titles) number in the millions; in contrast, GenreName also occurs frequently in music requests, but its set of possible values (music genres) is fairly small.
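One entropy computation consistent with that description, using toy value counts, is sketched below; the actual data and any additional weighting are internal details.

```python
from collections import Counter
from math import log2

def slot_entropy(value_counts: Counter) -> float:
    """Shannon entropy of a slot's value distribution, in bits."""
    total = sum(value_counts.values())
    return -sum((c / total) * log2(c / total) for c in value_counts.values())

song_names = Counter({"Havana": 40, "Vogue": 35, "Hello": 30, "Señorita": 25})
genre_names = Counter({"pop": 90, "rock": 40})

print(f"SongName entropy:  {slot_entropy(song_names):.2f} bits")   # higher: many, evenly spread values
print(f"GenreName entropy: {slot_entropy(genre_names):.2f} bits")  # lower: few possible values
```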

Customer requests to Alexa often include multiple slots (such as “play ‘Vogue’|SongName by Madonna|ArtistName” or “set a daily|RecurrenceType reminder to {walk the dog}|ReminderContent for {seven a. m.}|Time”), which increases the pattern complexity further.

In their experiments, Golovneva and DiPersio used the entropy measures from slot distributions in the data and the complexity of slot combinations to determine the optimal distribution of semantic patterns in synthetic training data. This resulted in proportionally larger training sets for more complex patterns than for less complex ones. NLU models trained on such data sets achieved higher performance than models trained on data sets that merely “borrowed” slot distributions from existing languages.
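The exact weighting scheme isn’t described here, but the allocation step can be sketched as a simple proportional split of a synthetic-data budget across patterns according to their complexity scores (the numbers below are made up for illustration):

```python
# Allocate a synthetic-data budget in proportion to pattern complexity.
complexity = {                               # toy entropy-based complexity scores
    "other + ArtistName + SongName": 2.4,
    "other + GenreName": 0.9,
}
budget = 10_000                              # total synthetic utterances to generate
total = sum(complexity.values())
allocation = {p: round(budget * c / total) for p, c in complexity.items()}
print(allocation)
# {'other + ArtistName + SongName': 7273, 'other + GenreName': 2727}
```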

Alexa is always getting smarter, and these and other innovations from AMDS researchers help ensure the best experience possible when Alexa launches in a new locale.

Acknowledgments: Ge Yu, Chris Hench, Zac Smith, Olga Golovneva, Christopher DiPersio, Karolina Owczarzak, Sreekar Bhaviripudi, Andrew Turner

About the Author
Janet Slifka is director of research science in Alexa AI’s Natural Understanding group and leads the Applied Modeling and Data Science team.
