Text normalization with only 3% as much training data

The Proteno model dramatically increases the efficiency of the first step in text-to-speech conversion.

With services like Alexa, which use synthesized speech for output, text normalization (TN) is usually the first step in the process of text-to-speech conversion. TN takes raw text as input — say, the string 6-21-21 — and expands it into a verbalized form that a text-to-speech model can use to produce the final speech — “twenty first of June twenty twenty one”.

Historically, TN algorithms relied on hard-coded rules, which didn’t generalize across languages and were hard to maintain: a typical rule-based TN system for a single language might have thousands of rules, which evolve over time and whose development requires linguistic expertise.

[Figure: Text normalization]
Text normalization converts the output of computational processes — such as the natural-language-understanding models that handle Alexa customers' requests — into a form that will make sense when read out as synthesized speech.
Credit: Glynis Condon

More recently, academic and industry researchers have begun developing machine-learning-based TN models. But these have drawbacks, too. 

Sequence-to-sequence models occasionally make unacceptable errors, such as converting “$5” to “five pounds”. Semiotic-classification models require domain-specific information classes created by linguistic experts — classes such as emoticon or telephone number — which limits their generalizability. And both types of models require large amounts of training data, which makes it difficult to scale them across languages.

At this year’s meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), my colleagues and I are presenting a new text normalization model, called Proteno, that addresses these challenges.

We evaluated Proteno on three languages: English, Spanish, and Tamil. There’s a large body of research on TN in English, but no TN datasets existed for Spanish and Tamil. Consequently, we created our own datasets, which we have publicly released for use by other TN researchers.

Proteno specifies only a few low-level normalization classes — such as ordinal number, cardinal number, or Roman numeral — which generalize well across languages. From the data, Proteno then learns a huge variety of additional, fine-grained classes.

In our experiments on English, for instance, we used eight predefined classes, and Proteno automatically generated another 2,658. By contrast, semiotic-classification models typically have about 20 classes.

Proteno also uses a simple but effective scheme for tokenization, or splitting texts up into smaller chunks. Prior tokenization techniques required linguistic knowledge or data-heavy training; Proteno’s tokenization technique, by contrast, simply breaks text up at spaces and at transitions between Unicode classes, such as letter, numeral, or punctuation mark. Consequently, it can generalize across languages, it enables the majority of normalizations to be learned from the data, and it reduces the incidence of unacceptable errors.
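As a concrete illustration, here is a minimal sketch of such a tokenizer in Python, using only the standard unicodedata module. The coarse grouping into letter, numeral, and other is an assumption made for illustration; the paper may distinguish Unicode classes at a finer granularity.

```python
import unicodedata

def char_class(ch: str) -> str:
    """Map a character to a coarse Unicode class (assumed grouping)."""
    cat = unicodedata.category(ch)  # e.g., 'Lu', 'Ll', 'Nd', 'Pd'
    if cat.startswith("L"):
        return "letter"
    if cat.startswith("N"):
        return "numeral"
    return "other"  # punctuation, symbols, etc.

def tokenize(text: str) -> list:
    """Split at spaces, then at every change of Unicode class."""
    tokens = []
    for chunk in text.split():
        start = 0
        for i in range(1, len(chunk)):
            if char_class(chunk[i]) != char_class(chunk[i - 1]):
                tokens.append(chunk[start:i])
                start = i
        tokens.append(chunk[start:])
    return tokens

print(tokenize("6-21-21"))  # ['6', '-', '21', '-', '21']
```

Because the rule inspects only Unicode categories, the same tokenizer runs unchanged on English, Spanish, or Tamil text.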

Together, these techniques also allow Proteno to make do with much less training data than previous machine learning approaches. In our experiments, Proteno offered performance comparable to that of the previous state of the art in English — while requiring only 3% as much training data.

Because there were no prior TN models trained on Spanish and Tamil, we had no benchmarks for our experiments. But on comparable amounts of training data, the Proteno models trained on Tamil and Spanish achieved accuracy comparable to that of the model trained on English (99.1% for Spanish, 96.7% for Tamil, and 97.4% for English).

Methods

Proteno treats TN as a sequence classification problem, where most of the classes are learned. The figure below illustrates Proteno’s training and run-time processing pipelines, which have slightly different orders.

[Figure: Proteno’s training and run-time processing pipelines]

The training pipeline consists of the following steps:

  • Tokenization: Previous tokenization methods relied on language-specific rules devised by linguists. For instance, the string 6-21-21 would be treated as a single token of the type date. We propose a granular tokenization mechanism that is language independent and applicable to any space-separated language. The text to be normalized is first split at its spaces and then further split wherever there is a change in the Unicode class. The string 6-21-21 thus becomes five tokens (6, -, 21, -, 21), and we count on Proteno to learn how to handle them properly.
  • Annotation: The tokenized, unnormalized text is annotated token by token, which gives us a one-to-one mapping between each unnormalized token and its ground-truth normalization. This data will be used to train the model.
  • Class generation: Each token is then mapped to a class. A class may receive tokens only of a particular type; so, for instance, the class corresponding to dollars will not accept the type pounds, and vice versa. This prevents the model from making unacceptable errors. Each class also has an associated normalization function.

    There are two kinds of classes:

    1. Predefined: We define a limited number of classes (around 8-10) containing basic normalization rules. A small subset of these (3-5) contain language-specific rules, such as how to distinguish cardinal and ordinal uses of a number. Other classes — such as self, digit, and Roman numerals — remain similar across many languages.
    2. Auto-generated (AG): The model also generates classes automatically by analyzing the unnormalized-to-normalized token mappings in the dataset. If no existing class (predefined or AG) can generate the target normalization for a token in the training data, a new class is automatically generated. For instance, if the dataset includes the annotation “12 → December”, and if none of the existing classes can generate this normalization, then the class “12_to_December_AG” is created. This class accepts only “12”, and its normalization function returns “December”. AGs enable Proteno to learn the majority of normalizations automatically from data. (A code sketch of this procedure follows this list.)
  • Classification: We model TN as a sequence-tagging problem, where the input is a sequence of unnormalized tokens and the output is the sequence of classes that can generate the normalized text. We experimented with four types of classifiers: conditional random fields (CRFs), bidirectional long short-term memory models (bi-LSTMs), bi-LSTM-CRF combinations, and Transformers. (A sketch of the CRF formulation also follows this list.)
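To make the class-generation step concrete, here is a minimal sketch that scans annotated (unnormalized, normalized) token pairs and creates AG classes. The accepts/normalize interface and the SelfClass example are assumptions for illustration, not the paper's actual implementation.

```python
class AutoGeneratedClass:
    """AG class: accepts exactly one token; returns one fixed normalization."""
    def __init__(self, token, normalization):
        self.token, self.normalization = token, normalization
        self.name = f"{token}_to_{normalization}_AG"
    def accepts(self, token):
        return token == self.token
    def normalize(self, token):
        return self.normalization

class SelfClass:
    """Predefined class (illustrative): plain words normalize to themselves."""
    name = "self"
    def accepts(self, token):
        return token.isalpha()
    def normalize(self, token):
        return token.lower()

def generate_classes(annotations, predefined):
    """Add an AG class whenever no existing class yields the target form."""
    classes = list(predefined)
    for token, target in annotations:
        if not any(c.accepts(token) and c.normalize(token) == target
                   for c in classes):
            classes.append(AutoGeneratedClass(token, target))
    return classes

classes = generate_classes([("12", "December")], [SelfClass()])
print(classes[-1].name)  # 12_to_December_AG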
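And here is a hedged sketch of the sequence-tagging formulation, using a linear-chain CRF via the third-party sklearn-crfsuite package (our choice here; the paper's bi-LSTM, bi-LSTM-CRF, and Transformer variants consume the same token and label sequences). The feature template and class labels are illustrative assumptions.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(tokens, i):
    """Simple per-token features (illustrative, not the paper's template)."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
    }

# One training sequence: the tokenized date 6-21-21 labeled with the classes
# that generate its normalization (label names are made up for illustration).
tokens = ["6", "-", "21", "-", "21"]
X_train = [[token_features(tokens, i) for i in range(len(tokens))]]
y_train = [["month", "sil", "ordinal", "sil", "year"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train)[0])  # one class per token; each class's
                                # normalization function then emits the words
```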

Datasets

As the goal of Proteno is to be applicable to multiple languages, we evaluated it across three languages: English, Spanish, and Tamil. English had significantly more auto-generated classes than Tamil or Spanish, as written English tends to use many more abbreviations than the other two languages; the table below gives the class counts.

Language | Total predefined classes | Language-specific predefined classes | Auto-generated classes
Spanish  | 10 | 5 | 279
Tamil    | 8  | 3 | 74
English  | 8  | 4 | 2,658
[Figure: Proteno vs. state-of-the-art models]
Proteno’s performance on 11 classes found in existing datasets, compared to the performance of two state-of-the-art models trained on 32 times as much data.

To benchmark Proteno’s performance in English, we could compare it to earlier models on only 11 of the 13 predefined classes found in existing datasets; differences in tokenization schemes meant that there were no logical mappings for the other two classes.

These results indicate that Proteno is a strong candidate for doing TN with low data annotation requirements while curbing unacceptable errors, which would make it a robust and scalable solution for production text-to-speech models.

About the Author
Shubhi Tyagi is an applied scientist in the Amazon Text-to-Speech group.
