Text normalization with only 3% as much training data

The Proteno model dramatically increases the efficiency of the first step in text-to-speech conversion.

With services like Alexa, which use synthesized speech for output, text normalization (TN) is usually the first step in the process of text-to-speech conversion. TN takes raw text as input — say, the string 6-21-21 — and expands it into a verbalized form that a text-to-speech model can use to produce the final speech — “twenty first of June twenty twenty one”.

Historically, TN algorithms relied on hard-coded rules, which didn’t generalize across languages and were hard to maintain: a typical rule-based TN system for a single language might have thousands of rules, which evolve over time and whose development requires linguistic expertise.

Text normalization converts the output of computational processes — such as the natural-language-understanding models that handle Alexa customers' requests — into a form that will make sense when read out as synthesized speech.
Credit: Glynis Condon

More recently, academic and industry researchers have begun developing machine-learning-based TN models. But these have drawbacks, too. 

Sequence-to-sequence models occasionally make unacceptable errors, such as converting “$5” to “five pounds”. Semiotic-classification models require domain-specific information classes created by linguistic experts — classes such as emoticon or telephone number — which limits their generalizability. And both types of models require large amounts of training data, which makes it difficult to scale them across languages.

At this year’s meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), my colleagues and I are presenting a new text normalization model, called Proteno, that addresses these challenges.

We evaluated Proteno on three languages: English, Spanish, and Tamil. There’s a large body of research on TN in English, but no TN datasets existed for Spanish and Tamil. Consequently, we created our own datasets, which we have publicly released for use by other TN researchers.

Proteno specifies only a few low-level normalization classes — such as ordinal number, cardinal number, or Roman numeral — which generalize well across languages. From the data, Proteno then learns a huge variety of additional, fine-grained classes.

In our experiments on English, for instance, we used eight predefined classes, and Proteno automatically generated another 2,658. By contrast, semiotic-classification models typically have about 20 classes.

Proteno also uses a simple but effective scheme for tokenization, or splitting texts up into smaller chunks. Prior tokenization techniques required linguistic knowledge or data-heavy training; Proteno’s tokenization technique, by contrast, simply breaks text up at spaces and at transitions between Unicode classes, such as letter, numeral, or punctuation mark. Consequently, it can generalize across languages, it enables the majority of normalizations to be learned from the data, and it reduces the incidence of unacceptable errors.
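
To make this concrete, here is a minimal sketch of such a tokenizer in Python, using the standard unicodedata module to group characters into letters, digits, and everything else. The exact category grouping is an assumption for illustration, not necessarily the grouping Proteno uses.

```python
import unicodedata

def unicode_group(ch: str) -> str:
    """Map a character to a coarse group: letter, digit, or other (e.g., punctuation)."""
    category = unicodedata.category(ch)  # e.g., 'Lu', 'Nd', 'Pd'
    if category.startswith("L"):
        return "letter"
    if category.startswith("N"):
        return "digit"
    return "other"

def tokenize(text: str) -> list:
    """Split at spaces, then further split wherever the Unicode group changes."""
    tokens = []
    for chunk in text.split():
        current = chunk[0]
        for ch in chunk[1:]:
            if unicode_group(ch) == unicode_group(current[-1]):
                current += ch            # same group: extend the current token
            else:
                tokens.append(current)   # group changed: start a new token
                current = ch
        tokens.append(current)
    return tokens

print(tokenize("6-21-21"))  # ['6', '-', '21', '-', '21']
```

On the date string from the example above, this yields five separate tokens, which the model then tags and normalizes individually.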

Together, these techniques also allow Proteno to make do with much less training data than previous machine learning approaches. In our experiments, Proteno offered performance comparable to the previous state of the art in English while requiring only 3% as much training data.

Because there were no prior TN models trained on Spanish and Tamil, we had no benchmarks for our experiments. But on comparable amounts of training data, the Proteno models trained on Spanish and Tamil achieved accuracies comparable to the English model’s (99.1% for Spanish, 96.7% for Tamil, and 97.4% for English).

Methods

Proteno treats TN as a sequence classification problem, where most of the classes are learned. The figure below illustrates Proteno’s training and run-time processing pipelines, which have slightly different orders.

Proteno’s training and run-time processing pipelines.

The training pipeline consists of the following steps:

  • Tokenization: Previous tokenization methods relied on language-specific rules devised by linguists. For instance, the string 6-21-21 would be treated as a single token of the type date. We propose a granular tokenization mechanism that is language independent and applicable to any space-separated language. The text to be normalized is first split at its spaces and then further split wherever there is a change in the Unicode class. The string 6-21-21 thus becomes five tokens, and we count on Proteno to learn how to handle them properly.
  • Annotation: The tokenized, unnormalized text is annotated token by token, which gives us a one-to-one mapping between each unnormalized token and its ground-truth normalization. This data will be used to train the model.
  • Class generation: Each token is then mapped to a class. A class may receive tokens only of a particular type; so, for instance, the class corresponding to dollars will not accept the type pounds, and vice versa. This prevents the model from making unacceptable errors. Each class also has an associated normalization function.

    There are two kinds of classes:

    1. Predefined: We define a limited number of classes (around 8-10) containing basic normalization rules. A small subset of these (3-5) contain language-specific rules, such as how to distinguish cardinal and ordinal uses of a number. Other classes — such as self, digit, and Roman numerals — remain similar across many languages.
    2. Auto-generated (AG): The model also generates classes automatically by analyzing the unnormalized-to-normalized token mappings in the dataset. If no existing class (predefined or AG) can generate the target normalization for a token in the training data, a new class is automatically generated. For instance, if the dataset includes the annotation “12→December”, and if none of the existing classes can generate this normalization, then the class “12_to_December_AG” is created. This class accepts only “12”, and its normalization function returns “December”. AGs enable Proteno to learn the majority of normalizations automatically from data (a minimal sketch of this mechanism appears after this list).
  • Classification: We model TN as a sequence-tagging problem, where the input is a sequence of unnormalized tokens and the output is the sequence of classes that can generate the normalized text. We experimented with four different types of classifiers: conditional random fields (CRFs), bidirectional long short-term memory models (bi-LSTMs), bi-LSTM-CRF combinations, and Transformers.
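
To illustrate how predefined and auto-generated classes fit together with the classifier’s output, here is a minimal, hypothetical sketch. The specific class names, acceptance tests, and normalization functions are illustrative assumptions, not the implementation described in the paper.

```python
# Hypothetical sketch: each class pairs an acceptance test with a
# normalization function (not the paper's actual implementation).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class NormClass:
    name: str
    accepts: Callable[[str], bool]    # can this class handle the token?
    normalize: Callable[[str], str]   # how the token is verbalized

DIGITS = "zero one two three four five six seven eight nine".split()

# A few illustrative predefined classes.
predefined = [
    NormClass("self", str.isalpha, lambda t: t),                        # read the word as is
    NormClass("silent", lambda t: t in {"-", ",", "."}, lambda t: ""),  # drop punctuation
    NormClass("digit", str.isdigit, lambda t: " ".join(DIGITS[int(d)] for d in t)),
]
classes: Dict[str, NormClass] = {c.name: c for c in predefined}

def add_auto_generated(unnormalized: str, normalized: str) -> None:
    """Create an AG class if no existing class already produces this normalization."""
    for c in classes.values():
        if c.accepts(unnormalized) and c.normalize(unnormalized) == normalized:
            return  # an existing class already covers this annotation
    name = f"{unnormalized}_to_{normalized}_AG"
    classes[name] = NormClass(
        name,
        accepts=lambda t, u=unnormalized: t == u,   # accepts only this exact token
        normalize=lambda t, n=normalized: n,        # always returns the target form
    )

# The training annotation "12 -> December" is not covered by self/silent/digit
# (digit would give "one two"), so the class 12_to_December_AG is created.
add_auto_generated("12", "December")

def apply_classes(tokens: List[str], predicted: List[str]) -> str:
    """At run time the sequence classifier tags each token with a class name;
    normalization is just applying each class's function."""
    words = [classes[c].normalize(t) for t, c in zip(tokens, predicted)]
    return " ".join(w for w in words if w)

print(apply_classes(["on", "12", "-", "25"],
                    ["self", "12_to_December_AG", "silent", "digit"]))
# -> "on December two five"
```

Because each class accepts only tokens of a particular type, a mispredicted class cannot produce an arbitrary wrong verbalization, which is how the class constraints help curb unacceptable errors.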

Datasets

As the goal of Proteno is to be applicable to multiple languages, we evaluated it across three languages: English, Spanish, and Tamil. English had significantly more auto-generated classes than Tamil or Spanish, as written English tends to use many more abbreviations than the other two languages.

Language | Total predefined classes | Language-specific predefined classes | Auto-generated classes
Spanish | 10 | 5 | 279
Tamil | 8 | 3 | 74
English | 8 | 4 | 2,658
Proteno’s performance on 11 classes found in existing datasets, compared to the performance of two state-of-the-art models trained on 32 times as much data.

To benchmark Proteno’s performance in English, we could compare it to earlier models on only 11 of the 13 predefined classes found in existing datasets; differences in tokenization schemes meant that there were no logical mappings for the other two classes.

These results indicate that Proteno is a strong candidate for doing TN with low data annotation requirements while curbing unacceptable errors, which would make it a robust and scalable solution for production text-to-speech models.
