Amazon’s new research on automatic speech recognition

Interspeech papers include novel approaches to speaker identification and the training of end-to-end speech recognition models.

As the largest conference devoted to speech technologies, Interspeech has long been a showcase for the latest research on automatic speech recognition (ASR) from Amazon Alexa. This year, Alexa researchers had 12 ASR papers accepted at the conference.

The architecture of the RNN-T ASR system. Xt indicates the current frame of the acoustic signal. Yu-1 indicates the sequence of output subwords corresponding to the preceding frames.
From "Efficient minimum word error rate training of RNN-transducer for end-to-end speech recognition"

One of these, “Speaker identification for household scenarios with self-attention and adversarial training”, reports the speech team’s recent innovations in speaker ID, or recognizing which of several possible speakers is speaking at a given time.

Two others — “Subword regularization: an analysis of scalability and generalization for end-to-end automatic speech recognition” and “Efficient minimum word error rate training of RNN-transducer for end-to-end speech recognition” — examine ways to improve the quality of speech recognizers that use an architecture known as a recurrent neural network-transducer, or RNN-T.

In his keynote address this week at Interspeech, Alexa director of ASR Shehzad Mevawalla highlighted both of these areas — speaker ID and the use of RNN-Ts for ASR — as ones in which the Alexa science team has made rapid strides in recent years.

Speaker ID

Speaker ID systems — which enable voice agents to personalize content to particular customers — typically rely on either recurrent neural networks or convolutional neural networks, both of which are able to track consistencies in the speech signal over short spans of time. 

In “Speaker identification for household scenarios with self-attention and adversarial training”, Amazon applied scientist Ruirui Li and colleagues at Amazon, the University of California, Los Angeles, and the University of Notre Dame instead use an attention mechanism to identify longer-range consistencies in the speech signal.

In neural networks — such as speech processors — that receive sequential inputs, attention mechanisms determine which other elements of the sequence should influence the network’s judgment about the current element. 

Speech signals are typically divided into frames, which represent power concentrations at different sound frequencies over short spans of time. For a given utterance, Li and his colleagues’ model represents each frame as a weighted sum of itself and all the other frames in the utterance. The weights depend on correlations between the frequency characteristics of the frames; the greater the correlation, the greater the weight.
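
This weighting scheme can be sketched in a few lines of NumPy: each frame is re-expressed as a softmax-weighted sum of all the frames in the utterance, with weights derived from dot-product similarity between frames. This is only an illustration of the idea; the paper’s exact attention formulation may differ.

```python
import numpy as np

def self_attend(frames):
    """Re-represent each frame as a weighted sum of all frames in the utterance.

    frames: array of shape (T, D) -- T frames, D spectral features per frame.
    The weights come from scaled dot-product similarity between frames: the more
    two frames correlate, the more they contribute to each other's representation.
    """
    scores = frames @ frames.T / np.sqrt(frames.shape[1])     # (T, T) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)             # softmax over frames
    return weights @ frames                                   # (T, D) attended frames

# Toy utterance: 200 frames of 40-dimensional spectral features.
utterance = np.random.randn(200, 40)
print(self_attend(utterance).shape)   # (200, 40)
```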

This representation has the advantage of capturing the distinctive properties of a speaker’s voice conveyed by each frame while suppressing accidental properties that are unique to individual frames and less characteristic of the speaker’s voice as a whole. 

These representations pass to a neural network that, during training, learns which of these properties are the best indicators of a speaker’s identity. Finally, the sequential outputs of this network — one for each frame — are averaged together to produce a snapshot of the utterance as a whole. These snapshots are compared to stored profiles to determine the speaker’s identity.
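
A sketch of that final pooling-and-matching step appears below, assuming cosine similarity against stored profiles; the speaker names, vector dimensions, and scoring function are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def utterance_embedding(frame_outputs):
    """Average the network's per-frame outputs into one utterance-level snapshot."""
    return frame_outputs.mean(axis=0)

def identify(embedding, profiles):
    """Return the enrolled speaker whose stored profile is closest to the snapshot.

    profiles: dict mapping speaker name -> stored profile vector.
    Cosine similarity is assumed here as the comparison metric.
    """
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(profiles, key=lambda name: cosine(embedding, profiles[name]))

# Toy household with two enrolled speakers (hypothetical names and vectors).
profiles = {"speaker_a": np.random.randn(128), "speaker_b": np.random.randn(128)}
frame_outputs = np.random.randn(200, 128)   # one 128-d network output per frame
print(identify(utterance_embedding(frame_outputs), profiles))
```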

Li and his colleagues also used a few other tricks to make their system more reliable, such as adversarial training.

In tests, the researchers compared their system to four prior systems and found that its speaker identifications were more accurate across the board. Compared to the best-performing of the four baselines, the system reduced the identification error rate by about 12% on speakers whose utterances were included in the model training data and by about 30% on newly encountered speakers.

The RNN-T architecture

The two other papers examine ways to improve the quality of speech recognizers that use the increasingly popular recurrent-neural-network-transducer (RNN-T) architecture. An RNN-T processes a sequence of inputs in order, so that the output corresponding to each input factors in both the inputs and the outputs that preceded it. 

A series of possible subword segmentations of the speech input, with the probability of each.
From “Subword regularization: an analysis of scalability and generalization for end-to-end automatic speech recognition”

In the ASR application, the RNN-T takes in frames of an acoustic speech signal and outputs text — a sequence of subwords, or word components. For instance, the output corresponding to the spoken word “subword” might be the subwords “sub” and “_word”. 

Training the model to output subwords keeps the network size small. It also enables the model to deal with unfamiliar inputs, which it may be able to break into familiar components.
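
As a toy illustration of why subword outputs cope with unfamiliar words, the greedy segmenter below breaks a word into the longest known pieces. The vocabulary is hypothetical, and production systems typically use byte-pair encoding or a unigram model rather than greedy matching.

```python
def segment(word, vocab):
    """Greedy longest-match segmentation of a word into known subword units."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):                 # try the longest piece first
            piece = word[i:j] if i == 0 else "_" + word[i:j]
            if piece in vocab:
                pieces.append(piece)
                i = j
                break
        else:
            return None                                   # cannot segment with this vocab
    return pieces

vocab = {"sub", "_word", "_way"}                          # hypothetical subword vocabulary
print(segment("subword", vocab))   # ['sub', '_word']
print(segment("subway", vocab))    # ['sub', '_way'] -- an unseen word built from known pieces
```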

In the RNN-T architecture we consider, the input at time t — the current frame of the input speech — passes to an encoder network, which extracts acoustic features useful for speech recognition. At the same time, the current, incomplete sequence of output subwords passes to a prediction network, whose output indicates likely semantic properties of the next subword in the sequence.

These two representations — the encoding of the current frame and the likely semantic properties of the next subword — pass to another network, which on the basis of both representations determines the next subword in the output sequence.
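
A generic sketch of that joint step is shown below. It follows a common RNN-T formulation in which the two representations are combined through a nonlinearity and projected to a distribution over the subword vocabulary plus a "blank" symbol (the standard RNN-T device for consuming a frame without emitting a subword). The dimensions and weights are illustrative, not Amazon's model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_network(enc_t, pred_u, W_enc, W_pred, W_out, b_out):
    """One step of an RNN-T joint network (a generic sketch).

    enc_t:  encoder output for the current acoustic frame
    pred_u: prediction-network output for the subwords emitted so far
    Returns a distribution over the subword vocabulary plus a 'blank' symbol.
    """
    hidden = np.tanh(W_enc @ enc_t + W_pred @ pred_u)   # combine the two streams
    return softmax(W_out @ hidden + b_out)              # next-subword distribution

# Toy dimensions: 256-d encoder, 128-d prediction network, 640-d joint layer,
# 1,000 subwords + 1 blank symbol.
rng = np.random.default_rng(0)
W_enc, W_pred = rng.standard_normal((640, 256)), rng.standard_normal((640, 128))
W_out, b_out = rng.standard_normal((1001, 640)), np.zeros(1001)
probs = joint_network(rng.standard_normal(256), rng.standard_normal(128),
                      W_enc, W_pred, W_out, b_out)
print(probs.shape, round(probs.sum(), 6))   # (1001,) 1.0
```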

New wrinkles

“Subword regularization: an analysis of scalability and generalization for end-to-end automatic speech recognition”, by applied scientist Egor Lakomkin and his Amazon colleagues, investigates subword regularization, or exposing the model during training to multiple plausible ways of segmenting the same words into subwords, rather than a single fixed segmentation. In experiments, the researchers show that using multiple segmentations of the same speech transcription during training can reduce the ASR error rate by 8.4% in a model trained on 5,000 hours of speech data.
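
The sampling that subword regularization performs at training time can be sketched as follows; the candidate segmentations and their probabilities are hypothetical stand-ins for what a trained tokenizer would produce.

```python
import random

# Hypothetical candidate segmentations of one training transcription, with the
# probability a tokenizer might assign each (cf. the figure above). A segmentation
# is sampled anew each time the utterance is seen, instead of always using the
# single most probable one.
candidate_segmentations = [
    (["sub", "_word"],       0.7),
    (["su", "b", "_word"],   0.2),
    (["sub", "_wo", "rd"],   0.1),
]

def sample_segmentation(candidates):
    segmentations, probs = zip(*candidates)
    return random.choices(segmentations, weights=probs, k=1)[0]

for epoch in range(3):
    print(epoch, sample_segmentation(candidate_segmentations))
```

Tokenizers such as SentencePiece support this kind of sampled segmentation directly.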

“Efficient minimum word error rate training of RNN-transducer for end-to-end speech recognition”, by applied scientist Jinxi Guo and six of his Amazon colleagues, investigates a novel loss function — an evaluation criterion during training — for such RNN-T ASR systems. In experiments, it reduced the systems’ error rates by 3.6% to 9.2%.

For each input, RNN-Ts output multiple possible solutions — or hypotheses — ranked according to probability. In ASR applications, RNN-Ts are typically trained to maximize the probabilities they assign to the correct transcriptions of the input speech.

Trained speech recognizers, by contrast, are judged according to their word error rates, or the rate at which they make mistakes: misinterpretations, omissions, or erroneous insertions. Guo and his colleagues investigated efficient ways to train an RNN-T ASR system directly to minimize word error rate.

That means, for each training example, minimizing the expected word errors of the most probable hypotheses. But computing the probabilities of those hypotheses isn’t as straightforward as it may sound.
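
As a sketch of that objective, the expected word error over an N-best list can be computed as below. Renormalizing the hypotheses' probabilities over the list is a common approximation, and the paper's exact estimator may differ.

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance (substitutions + insertions + deletions)."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return d[len(ref)][len(hyp)]

def expected_word_errors(nbest, reference):
    """Expected number of word errors over an N-best list of (hypothesis, probability)."""
    total = sum(p for _, p in nbest)
    return sum((p / total) * edit_distance(reference, hyp) for hyp, p in nbest)

reference = "turn on the kitchen lights".split()
nbest = [("turn on the kitchen lights".split(), 0.6),   # correct
         ("turn on the kitten lights".split(),  0.3),   # one substitution
         ("turn the kitchen lights".split(),    0.1)]   # one deletion
print(expected_word_errors(nbest, reference))   # 0.3*1 + 0.1*1 ≈ 0.4
```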

The difficulty is that the exact same sequence of output subwords can align with the sequence of input frames in different ways: one output sequence, for instance, might identify the same subword as having begun one frame earlier or later than another output sequence does. Computing the probability of a hypothesis requires summing the probabilities of all its alignments.

The brute-force solution to this problem would be computationally impractical. But Guo and his colleagues propose using the forward-backward algorithm, which exploits the overlaps between alignments, storing intermediate computations that can be re-used. The result is a computationally efficient algorithm that enables a 3.6% to 9.2% reduction in error rates for various RNN-T models.
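
The forward half of that recursion can be sketched as follows: it sums the probabilities of every alignment of a label sequence to T frames in a single dynamic-programming pass. A real implementation works in log space and also computes the backward pass to obtain gradients, but the structure is the same.

```python
import numpy as np

def rnnt_sequence_probability(blank_probs, label_probs):
    """Sum the probabilities of all alignments of one label sequence to T frames.

    blank_probs[t, u]: probability of emitting 'blank' (advance one frame) at
                       frame t with u labels already emitted; shape (T, U+1).
    label_probs[t, u]: probability of emitting label u+1 at frame t with u labels
                       already emitted; shape (T, U).
    """
    T, U = blank_probs.shape[0], blank_probs.shape[1] - 1
    alpha = np.zeros((T, U + 1))
    alpha[0, 0] = 1.0
    for t in range(T):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            from_prev_frame = alpha[t - 1, u] * blank_probs[t - 1, u] if t > 0 else 0.0
            from_prev_label = alpha[t, u - 1] * label_probs[t, u - 1] if u > 0 else 0.0
            alpha[t, u] = from_prev_frame + from_prev_label
    # Every alignment ends by emitting blank after the last label at the last frame.
    return alpha[T - 1, U] * blank_probs[T - 1, U]

# Toy lattice: 4 frames, a 2-subword label sequence, arbitrary local probabilities.
rng = np.random.default_rng(1)
T, U = 4, 2
print(rnnt_sequence_probability(rng.uniform(0.1, 0.9, (T, U + 1)),
                                rng.uniform(0.1, 0.9, (T, U))))
```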

The other Amazon ASR papers at this year’s Interspeech are:

DiPCo - Dinner Party Corpus
Maarten Van Segbroeck, Zaid Ahmed, Ksenia Kutsenko, Cirenia Huerta, Tinh Nguyen, Björn Hoffmeister, Jan Trmal, Maurizio Omologo, Roland Maas

End-to-end neural transformer based spoken language understanding
Martin Radfar, Athanasios Mouchtaris, Siegfried Kunzmann

Improving speech recognition of compound-rich languages
Prabhat Pandey, Volker Leutnant, Simon Wiesler, Jahn Heymann, Daniel Willett

Improved training strategies for end-to-end speech recognition in digital voice assistants
Hitesh Tulsiani, Ashtosh Sapru, Harish Arsikere, Surabhi Punjabi, Sri Garimella

Leveraging unlabeled speech for sequence discriminative training of acoustic models
Ashtosh Sapru, Sri Garimella

Quantization aware training with absolute-cosine regularization for automatic speech recognition
Hieu Duy Nguyen, Anastasios Alexandridis, Athanasios Mouchtaris

Rescore in a flash: Compact, cache efficient hashing data structures for N-gram language models
Grant P. Strimel, Ariya Rastrow, Gautam Tiwari, Adrien Pierard, Jon Webb

Semantic complexity in end-to-end spoken language understanding
Joseph McKenna, Samridhi Choudhary, Michael Saxon, Grant P. Strimel, Athanasios Mouchtaris

Speech to semantics: Improve ASR and NLU jointly via all-neural interfaces
Milind Rao, Anirudh Raju, Pranav Dheram, Bach Bui, Ariya Rastrow 
