How dynamic lookahead improves speech recognition

Determining on the fly how much additional audio to process to resolve ambiguities increases accuracy while reducing latency relative to fixed-lookahead approaches.

Automatic speech recognition (ASR) models, which convert speech into text, come in two varieties, causal and noncausal. A causal model processes speech as it comes in; to determine the correct interpretation of the current frame (discrete chunk) of audio, it can use only the frames that preceded it. A noncausal model waits until an utterance is complete; in interpreting the current frame, it can use both the frames that preceded it and those that follow it.

Causal models tend to have lower latencies, since they don’t have to wait for frames to come in, but noncausal models tend to be more accurate, because they have additional contextual information. Many ASR models try to strike a balance between the two approaches by using lookahead: they let a few additional frames come in before deciding on the interpretation of the current frame. Sometimes, however, those additional frames don’t include the crucial bit of information that could resolve a question of interpretation, and sometimes, the model would have been just as accurate without them.

In a paper we presented at this year’s International Conference on Machine Learning (ICML), we describe an ASR model that dynamically determines lookahead for each frame, based on the input.

We compared our model to a causal model and two standard types of lookahead models and found that, across the board, our model achieved lower error rates than any of the baselines. At the same time, for a given error rate, it achieved lower latencies than either of the earlier lookahead models.

Computational graph

We represent the computations executed by our model with a computational graph. From left to right, the graph depicts successive time steps in the processing of input frames; from bottom to top, it depicts successive layers of the ASR network, from input to output. Edges in the graph depict causal relationships between nodes at past time steps and nodes at the current time step, and they also depict dependency relationships between nodes at future time steps and the current output.

In this computational graph, grey arrows depict causal relationships between network nodes at different time steps, and blue arrows depict relationships between nodes at future time steps and the current output node (v_i^3).

The adjacency matrix of a standard lookahead model.

A mask generated dynamically by our model.

We represent each layer in the graph, in turn, with an adjacency matrix, which maps all the layer’s nodes against those from the prior layer; the value in any cell of the matrix indicates the row node’s dependency on the column node.

The matrix of a purely causal model is divided by a diagonal from top left to bottom right; all the values to the right of the diagonal are zero, because there are no dependencies between future time steps and the current time step. An entirely noncausal model, by contrast, has a full matrix. A standard lookahead model has a diagonal that’s offset by as many frames as it looks ahead.
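To make the matrix structure concrete, here is a minimal sketch (our own illustration, not code from the paper) that builds the three kinds of adjacency matrices as NumPy arrays. Rows index a layer's output frames, columns index the prior layer's frames, and a 1 marks a dependency.

```python
import numpy as np

def causal_mask(T: int) -> np.ndarray:
    """Lower-triangular matrix: each frame depends only on itself and the past."""
    return np.tril(np.ones((T, T)))

def noncausal_mask(T: int) -> np.ndarray:
    """Full matrix: each frame depends on the whole utterance."""
    return np.ones((T, T))

def fixed_lookahead_mask(T: int, lookahead: int) -> np.ndarray:
    """Diagonal offset into the future by `lookahead` frames."""
    return np.tril(np.ones((T, T)), k=lookahead)

print(fixed_lookahead_mask(5, 2))  # each row extends two columns past the diagonal
```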

Our goal is to train a scheduler that generates adjacency matrices on the fly, with differing degrees of lookahead for different rows of the matrices. We call these matrices masks, because they mask out parts of the adjacency matrix. 
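In the sketch below (again, an illustration rather than the paper's implementation), we assume the scheduler has already emitted an integer lookahead for each output frame; turning those per-frame decisions into a binary mask is then straightforward.

```python
import numpy as np

def dynamic_mask(lookaheads: np.ndarray) -> np.ndarray:
    """lookaheads[t] = number of future frames that output frame t may use."""
    T = len(lookaheads)
    mask = np.zeros((T, T))
    for t, la in enumerate(lookaheads):
        mask[t, : min(T, t + la + 1)] = 1.0  # the past, the current frame, and `la` future frames
    return mask

# Hypothetical example: frame 1 needs two extra frames; the others need one or none.
print(dynamic_mask(np.array([0, 2, 1, 0])))
```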

Annealing

Ultimately, we want the values of the masks to be binary: either we look ahead to a future frame or we don’t. But the loss function we use during training must be differentiable, so we can use the standard gradient descent algorithm to update the model weights. Consequently, during training, we allow fractional values in the adjacency matrices.

In a process known as annealing, over the course of successive training epochs, we force the values of the adjacency matrix to diverge more and more, toward either 1 or 0. At inference time, the values output by the model will still be fractional, but they will be close enough to 1 or 0 that we can produce the adjacency matrix by simple rounding.

Dependency weights across successive time steps for several different cells in an adjacency matrix as the annealing “temperature” is turned up during training. Fractional values gradually resolve to 1 or 0.
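One simple way to picture the annealing schedule (a sketch of the idea, not our exact training recipe) is a sigmoid over learned logits whose sharpness grows over the course of training, so that mask entries that start out fractional are driven toward 0 or 1 by the time training ends.

```python
import numpy as np

def soft_mask(logits: np.ndarray, sharpness: float) -> np.ndarray:
    """Soft, differentiable mask entries; higher sharpness pushes them toward 0 or 1."""
    return 1.0 / (1.0 + np.exp(-sharpness * logits))

logits = np.array([-1.2, -0.3, 0.4, 2.0])  # hypothetical scheduler outputs
for epoch, sharpness in enumerate([1.0, 4.0, 16.0], start=1):
    print(f"epoch {epoch}:", np.round(soft_mask(logits, sharpness), 3))
# Early in training the entries are fractional; by the end they are nearly
# binary, so simple rounding at inference time changes little.
```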

Latency

A lookahead ASR model needs to balance accuracy and latency, and with our architecture, we strike that balance through the choice of loss function during training.

A naïve approach would be simply to have two terms in the loss function, one that penalizes error and one that penalizes total lookahead within the masks as a proxy for latency. But we take a more sophisticated approach.

During training, for every computational graph generated by our model, we compute the algorithmic latency for each output. Recall that, during training, the values in the graph can be fractional; we define algorithmic latency as the number of time steps between the current output node and the future input node whose dependency path to the current node has the highest weight.

In this example, the algorithmic latency for node v_i^3 is two frames, since it depends on the input value v_{i+2}^0.
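One way to read that definition in code (a sketch, under the assumption that each layer's soft mask is a T x T matrix of dependency weights): chain the per-layer masks to get a soft dependency weight from every output frame to every input frame, then take, for each output frame, the offset of its strongest current-or-future input dependency.

```python
import numpy as np

def algorithmic_latency(layer_masks: list) -> np.ndarray:
    """layer_masks: per-layer (T, T) soft masks, ordered from input layer to output layer."""
    dep = layer_masks[0]
    for mask in layer_masks[1:]:
        dep = mask @ dep  # compose dependency weights across layers
    T = dep.shape[0]
    # For output frame t, the offset of the highest-weight dependency on a
    # current-or-future input frame is its algorithmic latency, in frames.
    return np.array([int(np.argmax(dep[t, t:])) for t in range(T)])
```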

This allows us to compute the average algorithmic latency for all the examples in our training set and, consequently, to regularize the latency measure we use during training. That is, the latency penalty is not absolute but relative to the average lookahead necessary to ensure model accuracy.
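A hedged sketch of what such an objective might look like, assuming the per-frame latencies from the sketch above, a recognition loss `asr_loss` computed elsewhere (e.g., a CTC or transducer loss), a running training-set average `avg_latency`, and a trade-off weight `lam` — all of which are our own placeholders:

```python
import numpy as np

def total_loss(asr_loss: float, latencies: np.ndarray,
               avg_latency: float, lam: float = 0.1) -> float:
    """Combine the recognition loss with a latency penalty measured relative to
    the average lookahead observed across the training set."""
    excess = np.maximum(latencies - avg_latency, 0.0)  # penalize only above-average lookahead
    return asr_loss + lam * excess.mean()
```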

In a separate set of experiments, we used a different notion of latency: computational latency, rather than algorithmic latency. There, the key was to calculate how much of its backlogged computations the model could get through in each time step; the unfinished computations after the final time step determined the user-perceived latency.

With computational (rather than algorithmic) latency, the computational backlog after the final time step determines the user-perceived latency (UPL).
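The backlog view of latency can be sketched as follows (an illustration, with an arbitrary unit of “work” and a hypothetical per-frame compute budget): each incoming frame adds work, the model clears at most a fixed budget of work per frame interval, and whatever remains after the last frame arrives is what the user perceives as latency.

```python
import numpy as np

def user_perceived_latency(work_per_frame: np.ndarray, budget: float) -> float:
    """Work still outstanding after the final frame, expressed in frame intervals."""
    backlog = 0.0
    for work in work_per_frame:
        backlog = max(0.0, backlog + work - budget)  # clear at most `budget` per frame interval
    return backlog / budget

print(user_perceived_latency(np.array([1.0, 3.0, 0.5, 2.5]), budget=1.5))  # -> 1.0 frame interval
```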

As with any multi-objective loss function, we can tune the relative contribution of each loss term. Below are masks generated by two versions of our model for the same input data. Both versions were trained using algorithmic latency, but in one case (right), the latency penalty was more severe than in the other. As can be seen, the result is a significant drop in latency, but at the cost of an increase in error rate.

Masks generated by two different models, one (left) trained with a lower latency penalty and one (right) with a higher penalty.

We compared our model’s performance to four baselines: a causal model, with no lookahead; a layerwise model, which used the same fixed lookahead for every frame; a chunked model, which executed a lookahead, caught up with it, and then executed another; and a version of our dynamic-lookahead model trained with the standard latency penalty term. We also tested two versions of our model, one built on the Conformer architecture and one on the Transformer.
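For reference, here is a sketch of one common form of the chunked baseline’s mask (our illustration, not the baseline’s actual code): frames are grouped into fixed-size chunks, and every frame may look ahead only to the end of its own chunk, so the effective lookahead shrinks from chunk_size - 1 frames to zero as the chunk fills up.

```python
import numpy as np

def chunked_mask(T: int, chunk_size: int) -> np.ndarray:
    """Each frame sees the past plus the remainder of its own chunk."""
    mask = np.zeros((T, T))
    for t in range(T):
        chunk_end = ((t // chunk_size) + 1) * chunk_size  # end (exclusive) of t's chunk
        mask[t, : min(T, chunk_end)] = 1.0
    return mask

print(chunked_mask(6, 3))
```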

For the fixed-lookahead baselines, we considered three different lookahead intervals: two frames, five frames, and 10 frames. Across the board, our models were more accurate than all four baselines, while also achieving lower latencies.

Results of the dynamic-lookahead experiments.

Acknowledgments: Martin Radfar, Ariya Rastrow, Athanasios Mouchtaris
