How to make on-device speech recognition practical

Branching encoder networks make operation more efficient, while “neural diffing” reduces bandwidth requirements for model updates.

Historically, Alexa’s automatic-speech-recognition models, which convert speech to text, have run in the cloud. But in recent years, we’ve been working to move more of Alexa’s computational capacity to the edge of the network — to Alexa-enabled devices themselves.

The move to the edge promises faster response times, since data doesn’t have to travel to and from the cloud; lower consumption of Internet bandwidth, which is important in some applications; and availability on devices with inconsistent Internet connections, such as Alexa-enabled in-car sound systems.

At this year’s Interspeech, we and our colleagues presented two papers describing some of the innovations we’re introducing to make it practical to run Alexa at the edge.

In one paper, “Amortized neural networks for low-latency speech recognition”, we show how to reduce the computational cost of neural-network-based automatic speech recognition (ASR) by 45% with no loss in accuracy. Our method also has lower latencies than similar methods for reducing computation, meaning that it enables Alexa to respond more quickly to customer requests.

In the other paper, “Learning a neural diff for speech models”, we show how to dramatically reduce the bandwidth required to update neural models on the edge. Instead of transmitting a complete model, we transmit a set of updates for some select parameters. In our experiments, this reduced the size of the update by as much as 98% with negligible effect on model accuracy.

Amortized neural networks

Neural ASR models are usually encoder-decoder models. The input to the encoder is a sequence of short speech snippets called frames, which the encoder converts into a representation that’s useful for decoding. The decoder translates that representation into text.

Neural encoders can be massive, requiring millions of computations for each input. But much of a speech signal is uninformative, consisting of pauses between syllables or redundant sounds. Passing uninformative frames through a huge encoder is just wasted computation.

Our approach is to use multiple encoders, of differing complexity, and decide on the fly which should handle a given frame of speech. That decision is made by a small neural network called an arbitrator, which must process every input frame before it’s encoded. The arbitrator adds some computational overhead to the process, but the time saved by using a leaner encoder more than offsets it.
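To make the branching idea concrete, here is a toy NumPy sketch of an arbitrator routing individual frames to a fast or slow encoder branch. All weights, dimensions, and the hard argmax decision are illustrative stand-ins, not the paper’s trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for the two encoder branches (8-dim frames -> 4-dim encodings).
W_fast = rng.standard_normal((8, 4)) * 0.1    # lean branch: one projection
W1_slow = rng.standard_normal((8, 16)) * 0.1  # complex branch, layer 1
W2_slow = rng.standard_normal((16, 4)) * 0.1  # complex branch, layer 2
W_arb = rng.standard_normal((8, 2)) * 0.1     # tiny arbitrator network

def encode_frame(frame):
    """Route one frame to the fast or the slow encoder branch."""
    # The arbitrator scores the two branches; here, a hard argmax choice.
    scores = frame @ W_arb
    if scores[0] >= scores[1]:
        return frame @ W_fast, "fast"
    hidden = np.tanh(frame @ W1_slow)
    return hidden @ W2_slow, "slow"

frames = rng.standard_normal((5, 8))
routes = [encode_frame(f)[1] for f in frames]
print(routes)  # which branch handled each frame
```

In the real system the arbitrator is trained jointly with both encoders and also weighs factors such as the audio backlog, described below.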

Researchers have tried similar approaches in domains other than speech, but when they trained their models, they minimized the average complexity of the frame-encoding process. That leaves open the possibility that the last few frames of the signal may pass to the more complex encoder, causing delays (increasing latency).

[Image: amortized-loss-2.png]
Both processing flows above (a and b) distribute the same number of frames to the fast and slow (F and S) encoders, respectively, resulting in the same average computational cost. But the top flow incurs a significantly higher latency.

In our paper, we propose a new loss function that adds a penalty (Lamr in the figure above) for routing frames to the fast encoder when we don’t have a significant audio backlog. Without the penalty term, our branched-encoder model reduces latency to 29 to 234 milliseconds, versus thousands of milliseconds for models with a single encoder. But adding the penalty term cuts latency even further, to the 2-to-9-millisecond range.
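The effect of such a penalty can be sketched with a toy objective: a hypothetical term that charges the model for preferring the fast encoder while the backlog of unprocessed frames is small. The functional form, threshold, and weighting below are invented for illustration; the paper’s actual Lamr term differs in detail:

```python
def amortized_loss(asr_loss, p_fast, backlog, alpha=1.0, backlog_threshold=5):
    """Toy training objective: base ASR loss plus a penalty for routing
    to the fast encoder when the backlog of unprocessed frames is small."""
    # Penalty shrinks linearly as the backlog approaches the threshold,
    # and vanishes entirely once there is a significant backlog.
    penalty = p_fast * max(0.0, backlog_threshold - backlog) / backlog_threshold
    return asr_loss + alpha * penalty

# Same routing preference, different backlogs:
print(amortized_loss(2.0, p_fast=0.9, backlog=0))   # no backlog: penalized -> 2.9
print(amortized_loss(2.0, p_fast=0.9, backlog=10))  # large backlog: free   -> 2.0
```

With a large backlog, choosing the fast encoder costs nothing, so the arbitrator learns to reserve it for exactly the situations where it reduces latency.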

[Image: AmazonScience_AmnetDemo_V1.gif]
The audio backlog is one of the factors that the arbitrator considers when deciding which encoder should receive a given frame of audio.

In our experiments, we used two encoders, one complex and one lean, although in principle, our approach could generalize to larger numbers of encoders.

We train the arbitrator and both encoders together, end to end. During training, the same input passes through both encoders, and based on the accuracy of the resulting speech transcription, the arbitrator learns a probability distribution, which describes how often it should route frames with certain characteristics to the slow or fast encoder.

Over multiple epochs — multiple passes through the training data — we turn up the “temperature” on the arbitrator, skewing the distribution it learns more dramatically. In the first epoch, the split for a certain type of frame might be 70%-30% toward one encoder or the other. After three or four epochs, however, all of the splits are more like 99.99%-0.01% — essentially binary classifications.
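This sharpening can be illustrated with a scaled softmax, where turning up the temperature knob (in the article’s sense of the word: higher means a more skewed split) drives an initial roughly 70%-30% split toward a near-binary one. The logits and schedule below are invented for illustration, not the paper’s annealing scheme:

```python
import numpy as np

def routing_distribution(logits, temperature):
    """Softmax whose temperature (article's usage: higher = sharper)
    skews the arbitrator's routing split toward a hard decision."""
    z = np.asarray(logits, dtype=float) * temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

logits = [0.85, 0.15]  # arbitrator's raw preference for (slow, fast)
for epoch, temp in enumerate([1, 5, 20], start=1):
    p = routing_distribution(logits, temp)
    print(f"epoch {epoch}: slow={p[0]:.4f} fast={p[1]:.4f}")
```

At low temperature the split is soft, so both encoders see gradient signal; by the final epochs the distribution is essentially a binary classification.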

We used three baselines in our experiments, all of which were single-encoder models. One was the full-parameter model, and the other two were compressed versions of the same model. One of these was compressed through sparsification (pruning of nonessential network weights), the other through matrix factorization (decomposing the model’s weight matrix into two smaller matrices that are multiplied together). 

Against the baselines, we compared two versions of our model, which were compressed through the same two methods. We ran all the models on a single-threaded processor capable of 650 million floating-point operations per second.

Our sparse model had the lowest latency — two milliseconds, compared with 3,410 to 6,154 milliseconds for the baselines — and our matrix factorization model required the fewest floating-point operations per frame — 23 million, versus 30 million to 43 million for the baselines. Accuracy remained comparable, however: a word error rate of 8.6% to 8.7%, versus 8.5% to 8.7% for the baselines.

Neural diffs

The ASR models that power Alexa are constantly being updated. During the Olympics, for instance, we anticipated a large spike in requests that used words like “Ledecky” and “Kalisz” and updated our models accordingly.

With cloud-based ASR, when we’ve updated a model, we simply send copies of it to a handful of servers in a data center. But with edge ASR, we may ultimately need to send updates to millions of devices simultaneously. So one of our research goals is to minimize the bandwidth requirements for edge updates.

In our other Interspeech paper, we borrow an idea from software engineering — that of the diff, or a file that charts the differences between the previous version of a codebase and the current one.

Our idea was that, if we could develop the equivalent of a diff for neural networks, we could use it to update on-device ASR models, rather than having to transmit all the parameters of a complete network with every update.

We experimented with two different approaches to creating a diff, matrix sparsification and hashing. With matrix sparsification we begin with two matrices of the same size, one that represents the weights of the connections in the existing ASR model and one that’s all zeroes.

Then, when we retrain the ASR model on new data, we update, not the parameters of the old model, but the entries in the second matrix — the diff. The updated model is a linear combination of the original weights and the values in the diff.
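A minimal sketch of the idea: the device keeps its original weight matrix, training touches only a separate diff matrix, and the update that ships is just the diff’s non-zero entries. The values here are invented, and the “linear combination” is shown as a plain sum for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

W_old = rng.standard_normal((4, 4))  # weights already on the device
diff = np.zeros_like(W_old)          # the diff starts as all zeroes

# Retraining updates only a few entries of the diff (values illustrative).
diff[0, 2] = 0.31
diff[3, 1] = -0.12

W_new = W_old + diff                 # updated model = original + diff

# Only the non-zero diff entries need to be transmitted to the device.
rows, cols = np.nonzero(diff)
payload = list(zip(rows.tolist(), cols.tolist(), diff[rows, cols].tolist()))
print(len(payload), "entries instead of", W_old.size, "weights")
```

The sparser the diff, the smaller the payload, which is why the training procedure below actively prunes it.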

[Image: sparse_mask_training_image_only.png]
Over successive training epochs, we prune the entries of matrices with too many non-zeroes, gradually sparsifying the diff.

When training the diff, we use an iterative procedure that prunes matrices with too many non-zero entries. As we did when training the arbitrator in the branched-encoder network, we turn up the temperature over successive epochs to make the diff sparser and sparser.
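The pruning step can be sketched as magnitude-based thresholding whose sparsity target tightens over successive epochs. This is an illustrative stand-in for the paper’s procedure, not its exact schedule:

```python
import numpy as np

def sparsify_diff(diff, keep_fraction):
    """Keep only the largest-magnitude entries of the diff, zeroing the
    rest (simple magnitude pruning, standing in for the paper's method)."""
    magnitudes = np.abs(diff).ravel()
    k = max(1, int(keep_fraction * magnitudes.size))
    threshold = np.partition(magnitudes, -k)[-k]  # k-th largest magnitude
    return np.where(np.abs(diff) >= threshold, diff, 0.0)

rng = np.random.default_rng(1)
diff = rng.standard_normal((8, 8))
# Tighten the sparsity target over successive "epochs".
for keep in (0.5, 0.25, 0.1):
    diff = sparsify_diff(diff, keep)
    print(f"keep={keep}: {np.count_nonzero(diff)} non-zero entries")
```

Each pass zeroes out more of the diff, so the final update touches only the small set of weights that matter most for the new data.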

Our other approach to creating diffs was to use a hash function, a function that maps a large number of mathematical objects to a much smaller number of storage locations, or “buckets”. Hash functions are designed to distribute objects evenly across buckets, regardless of the objects’ values.

With this approach, we hash the locations in the diff matrix to buckets, and then, during training, we update the values in the buckets, rather than the values in the matrices. Since each bucket corresponds to multiple locations in the diff matrix, this reduces the amount of data we need to transfer to update a model. 
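Hash diffing can be sketched as follows: a handful of trainable bucket values cover a much larger diff matrix, because each matrix position is hashed to a bucket. The position hash below is an arbitrary toy choice, not the paper’s, and the bucket values are invented:

```python
import numpy as np

def hashed_diff(shape, buckets, seed=0):
    """Expand a small vector of bucket values into a full-size diff matrix
    by hashing each (row, col) position to a bucket index."""
    rows, cols = np.indices(shape)
    # Simple deterministic position hash (illustrative only).
    idx = (rows * 2654435761 + cols * 40503 + seed) % len(buckets)
    return buckets[idx]

# During training we learn only 4 bucket values...
buckets = np.array([0.2, -0.1, 0.05, 0.0])
# ...but they cover a 6x6 diff matrix (36 entries).
diff = hashed_diff((6, 6), buckets)
print(diff.shape, "matrix from", len(buckets), "trainable values")
```

Transmitting an update then requires only the bucket values (plus the shared hash seed), not one value per matrix entry.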

[Image: Hashed diffing.jpg]
With hash diffing, a small number of weights (in the hash buckets at bottom) are used across a matrix with a larger number of entries.
Credit: Glynis Condon

One of the advantages of our approach, relative to other approaches to compression, such as matrix factorization, is that with each update, our diffs can target a different set of model weights. By contrast, traditional compression methods will typically lock you into modifying the same set of high-importance weights with each update.

[Image: AmazonScience_CarModel_V1.gif]
An advantage of our diffing approach is that we can target a different set of weights with each model update, which gives us more flexibility in adapting to a changing data landscape.

In our experiments, we investigated the effects of three to five consecutive model updates, using different diffs for each. Hash diffing sometimes worked better for the first few updates, but over repeated iterations, models updated through hash diffing diverged more from full-parameter models. With sparsification diffing, the word error rate of a model updated five times in a row was less than 1% away from that of the full-parameter model, with diffs whose size was set at 10% of the full model’s.

About the Author
Grant Strimel is a senior applied scientist with Alexa AI.
