New speech recognition experiments demonstrate how machine learning can scale

Customer interactions with Alexa are constantly growing more complex, and on the Alexa science team, we strive to stay ahead of the curve by continuously improving Alexa’s speech recognition system.

Increasingly, keeping pace with Alexa’s expanding capabilities will require automating the learning process, through techniques such as semi-supervised learning, which leverages a small amount of annotated data to extract information from a much larger store of unannotated data.

At this year’s International Conference on Acoustics, Speech, and Signal Processing, Alexa senior principal scientist Nikko Strom and I will report what amounts to a large-scale experiment in semi-supervised learning. We developed an acoustic model, a key component of a speech recognition system, using just 7,000 hours of annotated data and 1 million hours of unannotated data. To our knowledge, the largest data set previously used to train an acoustic model was 125,000 hours. In our paper, we describe a number of techniques that, in combination, made it computationally feasible to scale to a dataset eight times that size.

Compared to a model trained only on the annotated data, our semi-supervised model reduces the speech recognition error rate by 10% to 22%, with greater improvements coming on noisier data. We are currently working to integrate the new model into Alexa, with a projected release date of later this year.

As valuable as the model is in delivering better performance, it’s equally valuable for what it taught us about doing machine learning at scale.

Automatic speech recognition systems typically comprise three components: an acoustic model, which translates audio signals into phones, the smallest phonetic units of speech; a pronunciation model, which stitches phones into words; and a language model, which distinguishes between competing interpretations of the same phonetic sequences by evaluating the relative probabilities of different word sequences. Our new work concentrates on just the first stage in this process, acoustic modeling.
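To make that division of labor concrete, here is a toy sketch in Python. Every name and probability table in it (frame_posteriors, lexicon, language_model) is invented purely for illustration; real systems implement each stage with far more sophisticated statistical models.

```python
# Toy illustration of the three-stage decomposition; all tables are made up.
frame_posteriors = [           # acoustic model output: per-frame phone probabilities
    {"HH": 0.7, "AH": 0.2},
    {"AH": 0.6, "EH": 0.3},
    {"L": 0.8, "OW": 0.2},
    {"OW": 0.9, "L": 0.1},
]
lexicon = {                    # pronunciation model: phone sequences -> words
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "EH", "L", "OW"): "hollow",
}
language_model = {"hello": 0.9, "hollow": 0.1}   # relative word priors

def score(phones, word):
    acoustic = 1.0
    for frame, phone in zip(frame_posteriors, phones):
        acoustic *= frame.get(phone, 1e-6)        # how well each frame matches the phone
    return acoustic * language_model[word]        # weigh by the word's prior probability

phones, word = max(lexicon.items(), key=lambda item: score(*item))
print(word)   # -> "hello"
```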

To build our model, we turned to a semi-supervised-learning technique called teacher-student training. Using our 7,000 hours of labeled data, we trained a powerful but impractically slow “teacher” network to convert frequency-level descriptions of audio data into sequences of phones. Then we used the teacher to automatically label unannotated data, which we used to train a leaner, more efficient “student” network.

In our experiments, we used a small set of annotated data (green) to train a powerful but inefficient "teacher" model, which in turn labeled a much larger set of unannotated data (red). We then used both datasets to train a leaner, more efficient "student" model.

Both the teacher and the student were five-layer long short-term memory (LSTM) networks. LSTMs are common in speech and language applications because they process data in sequence, and the output corresponding to any given input factors in the inputs and outputs that preceded it.

The teacher LSTM is more than three times the size of the student — 78 million parameters, versus 24 million — which makes it more than three times as slow. It’s also bidirectional, which means that it processes every input sequence both forward and backward. Bidirectional processing generally improves an LSTM’s accuracy, but it also requires that the input sequence be complete before it’s fed to the network. That’s impractical for a real-time, interactive system like Alexa, so the student network runs only in the forward direction.

The inputs to both networks are split into 30-millisecond chunks, or “frames”, which are small enough that any given frame could belong to multiple phones. Phones, in turn, can sound different depending on the phones that precede and follow them, so the acoustic model doesn’t just associate each frame with a range of possible phones; it associates it with a range of possible three-phone sequences, or triphones.
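As a small illustration of what a triphone is, the snippet below (hypothetical code, not Alexa's label-building pipeline) expands a phone sequence into phones-in-context:

```python
# Hypothetical illustration: expand a phone sequence into triphones,
# i.e., each phone paired with its left and right neighbors.
def to_triphones(phones, boundary="sil"):
    padded = [boundary] + list(phones) + [boundary]
    return [tuple(padded[i - 1:i + 2]) for i in range(1, len(padded) - 1)]

print(to_triphones(["HH", "AH", "L", "OW"]))
# [('sil', 'HH', 'AH'), ('HH', 'AH', 'L'), ('AH', 'L', 'OW'), ('L', 'OW', 'sil')]
```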

In the classification scheme we use, there are more than 4 million such triphones, but we group them into roughly 3,000 clusters. Still, for every frame, the output of the model is a 3,000-dimensional vector, representing the probabilities that the frame belongs to each of the clusters.
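A minimal sketch of the two architectures in PyTorch might look as follows. The five layers, the directionality, and the 3,000-way output come from the description above; the feature and hidden dimensions are illustrative guesses rather than the production values, and the real models differ in many details.

```python
import torch
import torch.nn as nn

NUM_CLUSTERS = 3000          # triphone clusters described above
FEATURE_DIM = 64             # per-frame acoustic features (illustrative)

class AcousticLSTM(nn.Module):
    """Five-layer LSTM acoustic model; bidirectional for the offline teacher,
    forward-only for the real-time student."""
    def __init__(self, hidden_size, bidirectional):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=FEATURE_DIM,
            hidden_size=hidden_size,
            num_layers=5,
            batch_first=True,
            bidirectional=bidirectional,
        )
        out_dim = hidden_size * (2 if bidirectional else 1)
        self.classifier = nn.Linear(out_dim, NUM_CLUSTERS)

    def forward(self, frames):                 # frames: (batch, time, FEATURE_DIM)
        hidden, _ = self.lstm(frames)
        return self.classifier(hidden)         # per-frame logits over the 3,000 clusters

teacher = AcousticLSTM(hidden_size=768, bidirectional=True)   # larger, runs offline
student = AcousticLSTM(hidden_size=512, bidirectional=False)  # smaller, streams in real time
```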

Because the teacher is so slow, we want to store its outputs for quick lookup while we’re training the more efficient student. Storing a 3,000-dimensional vector for every frame of audio in the training set is impractical, so we instead keep only the 20 highest probabilities. During training, the student’s goal is to match all 20 of those probabilities as accurately as it can.
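Continuing the hypothetical sketch above, generating and matching the truncated teacher targets could look roughly like this; the production pipeline stores the targets on disk and differs in many details, such as whether the 20 stored probabilities are renormalized.

```python
import torch

TOP_K = 20

@torch.no_grad()
def teacher_targets(teacher, frames):
    """Run the slow teacher once and keep only its top-20 probabilities per frame."""
    probs = torch.softmax(teacher(frames), dim=-1)    # (batch, time, 3000)
    top_p, top_idx = probs.topk(TOP_K, dim=-1)        # 20 values and their cluster indices
    return top_p, top_idx                             # small enough to store for reuse

def student_loss(student, frames, top_p, top_idx):
    """Cross-entropy between the stored teacher probabilities and the student's
    predictions at the same 20 cluster indices."""
    log_probs = torch.log_softmax(student(frames), dim=-1)
    student_at_top = log_probs.gather(-1, top_idx)    # (batch, time, 20)
    return -(top_p * student_at_top).sum(dim=-1).mean()
```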

The 7,000 hours of annotated data are more accurately labeled than the machine-labeled data, so while training the student, we interleave the two. Our intuition was that if the machine-labeled data began to steer the model in the wrong direction, the annotated data could provide a course correction.

As a corollary, we also increased the model’s learning rate when it was being trained on the annotated data. Essentially, that means that it could make more dramatic adjustments to its internal settings than it could when being trained on machine-labeled data.
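One simple way to realize this interleaving, continuing the hypothetical sketch above, is to alternate batches from the two sources and raise the optimizer's learning rate on the annotated ones. The data loaders, compute_loss, and the specific learning-rate values below are stand-ins, not the settings we actually used.

```python
import torch
from itertools import cycle

BASE_LR = 1e-4        # step size on machine-labeled batches (illustrative)
ANNOTATED_LR = 4e-4   # larger steps on the trusted, human-annotated batches

optimizer = torch.optim.Adam(student.parameters(), lr=BASE_LR)

# The small annotated set repeats (cycle) while we make a single pass over the
# much larger machine-labeled set, so the two sources stay interleaved throughout.
for machine_batch, annotated_batch in zip(machine_labeled_loader,
                                          cycle(annotated_loader)):
    for batch, lr in ((machine_batch, BASE_LR), (annotated_batch, ANNOTATED_LR)):
        for group in optimizer.param_groups:
            group["lr"] = lr                  # bump the learning rate on annotated data
        optimizer.zero_grad()
        compute_loss(student, batch).backward()
        optimizer.step()
```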

Our experiments bore out our intuitions. Interleaving the annotated data and machine-labeled data during training led to a 23% improvement in error rate reduction versus a training regimen that segregated them.

We also experimented with different techniques for parallelizing the training procedure. Optimizing the settings of a neural network is like exploring a landscape with peaks and valleys, except in millions of dimensions. The elevations of the landscape represent the network’s error rates on the training data, so the goal is to find the bottom of one of the deepest valleys.

We were using so much training data that we had to split it up among processors. But the topography of the error landscape is a function of the data, so each processor sees a different landscape.

Historically, the Alexa team has solved this problem through a method called gradient threshold compression (GTC). After working through a batch of data, each processor sends a compressed representation of the gradients it measured — the slopes of the inclines in the error landscape — to all the other processors. Each processor aggregates all the gradients and updates its copy of the neural model accordingly.
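The sketch below illustrates the general idea behind threshold-based gradient compression; it is not the exact GTC algorithm, but it shows the two ingredients that keep the messages small: sending only coordinates whose accumulated gradient clears a threshold, and carrying the leftover in a local residual.

```python
import numpy as np

THRESHOLD = 0.01   # illustrative; the real threshold is a tuned hyperparameter

class ThresholdCompressor:
    """Hypothetical sketch: only coordinates whose accumulated gradient magnitude
    exceeds the threshold are transmitted, quantized to +/- THRESHOLD; everything
    else stays behind in a local residual and is added to the next batch's gradient."""

    def __init__(self, num_params):
        self.residual = np.zeros(num_params)

    def compress(self, gradient):
        acc = self.residual + gradient
        send = np.abs(acc) >= THRESHOLD
        indices = np.nonzero(send)[0]                  # which coordinates to transmit
        signs = np.sign(acc[indices])                  # one bit of magnitude each
        self.residual = acc - send * np.sign(acc) * THRESHOLD
        return indices, signs                          # far smaller than the full vector

    @staticmethod
    def decompress(indices, signs, num_params):
        grad = np.zeros(num_params)
        grad[indices] = signs * THRESHOLD              # reconstruct the quantized update
        return grad
```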

We found, however, that with enough processors working in parallel, this approach required the exchange of so much data that transmission time started to eat up the time savings from parallelization. So we also experimented with a technique called blockwise model update filtering (BMUF). With this approach, each processor updates only its own, local copy of the neural model after working through each batch of data. Only rarely — every 50 batches or so — does a processor broadcast its local copy of the model to the other processors, saving a great deal of communication bandwidth.
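In skeleton form, the synchronization step amounts to a periodic model average. The toy sketch below omits refinements such as the block momentum that the full BMUF algorithm applies to smooth successive averages, and it simulates local updates with random noise purely for illustration.

```python
import numpy as np

SYNC_EVERY = 50   # local batches between synchronizations (illustrative)

def bmuf_sync(worker_params):
    """Average the workers' locally updated parameters and hand the average back
    to every worker."""
    average = np.mean(np.stack(worker_params), axis=0)
    return [average.copy() for _ in worker_params]

# Toy usage: four workers drift apart over 50 local batches, then re-sync.
workers = [np.zeros(10) for _ in range(4)]
for step in range(200):
    workers = [w - 0.01 * np.random.randn(10) for w in workers]   # stand-in local updates
    if (step + 1) % SYNC_EVERY == 0:
        workers = bmuf_sync(workers)
```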

Where GTC averages gradients, BMUF averages models. Averaging gradients is mathematically equivalent to computing a single update over all the workers’ data, whereas averaging models only approximates that update. We found that, on the same volume of training data, BMUF yielded slightly less accurate models than GTC. But it enabled distribution of the computation to four times as many processors, which means that in a given time frame, it could learn from four times as much data. Or, alternatively, it could deliver comparable performance improvements in one-fourth the time.

We believe that these techniques — and a few others we describe in greater detail in the paper — will generalize to other applications of large-scale semi-supervised learning, a possibility that we have begun to explore in the Alexa AI group.

Acknowledgments: Nikko Strom
