Alexa’s speech recognition research at ICASSP 2022

Multimodal training, signal-to-interpretation, and BERT rescoring are just a few topics covered by Amazon’s 21 speech-related papers.

This week, the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) got under way in virtual form, to be followed by an in-person meeting two weeks later (May 22-27) in Singapore. ICASSP is the flagship conference of the IEEE Signal Processing Society and, as such, one of the premier venues for publishing the latest advances in automatic speech recognition (ASR) and other speech-processing and speech-related fields, with strong participation from both industry and academia.

This year, the Alexa AI ASR organization is represented by 21 papers, more than in any prior year, reflecting the growth of speech-related science in Alexa AI. Here we highlight a few of these papers, to give an idea of their breadth.

Multimodal pretraining for end-to-end ASR

Deep-learning methods have become the method of choice for speech-based recognition and classification tasks. Increasingly, models are pretrained on large unlabeled datasets through self-supervised representation learning and then “fine-tuned” on task-labeled data.

In their paper “Multi-modal Pretraining for Automated Speech Recognition”, David Chan and colleagues give this approach a new twist by pretraining speech representations on audiovisual data. As the self-supervision task for both modalities, they adapt the masked language model, in which words of training sentences are randomly masked out and the model learns to predict them. In their case, however, the masks are applied to features extracted from the video and audio streams.

In "Multi-modal pre-training for automated speech recognition", Amazon researchers adapt the masked language model, which learns to predict masked-out words of training sentences, to features extracted from video and audio streams.

Once pretrained, the audio-only portion of the learned representation is fused with a more standard front-end representation to feed into an end-to-end speech recognition system. The researchers show that this approach yields more accurate ASR results than pretraining with only audio-based self-supervision, suggesting that the correlations between acoustic and visual signals are helpful in extracting higher-level structures relevant to the encoding of speech.
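The masked-prediction objective can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: frame shapes, the masking rate, and the helper names are all illustrative, and a real system would mask contiguous spans of learned feature vectors rather than individual toy frames.

```python
import random

def mask_features(frames, mask_prob=0.15, mask_value=0.0):
    """Corrupt a sequence of feature frames for masked-prediction pretraining.

    Mirrors the masked-language-model objective, but applied to continuous
    audio/video features instead of word tokens: the masked frames become
    the model's reconstruction targets.
    """
    corrupted, targets = [], {}
    for i, frame in enumerate(frames):
        if random.random() < mask_prob:
            targets[i] = frame                        # model must predict this
            corrupted.append([mask_value] * len(frame))  # replace with mask token
        else:
            corrupted.append(frame)
    return corrupted, targets

# Example: 10 frames of 4-dimensional features
random.seed(0)
frames = [[float(i)] * 4 for i in range(10)]
corrupted, targets = mask_features(frames, mask_prob=0.3)
```

During pretraining, the loss would compare the model's predictions at the masked positions against the held-out frames in `targets`.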

Signal-to-interpretation with multimodal embeddings

The advantages of multimodality are not limited to unsupervised-learning settings. In “Tie your embeddings down: Cross-modal latent spaces for end-to-end spoken language understanding”, Bhuvan Agrawal and coauthors study signal-to-interpretation (S2I) recognizers that map a sequential acoustic input to an embedding, from which the intent of an utterance is directly inferred.

In "Tie your embeddings down: Cross-modal latent spaces for end-to-end spoken language understanding", Amazon researchers train encoders to generate acoustic and text embeddings in the same representational space, so that the origin of the embeddings becomes indistinguishable.

This bypasses the need for explicit speech transcription but still uses supervision for utterance intents. Due to their compactness, S2I models are attractive for on-device deployment, which has multiple benefits. For example, Alexa AI has used on-device speech processing to make Alexa faster and lower-bandwidth.

Agrawal and colleagues show that S2I recognizers give better results when their acoustic embeddings are constrained to be close to embeddings of the corresponding textual input produced by a pretrained language model (BERT). As in the earlier paper, this cross-modal signal is used during learning only and not required for inference (i.e., at runtime). It is a clever way to sneak linguistic structure back into the S2I system while also infusing it with knowledge gleaned from the vastly larger language model training data.
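The training-time constraint can be sketched as an auxiliary distance penalty added to the intent-classification loss. The function below is an illustrative sketch, not the paper's formulation: the squared-distance penalty and the weight `alpha` are assumptions for the example.

```python
def cross_modal_loss(acoustic_emb, text_emb, intent_loss, alpha=0.1):
    """Total training loss: the intent-classification loss plus a penalty
    pulling the acoustic embedding toward the frozen BERT embedding of the
    same utterance's transcript. The text branch is used only during
    training; at inference, only the acoustic embedding is computed.
    """
    # Mean squared distance between the two embeddings
    mse = sum((a - t) ** 2 for a, t in zip(acoustic_emb, text_emb)) / len(acoustic_emb)
    return intent_loss + alpha * mse

# When the embeddings already agree, only the intent loss remains
loss_aligned = cross_modal_loss([1.0, 2.0], [1.0, 2.0], intent_loss=0.5)
# When they disagree, the penalty pushes the encoder to close the gap
loss_apart = cross_modal_loss([0.0, 0.0], [2.0, 0.0], intent_loss=0.5, alpha=0.5)
```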

The TinyS2I architecture. From "TINYS2I: A small-footprint utterance classification model with contextual support for on-device SLU".

The idea of matching embeddings derived from audio to those for corresponding text strings (i.e., transcripts) also has other applications. In their paper “TinyS2I: A small-footprint utterance classification model with contextual support for on-device SLU”, Anastasios Alexandridis et al. show that extremely compact, low-latency speech-understanding models can be obtained for the utterances most frequently used to control certain applications, such as media playback.

The most frequent control commands (“pause”, “volume up”, and the like) can be classified directly from an acoustic embedding. For commands involving an item from a contextual menu (“play [title]”), the acoustic embedding is matched to the media title’s textual embedding. In this paper, unlike the previous one, the textual embeddings are trained jointly with the acoustic ones. But the same triplet loss function can be used to align the cross-modal embeddings in a shared space.
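The triplet loss mentioned above has a standard form, sketched here with squared Euclidean distances (the margin value and distance choice are illustrative assumptions): the acoustic embedding (anchor) is pulled toward the matching title's text embedding (positive) and pushed away from a non-matching one (negative).

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss aligning acoustic and text embeddings in a shared space.

    The loss is zero once the positive pair is closer than the negative
    pair by at least `margin`; otherwise it penalizes the violation.
    """
    def sq_dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Negative already far enough away: no penalty
satisfied = triplet_loss([0.0, 0.0], [0.0, 0.0], [2.0, 0.0], margin=1.0)
# Positive and negative equidistant: full margin penalty
violated = triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0], margin=1.0)
```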

ASR rescoring with BERT

Deep encoders of text trained using the masked-language-model (MLM) paradigm, such as BERT, have been widely used as the basis for all sorts of natural-language tasks. As mentioned earlier, they can incorporate vast amounts of language data through self-supervised pretraining, followed by task-specific supervised fine-tuning.

So far, however, the practical impact of MLMs on ASR proper has been limited, in part because of unsatisfactory tradeoffs between computational overhead (latency) and achievable accuracy gains. This is now changing with the work of Liyan Xu et al., as described in “RescoreBERT: Discriminative speech recognition rescoring with BERT”.

The researchers show how BERT-generated sentence encodings can be incorporated into a model that rescores the text strings output by an ASR model. Because BERT is trained on large corpora of (text-only) public data, it understands the relative probabilities of different ASR hypotheses better than the ASR model can.

The researchers achieved their best results with a combined loss function that is based on both sentence pseudo-likelihood — a more computationally tractable estimate of sentence likelihood — and word error prediction. The resulting rescoring model is so effective compared to standard LSTM (long short-term memory) language models, while also exhibiting lower latency, that the RescoreBERT method has gone from internship project to Alexa production in less than a year.
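Sentence pseudo-likelihood can be sketched as follows: each token is masked in turn, and the model's log-probabilities of the held-out tokens are summed. The sketch below assumes a hypothetical `masked_token_logprob` callable standing in for a BERT forward pass, and an illustrative linear interpolation with first-pass scores; it is not the RescoreBERT implementation, which additionally distills this score and trains with a discriminative word-error objective.

```python
def pseudo_log_likelihood(tokens, masked_token_logprob):
    """Sum the model's log-probability of each token given the rest of the
    sentence, with that token masked. `masked_token_logprob(context,
    position, token)` is a hypothetical interface, not a real API.
    """
    total = 0.0
    for i, token in enumerate(tokens):
        context = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        total += masked_token_logprob(context, i, token)
    return total

def rescore(hypotheses, first_pass_scores, masked_token_logprob, weight=0.5):
    """Re-rank ASR hypotheses by interpolating first-pass scores with the
    BERT pseudo-log-likelihood; `weight` is an illustrative factor."""
    scored = [(s + weight * pseudo_log_likelihood(h.split(), masked_token_logprob), h)
              for h, s in zip(hypotheses, first_pass_scores)]
    return [h for _, h in sorted(scored, reverse=True)]

# Toy scorer that prefers the word "the" at any masked position
def toy_logprob(context, position, token):
    return 0.0 if token == "the" else -1.0

ranked = rescore(["a cat", "the cat"], [0.0, 0.0], toy_logprob, weight=1.0)
```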

Ontological biasing for acoustic-event detection

We round out this short selection of papers with one from an ASR-adjacent field. In “Improved representation learning for acoustic event classification using tree-structured ontology”, Arman Zharmagambetov and coauthors look at an alternative to self-supervised training for the task of acoustic-event detection (AED). (AED is the technology behind Alexa’s ability to detect breaking glass, smoke alarms, and other noteworthy events around the house.)

They show that AED classifier training can be enhanced by forcing the resulting representations to identify not only the target event label (such as “dog barking”) but also supercategories (such as “domestic animal” and “animal sound”) drawn from an ontology, a hierarchical representation of relationships between concepts. The method can be further enhanced by forcing the classification to stay the same under distortions of the inputs. The researchers found that their method is more effective than purely self-supervised pretraining and comes close to fully supervised training with only a fraction of the labeled data.
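The supervision at multiple ontology levels can be sketched as follows. The ontology fragment, label names, and per-level scoring interface here are illustrative assumptions, not the paper's actual taxonomy or loss.

```python
# Illustrative ontology fragment: supercategories above each leaf event
ONTOLOGY = {
    "animal sound": {"domestic animal": ["dog barking", "cat meowing"]},
    "alarm": {"smoke alarm": ["smoke alarm beep"]},
}

def ancestor_path(leaf, tree=ONTOLOGY, path=()):
    """Return the chain of supercategories above a leaf event label."""
    for node, child in tree.items():
        if isinstance(child, dict):
            found = ancestor_path(leaf, child, path + (node,))
            if found:
                return found
        elif leaf in child:
            return path + (node,)
    return None

def ontology_loss(per_level_logprob, leaf):
    """Sum of negative log-probabilities at every ontology level, so the
    representation is supervised at the leaf and at each supercategory.
    `per_level_logprob(level, label)` is a hypothetical model interface."""
    labels = (ancestor_path(leaf) or ()) + (leaf,)
    return -sum(per_level_logprob(level, lab) for level, lab in enumerate(labels))
```

Classifying "dog barking" thus also requires the model to score "animal sound" and "domestic animal" correctly, injecting the tree structure into the learned representation.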

In "Improved representation learning for acoustic event classification using tree-structured ontology", Amazon researchers present a two-module joint model consisting of a representation neural network and a decision tree based on a predefined tree-structured ontology.

Conclusion and outlook

As we have seen, Alexa relies on a range of audio-based technologies built on deep-learning architectures. The need to train these models robustly, fairly, and with limited supervision, together with computational constraints at runtime, continues to drive research in Alexa Science. We have highlighted some of the results of that work as they are presented to the wider science community, and we are excited to see the field as a whole devise creative solutions and push toward ever more capable applications of speech-based AI.
