Amazon’s papers at SLT

Quantization with self-adjustable centroids, contrastive predictive coding for transfer learning, teacher ensembles for differential privacy, and more — Amazon’s speech research features a battery of cutting-edge machine learning techniques.

A quick guide to Amazon’s innovative work at the IEEE Spoken Language Technology Workshop (SLT), which begins next week:

Accelerator-aware training for transducer-based speech recognition
Suhaila Shakiah, Rupak Vignesh Swaminathan, Hieu Duy Nguyen, Raviteja Chinta, Tariq Afzal, Nathan Susanj, Athanasios Mouchtaris, Grant Strimel, Ariya Rastrow

Machine learning models trained at full precision can suffer performance falloffs when deployed on neural-network accelerator (NNA) chips, which leverage highly parallelized fixed-point arithmetic to improve efficiency. To avoid this problem, Amazon researchers propose a method for emulating NNA operations at training time.
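
As a rough illustration of the general idea, the sketch below emulates fixed-point rounding in the forward pass of a linear layer while keeping full-precision gradients via a straight-through estimator; the paper's actual NNA emulation is more faithful to the target hardware's arithmetic.

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Round a tensor to a fixed-point grid in the forward pass while keeping
    full-precision gradients (straight-through estimator)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    x_q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (x_q - x).detach()  # forward: x_q; backward: identity w.r.t. x

class AcceleratorEmulatedLinear(torch.nn.Linear):
    """Linear layer whose weights and activations are rounded during training
    to mimic the accelerator's fixed-point arithmetic at inference time."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(
            fake_quantize(x), fake_quantize(self.weight), self.bias)
```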

An analysis of the effects of decoding algorithms on fairness in open-ended language generation
Jwala Dhamala, Varun Kumar, Rahul Gupta, Kai-Wei Chang, Aram Galstyan

The researchers systematically study the effects of different decoding algorithms on the fairness of large language models, showing that fairness varies significantly with changes in decoding algorithms’ hyperparameters. They also provide recommendations for reporting decoding details during fairness evaluations and optimizing decoding algorithms.
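
For context, the toy sketch below implements two of the decoding hyperparameters such a study varies, temperature scaling and nucleus (top-p) filtering, for a single sampling step; the fairness metrics and models evaluated in the paper are beyond this example.

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_p=1.0, rng=None):
    """Draw one token id after temperature scaling and nucleus (top-p)
    filtering -- the kind of decoding hyperparameters whose settings can
    shift fairness outcomes."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                    # most probable first
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:max(1, cutoff)]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))

# token = sample_token(np.array([2.0, 1.0, 0.1]), temperature=0.7, top_p=0.9)
```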

An experimental study on private aggregation of teacher ensemble learning for end-to-end speech recognition
Chao-Han Huck Yang, I-Fan Chen, Andreas Stolcke, Sabato Marco Siniscalchi, Chin-Hui Lee

For machine learning models, meeting differential-privacy (DP) constraints usually means adding noise to data, which can hurt performance. Amazon researchers apply private aggregation of teacher ensembles (PATE), which trains a single student model on the noisy aggregated outputs of an ensemble of teacher models, to automatic speech recognition, reducing word error rate by 26% to 28% while meeting DP constraints.
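
The snippet below sketches the generic PATE aggregation step, noisy voting over an ensemble of teachers, that underlies the approach; the paper's ASR-specific adaptation (e.g., how sequence outputs are handled) is not shown.

```python
import numpy as np

def pate_noisy_label(teacher_predictions, num_classes, noise_scale=1.0, rng=None):
    """Noisy-argmax aggregation over an ensemble of teachers, each trained on a
    disjoint partition of the private data; Laplace noise on the vote counts is
    what yields the differential-privacy guarantee."""
    rng = rng or np.random.default_rng()
    votes = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
    votes += rng.laplace(scale=noise_scale, size=num_classes)
    return int(np.argmax(votes))

# The student is then trained on public or unlabeled data with labels produced
# by pate_noisy_label, without ever accessing the private training data.
```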

Exploration of language-specific self-attention parameters for multilingual end-to-end speech recognition
Brady Houston, Katrin Kirchhoff

Multilingual end-to-end automatic-speech-recognition (ASR) models perform better when they're trained using both language-specific and language-universal model parameters. Amazon researchers show that using language-specific parameters in the attention mechanisms of Conformer-based encoders can improve the performance of ASR models across six languages by up to 12% relative to multilingual baselines and 36% relative to monolingual baselines.
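
One simple way to realize language-specific attention parameters, not necessarily the paper's exact Conformer parameterization, is to keep a separate attention module per language while sharing the surrounding layers:

```python
import torch
import torch.nn as nn

class LanguageSpecificSelfAttention(nn.Module):
    """Self-attention block with separate parameters per language; the
    surrounding encoder layers remain shared across languages."""
    def __init__(self, dim, num_heads, languages):
        super().__init__()
        self.attn = nn.ModuleDict({
            lang: nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for lang in languages
        })
        self.norm = nn.LayerNorm(dim)  # shared

    def forward(self, x, lang):
        attn_out, _ = self.attn[lang](x, x, x)
        return self.norm(x + attn_out)

# layer = LanguageSpecificSelfAttention(256, 4, ["en", "de", "hi"])
# y = layer(torch.randn(8, 100, 256), lang="de")
```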

Guided contrastive self-supervised pre-training for automatic speech recognition
Aparna Khare, Minhua Wu, Saurabhchand Bhati, Jasha Droppo, Roland Maas

Contrastive predictive coding (CPC) is a representation-learning method that maximizes the mutual information between a model’s intermediate representations and its output. Amazon researchers present a modification of CPC that maximizes the mutual information between representations from a prior-knowledge model and the output of the model being pretrained, reducing the word error rate relative to pretraining with standard CPC alone.

The conventional contrastive-predictive-coding (CPC) representation-learning approach (left) and Amazon researchers' proposed guided CPC method (right, in red), which maximizes the mutual information between representations from a prior-knowledge model and the output of a model being pretrained. From "Guided contrastive self-supervised pre-training for automatic speech recognition".
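
A minimal sketch of a guided contrastive objective in this spirit, using an InfoNCE-style loss with in-batch negatives; the paper's exact loss and negative-sampling scheme may differ:

```python
import torch
import torch.nn.functional as F

def guided_contrastive_loss(student_repr, prior_repr, temperature=0.1):
    """InfoNCE-style loss: each frame of the model being pretrained
    (student_repr) should be most similar to the matching frame from the
    prior-knowledge model (prior_repr); other frames in the batch serve as
    negatives.  Both inputs have shape (batch * time, dim)."""
    student = F.normalize(student_repr, dim=-1)
    prior = F.normalize(prior_repr, dim=-1)
    logits = student @ prior.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```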

Implicit acoustic echo cancellation for keyword spotting and device-directed speech detection
Samuele Cornell, Thomas Balestri, Thibaud Sénéchal

In realistic human-machine interactions, customer speech can overlap with device playback. Amazon researchers propose a way to improve keyword spotting and device-directed-speech detection in these circumstances. They teach the model to ignore playback audio via an implicit acoustic echo cancellation mechanism. They show that, by conditioning on the reference signal as well as the signal captured at the microphone, they can improve recall by as much as 56%.
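
A minimal sketch of the conditioning idea, stacking microphone and playback-reference features at the input of a keyword-spotting model so the network can learn to cancel the echo implicitly; the architecture here is illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ImplicitAECKeywordSpotter(nn.Module):
    """Keyword spotter conditioned on both microphone features and the
    device-playback reference features, so the network can learn to ignore
    its own playback without an explicit echo-cancellation front end."""
    def __init__(self, feat_dim=64, hidden=128, num_classes=2):
        super().__init__()
        self.encoder = nn.LSTM(2 * feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, mic_feats, ref_feats):
        x = torch.cat([mic_feats, ref_feats], dim=-1)  # condition on reference
        out, _ = self.encoder(x)
        return self.classifier(out[:, -1])             # decision from last frame
```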

Mixture of domain experts for language understanding: An analysis of modularity, task performance, and memory tradeoffs
Benjamin Kleiner, Jack FitzGerald, Haidar Khan, Gokhan Tur

Amazon researchers show that natural-language-understanding models that incorporate mixture-of-experts networks, in which each network layer corresponds to a different domain, are easier to update after deployment, with less effect on performance, than other types of models.
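
As an illustrative sketch (not the paper's exact architecture), a layer of per-domain experts keeps each domain's parameters isolated, so one domain can be retrained after deployment without touching the others:

```python
import torch.nn as nn

class DomainExpertLayer(nn.Module):
    """Feed-forward layer composed of per-domain experts.  Because each
    domain's parameters are isolated, a single domain can be retrained or
    swapped after deployment without disturbing the others."""
    def __init__(self, dim, hidden, domains):
        super().__init__()
        self.experts = nn.ModuleDict({
            d: nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for d in domains
        })

    def forward(self, x, domain):
        return x + self.experts[domain](x)  # residual expert output

# Post-deployment update restricted to one (hypothetical) domain:
# for p in layer.experts["Music"].parameters():
#     p.requires_grad = True
```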

N-best hypotheses reranking for text-to-SQL systems
Lu Zeng, Sree Hari Krishnan Parthasarathi, Dilek Hakkani-Tür

Text-to-SQL models map natural-language requests to structured database queries, and today’s state-of-the-art systems rely on fine-tuning pretrained language models. Amazon researchers improve the coherence of such systems with a model that generates a query plan predicting whether a SQL query contains particular clauses; they improve the correctness of such systems with an algorithm that generates schemata that can be used to match prefixes and abbreviations for slot values (such as “left” and “L”).
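
The hypothetical snippet below sketches the query-plan idea, reranking N-best SQL hypotheses by how well they agree with a predicted plan of which clauses should appear; the scoring scheme and names here are illustrative, not the paper's.

```python
def rerank_by_query_plan(nbest_sql, base_scores, plan):
    """Rerank N-best SQL hypotheses by agreement with a predicted query plan.
    `plan` maps clause keywords (e.g. "GROUP BY", "ORDER BY") to whether the
    plan expects the query to contain them.  Hypothetical scoring scheme."""
    def agreement(sql):
        return sum((clause in sql.upper()) == expected
                   for clause, expected in plan.items())

    rescored = sorted(zip(base_scores, nbest_sql),
                      key=lambda pair: pair[0] + agreement(pair[1]),
                      reverse=True)
    return [sql for _, sql in rescored]

# plan = {"GROUP BY": True, "ORDER BY": False}
# best = rerank_by_query_plan(candidates, scores, plan)[0]
```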

On granularity of prosodic representations in expressive text-to-speech
Mikolaj Babianski, Kamil Pokora, Raahil Shah, Rafal Sienkiewicz, Daniel Korzekwa, Viacheslav Klimkov

In expressive-speech synthesis, the same input text can be mapped to different acoustic realizations. Prosodic embeddings at the utterance, word, or phoneme level can be used at training time to simplify that mapping. Amazon researchers study these approaches, showing that utterance-level embeddings have insufficient capacity and phoneme-level embeddings tend to introduce instabilities, while word-level representations strike a balance between capacity and predictability. The researchers use that finding to close the gap in naturalness between synthetic speech and recordings by 90%.
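
As a small illustration of what a word-level prosodic representation involves, the sketch below pools phoneme-level prosodic features into one embedding per word; the paper's reference encoders and prosody predictors are not shown.

```python
import torch

def word_level_prosody(phoneme_features, word_ids, num_words):
    """Average phoneme-level prosodic features into one embedding per word,
    the granularity the study finds to balance capacity and predictability.
    `word_ids` assigns each phoneme to a word index."""
    dim = phoneme_features.size(-1)
    sums = torch.zeros(num_words, dim).index_add_(0, word_ids, phoneme_features)
    counts = torch.bincount(word_ids, minlength=num_words).clamp(min=1)
    return sums / counts.unsqueeze(-1)

# feats = torch.randn(7, 8)                       # 7 phonemes, 8-dim prosody
# word_embs = word_level_prosody(feats, torch.tensor([0, 0, 1, 1, 1, 2, 2]), 3)
```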

Personalization of CTC speech recognition models
Saket Dingliwal, Monica Sunkara, Srikanth Ronanki, Jeff Farris, Katrin Kirchhoff, Sravan Bodapati

Connectionist temporal classification (CTC) loss functions are an attractive option for automatic speech recognition because they yield simple models with low inference latency. But CTC models are hard to personalize because of their conditional-independence assumption. Amazon researchers propose a battery of techniques to bias a CTC model’s encoder and its beam search decoder, yielding a 60% improvement in F1 score on domain-specific rare words over a strong CTC baseline.
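
A simplified sketch of one kind of decoder biasing, a prefix-match bonus applied to beam-search hypotheses that extend a personalized word; the paper combines several encoder- and decoder-side techniques that go beyond this.

```python
class PersonalizedWordBiaser:
    """Score bonus for beam-search hypotheses whose last token extends or
    completes a personalized rare word (a simplified contextual-biasing
    scheme; the paper also biases the encoder)."""
    def __init__(self, words, bonus=2.0):
        self.bonus = bonus
        self.words = set(words)
        self.prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}

    def score(self, hypothesis_text):
        tokens = hypothesis_text.split()
        last = tokens[-1] if tokens else ""
        if last in self.words:
            return self.bonus               # completed a personalized word
        return 0.5 * self.bonus if last in self.prefixes else 0.0

# biaser = PersonalizedWordBiaser(["nguyen", "xylophone"])
# total_score = acoustic_score + biaser.score("play the xylo")
```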

Remap, warp and attend: Non-parallel many-to-many accent conversion with normalizing flows
Abdelhamid Ezzerg, Tom Merritt, Kayoko Yanagisawa, Piotr Bilinski, Magdalena Proszewska, Kamil Pokora, Renard Korzeniowski, Roberto Barra-Chicote, Daniel Korzekwa

Regional accents affect not only how words are pronounced but prosodic aspects of speech such as speaking rate and intonation. Amazon researchers investigate an approach to accent conversion that uses normalizing flows. The approach has three steps: remapping the phonetic conditioning, to better match the target accent; warping the duration of the converted speech, to better suit the target phonemes; and applying an attention mechanism to implicitly align source and target speech sequences.

Residual adapters for targeted updates in RNN-transducer based speech recognition system
Sungjun Han, Deepak Baby, Valentin Mendelev

While it is possible to incrementally fine-tune an RNN-transducer (RNN-T) automatic-speech-recognition model to recognize multiple sets of new words, this creates a dependency between the updates, which is not ideal when we want each update to be applied independently. Amazon researchers propose training residual adapters on the RNN-T model and combining them on the fly through adapter fusion, enabling a recall on new words of more than 90%, with less than 1% relative word error rate degradation.

An RNN-transducer model with n independently trained adapters combined through different adapter-fusion methods. From "Residual adapters for targeted updates in RNN-transducer based speech recognition system".
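
An illustrative sketch of bottleneck adapters and one simple fusion scheme (averaging of residual corrections); the paper evaluates several fusion methods, which may differ from this one.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Bottleneck adapter trained for one targeted update while the underlying
    RNN-T parameters stay frozen."""
    def __init__(self, dim, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdapterFusion(nn.Module):
    """Combine independently trained adapters on the fly; here by averaging
    their residual corrections (one of several possible fusion schemes)."""
    def __init__(self, adapters):
        super().__init__()
        self.adapters = nn.ModuleList(adapters)

    def forward(self, x):
        corrections = torch.stack([a(x) - x for a in self.adapters])
        return x + corrections.mean(dim=0)
```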

Sub-8-bit quantization for on-device speech recognition: a regularization-free approach
Kai Zhen, Martin Radfar, Hieu Nguyen, Grant Strimel, Nathan Susanj, Athanasios Mouchtaris

For on-device automatic speech recognition (ASR), quantization-aware training (QAT) can help manage the trade-off between performance and efficiency. A major drawback of existing QAT methods is that the quantization centroids must be predetermined and fixed. Amazon researchers introduce a compression mechanism with self-adjustable centroids, yielding a simpler yet more versatile quantization scheme that reduces the memory footprint by 30.73% and user-perceived latency by 31.75% compared with eight-bit QAT.
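
As a rough sketch of quantization with learnable, self-adjusting centroids (not the paper's exact scheme), each weight is snapped to its nearest centroid in the forward pass while gradients flow to both the weights and the centroids:

```python
import torch
import torch.nn as nn

class SelfAdjustingQuantizer(nn.Module):
    """Quantizer whose centroids are learnable parameters rather than a fixed,
    predetermined grid."""
    def __init__(self, num_centroids=16, init_range=0.1):
        super().__init__()
        self.centroids = nn.Parameter(
            torch.linspace(-init_range, init_range, num_centroids))

    def forward(self, w):
        # Hard-assign each weight to its nearest centroid.
        distances = (w.unsqueeze(-1) - self.centroids) ** 2
        quantized = self.centroids[distances.argmin(dim=-1)]
        # Forward pass uses the quantized values; gradients reach the original
        # weights via the straight-through term and the centroids via indexing.
        return quantized + (w - w.detach())
```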
