Alexa speech science developments at Interspeech 2022

Research from Alexa Speech covers a range of topics related to end-to-end neural speech recognition and fairness.

Interspeech, the world’s largest and most comprehensive conference on the science and technology of spoken-language processing, took place this week in Incheon, Korea, with Amazon as a platinum sponsor. Amazon Science asked three of Alexa AI’s leading scientists — in the fields of speech recognition, spoken-language understanding, and text-to-speech — to highlight some of Amazon’s contributions to the conference.


In this installment, senior principal scientist Andreas Stolcke selects papers from Alexa AI’s speech science organization, focusing on two overarching themes in recent research on speech-enabled AI: end-to-end neural speech recognition and fairness.

End-to-end neural speech recognition

Traditionally, speech recognition systems have included components specialized for different aspects of linguistic knowledge: acoustic models to capture the correspondence between speech sounds and acoustic waveforms (phonetics), pronunciation models to map those sounds to words, and language models (LMs) to capture higher-order properties such as syntax, semantics, and dialogue context.

All these models are trained on separate data and combined using graph and search algorithms to infer the most probable sequence of words corresponding to the acoustic input. The latest versions of these systems employ neural networks for individual components, typically the acoustic and language models, while still relying on non-neural methods to integrate them; they are therefore known as “hybrid” automatic-speech-recognition (ASR) systems.

While the hybrid ASR approach is structured and modular, it also makes it hard to model the ways in which acoustic, phonetic, and word-level representations interact and to optimize the recognition system end to end. For these reasons, much recent research in ASR has focused on so-called end-to-end or all-neural recognition systems, which infer a sequence of words directly from acoustic inputs.


End-to-end ASR systems use deep multilayered neural architectures that can be optimized end to end for recognition accuracy. While they do require large amounts of data and computation for training, once trained, they offer a simplified computational architecture for inference, as well as superior performance.

Alexa’s ASR employs end-to-end models as its core algorithm, both in the cloud and on-device. Across the industry and in academic research, end-to-end architectures are still being improved to achieve better accuracy, to reduce computation and latency, and to mitigate the lack of modularity that makes it challenging to inject external (e.g., domain-specific) knowledge at run time.

Alexa AI papers at Interspeech address several open problems in end-to-end ASR, and we summarize a few of those papers here.

In “ConvRNN-T: Convolutional augmented recurrent neural network transducers for streaming speech recognition”, Martin Radfar and coauthors propose a new variant of the popular recurrent-neural-network-transducer (RNN-T) end-to-end neural architecture. One of their goals is to preserve the property of causal processing, meaning that the model output depends only on past and current (but not future) inputs, which enables streaming ASR. At the same time, they want to improve the model’s ability to capture long-term contextual information.

A high-level block diagram of ConvRNN-T.

To achieve both goals, they augment the vanilla RNN-T with two distinct convolutional (CNN) front ends: a standard one for encoding correlations localized in time and a novel “global CNN” encoder that is designed to capture long-term correlations by summarizing activations over the entire utterance up to the current time step (while processing utterances incrementally through time).
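The contrast between the two front ends can be illustrated with a minimal numpy sketch. Everything here is illustrative: the actual model uses stacks of learned, multichannel CNN layers, and the global encoder is itself convolutional rather than a simple running mean. What the sketch preserves is the causality of both paths.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1-D convolution: the output at time t uses only x[:t+1]."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # left-pad: no future frames
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

def global_summary(x):
    """Running mean over all frames up to t -- a crude stand-in for the
    global CNN encoder, which summarizes the utterance so far."""
    return np.cumsum(x) / np.arange(1, len(x) + 1)

x = np.array([1.0, 2.0, 3.0, 4.0])              # toy 1-D acoustic feature track
local = causal_conv1d(x, np.array([0.5, 0.5]))  # short-range context
glob = global_summary(x)                        # long-range context
features = np.stack([local, glob], axis=1)      # both feed the RNN-T encoder
```

At each time step, `local` depends only on a short causal window, while `glob` summarizes everything up to the current frame, so combining them preserves the streaming property while adding long-term context.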

The authors show that the resulting ConvRNN-T gives superior accuracy compared to other proposed neural streaming ASR architectures, such as the basic RNN-T, Conformer, and ContextNet.

Another concern with end-to-end ASR models is computational efficiency, especially since the unified neural architecture makes these models very attractive for on-device deployment, where compute cycles and (for mobile devices) power are at a premium.

In their paper “Compute cost amortized Transformer for streaming ASR”, Yi Xie and colleagues exploit the intuitive observation that the amount of computation a model performs should vary as a function of the difficulty of the task; for instance, input in which noise or an accent causes ambiguity may require more computation than a clean input with a mainstream accent. (We may think of this as the ASR model “thinking harder” in places where the words are more difficult to discern.)


The researchers achieve this with a very elegant method that leverages the integrated neural structure of the model. Their starting point is a Transformer-based ASR system, consisting of multiple stacked layers of multiheaded self-attention (MHA) and feed-forward neural blocks. In addition, they train “arbitrator” networks that look at the acoustic input (and, optionally, also at intermediate block outputs) to toggle individual components on or off.

Because these component blocks have “skip connections” that combine their outputs with the outputs of earlier layers, they are effectively optional for the overall computation to proceed. A block that is toggled off for a given input frame saves all the computation normally carried out by that block, producing a zero vector output. The following diagram shows the structure of both the elementary Transformer building block and the arbitrator that controls it:

Illustration of the arbitrator and Transformer backbone of each block. The lightweight arbitrator toggles whether to evaluate subcomponents during the forward pass.

The arbitrator networks themselves are small enough that they do not contribute significant additional computation. What makes this scheme workable and effective, however, is that both the Transformer assemblies and the arbitrators that control them can be trained jointly, with dual goals: to perform accurate ASR and to minimize the overall amount of computation. The latter is achieved by adding a term to the training objective function that rewards reducing computation. Dialing a hyperparameter up or down selects the desired balance between accuracy and computation.
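A toy numpy sketch of the scheme (all names are hypothetical; the real blocks are full self-attention/feed-forward assemblies, and the hard on/off decision is trained through a differentiable relaxation rather than a threshold):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_block(x, block_fn, arbitrator_w, threshold=0.5):
    """One block with an arbitrator gate and a skip connection. When the
    gate is off, the block emits a zero vector and its compute is skipped;
    the skip connection passes x through unchanged."""
    gate = sigmoid(x @ arbitrator_w)   # lightweight arbitrator decision
    if gate < threshold:
        return x, 0.0                  # toggled off: no block computation
    return x + block_fn(x), gate       # skip connection + block output

def training_loss(task_loss, gates, lam=0.1):
    """Joint objective: accuracy plus a penalty on expected compute.
    Dialing lam up or down trades accuracy against computation."""
    return task_loss + lam * np.mean(gates)
```

The penalty term `lam * mean(gates)` is the sketch's version of the compute-rewarding term in the training objective: the more blocks stay on, the larger the loss, so the model learns to spend computation only where it helps accuracy.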


The authors show that their method can achieve a 60% reduction in computation with only a minor (3%) increase in ASR error. Their cost-amortized Transformer proves much more effective than a benchmark method that constrains the model to attend only to sliding windows over the input, which yields only 13% savings while increasing error nearly three times as much.

Finally, in this short review of end-to-end neural ASR advances, we look at ways to recognize speech from more than one speaker, while keeping track of who said what (also known as speaker-attributed ASR).

This has traditionally been done with modular systems that perform ASR and, separately, perform speaker diarization, i.e., labeling stretches of audio according to who is speaking. However, here, too, neural models have recently brought advances and simplification, by integrating these two tasks in a single end-to-end neural model.

In their paper “Separator-transducer-segmenter: Streaming recognition and segmentation of multi-party speech”, Ilya Sklyar and colleagues not only integrate ASR and segmentation-by-speaker but do so while processing inputs incrementally. Streaming multispeaker ASR with low latency is a key technology to enable voice assistants to interact with customers in collaborative settings. Sklyar’s system does this with a generalization of the RNN-T architecture that keeps track of turn-taking between multiple speakers, up to two of whom can be active simultaneously. The researchers’ separator-transducer-segmenter model is depicted below:

Separator-transducer-segmenter. The tokens <sot> and <eot> represent the start of turn and end of turn. Model blocks with the same color have tied parameters, and transcripts in the color-matched boxes belong to the same speaker.

A key element that yields improvements over an earlier approach is the use of dedicated tokens to recognize both starts and ends of speaker turns, for what the authors call “start-pointing” and “end-pointing”. (End-pointing is a standard feature of many interactive ASR systems necessary to predict when a talker is done.) Beyond representing the turn-taking structure in this symbolic way, the model is also penalized during training for taking too long to output these markers, in order to improve the latency and temporal accuracy of the outputs.
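To illustrate what start- and end-pointing targets look like, here is a small Python sketch that wraps each speaker turn in the dedicated tokens (the helper is hypothetical and, for simplicity, ignores the overlapping speech that the actual model supports):

```python
def add_turn_tokens(words, speakers):
    """Wrap each speaker turn in <sot> ... <eot> tokens, illustrating the
    start-pointing and end-pointing targets the model is trained to emit."""
    tokens, prev = [], None
    for w, s in zip(words, speakers):
        if s != prev:                 # speaker change: close old turn, open new
            if prev is not None:
                tokens.append("<eot>")
            tokens.append("<sot>")
            prev = s
        tokens.append(w)
    if prev is not None:
        tokens.append("<eot>")        # end-point the final turn
    return tokens

print(add_turn_tokens(["hi", "there", "hello"], ["A", "A", "B"]))
# ['<sot>', 'hi', 'there', '<eot>', '<sot>', 'hello', '<eot>']
```

During training, a latency penalty encourages the model to emit these markers promptly, so the predicted turn boundaries stay close to the true ones in time.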

Fairness in the performance of speech-enabled AI

The second theme we’d like to highlight, and one that is receiving increasing attention in speech and other areas of AI, is performance fairness: the desire to avert large differences in accuracy across different cohorts of users or on content associated with protected groups. As an example, concerns about this type of fairness gained prominence with demonstrations that certain computer vision algorithms performed poorly for certain skin tones, in part due to underrepresentation in the training data.


There’s a similar concern about speech-based AI, with speech properties varying widely as a function of speaker background and environment. A balanced representation in training sets is hard to achieve, since the speakers using commercial products are largely self-selected, and speaker attributes are often unavailable for many reasons, privacy among them. This topic is also the subject of a special session at Interspeech, Inclusive and Fair Speech Technologies, which several Alexa AI scientists are involved in as co-organizers and presenters.

One of the special-session papers, “Reducing geographic disparities in automatic speech recognition via elastic weight consolidation”, by Viet Anh Trinh and colleagues, looks at how geographic location within the U.S. affects ASR accuracy and how models can be adapted to narrow the gap for the worst-performing regions. Here and elsewhere, a two-step approach is used: first, subsets of speakers with higher-than-average error rates are identified; then a mitigation step attempts to improve performance for those cohorts. Trinh et al.’s method identifies the cohorts by partitioning the speakers according to their geographic longitude and latitude, using a decision-tree-like algorithm that maximizes the word-error-rate (WER) differences between resulting regions:

A map of 126 regions identified by the clustering tree. The color does not indicate a specific word error rate (WER), but regions with the same color do have the same WER.
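A minimal sketch of one greedy split of this kind (illustrative only: the paper's algorithm grows a full tree over many regions, whereas this picks a single best threshold on latitude or longitude):

```python
import numpy as np

def best_split(lat, lon, wer):
    """One greedy, decision-tree-style split: try thresholds on latitude
    and longitude and keep the one that maximizes the WER gap between
    the two resulting regions."""
    best = (None, None, -1.0)
    for axis, coord in (("lat", lat), ("lon", lon)):
        for t in np.unique(coord)[:-1]:          # candidate thresholds
            left, right = wer[coord <= t], wer[coord > t]
            gap = abs(left.mean() - right.mean())
            if gap > best[2]:
                best = (axis, t, gap)
    return best
```

Applying such splits recursively yields a partition of the map like the one above, with each leaf a geographic region whose WER differs maximally from its neighbors'.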

Next, the regions are ranked by their average WERs; data from the highest-error regions is identified for performance improvement. To achieve that, the researchers use fine-tuning to optimize the model parameters for the targeted regions, while also employing a technique called elastic weight consolidation (EWC) to minimize performance degradation on the remaining regions.

This is important to prevent a phenomenon known as “catastrophic forgetting”, in which neural models degrade substantially on prior training data during fine-tuning. The idea is to quantify the influence that different dimensions of the parameter space have on the overall performance and then avoid large variations along those dimensions when adapting to a data subset. This approach decreases the WER mean, maximum, and variance across regions and even the overall WER (including the regions not fine-tuned on), beating out several baseline methods for model adaptation.
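In the standard EWC formulation, that penalty is a weighted squared drift of the parameters from their pre-fine-tuning values, with per-parameter importances typically estimated from the diagonal of the Fisher information matrix. A minimal sketch, with `lam` as the strength hyperparameter:

```python
import numpy as np

def ewc_loss(task_loss, theta, theta_star, fisher, lam=1.0):
    """Fine-tuning objective with elastic weight consolidation: parameters
    that the Fisher information marks as important for the original task
    (anchored at theta_star) are discouraged from drifting during adaptation."""
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
    return task_loss + penalty
```

Parameters with near-zero Fisher values can move freely to fit the targeted regions, while high-importance parameters are held near `theta_star`, which is what protects performance on the remaining regions.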

Pranav Dheram et al., in their paper “Toward fairness in speech recognition: Discovery and mitigation of performance disparities”, look at alternative methods for identifying underperforming speaker cohorts. One approach is to use human-defined geographic regions as given by postal (a.k.a. zip) codes, in combination with demographic information from U.S. census data, to partition U.S. geography.


Zip codes are sorted into binary partitions by majority demographic attributes, so as to maximize WER discrepancies. The partition with higher WER is then targeted for mitigations, an approach similar to that adopted in the Trinh et al. paper. However, this approach is imprecise (since it lumps together speakers by zip code) and limited to available demographic data, so it generalizes poorly to other geographies.

Alternatively, Dheram et al. use speech characteristics learned by a neural speaker identification model to group speakers. These “speaker embedding vectors” are clustered, reflecting the intuition that speakers who sound similar will tend to have similar ASR difficulty.

Subsequently, these virtual speaker regions (not individual identities) can be ranked by difficulty and targeted for mitigation, without relying on human labeling, grouping, or self-identification of speakers or attributes. As shown in the table below, the automatic approach identifies a larger gap in ASR accuracy than the “geo-demographic” approach, while at the same time targeting a larger share of speakers for performance mitigation:

Cohort discovery    WER gap (%)    Bottom-cohort share (%)
Geodemographic      41.7           0.8
Automatic           65.0           10.0
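The automatic cohort-discovery step can be sketched as plain k-means over speaker embeddings followed by ranking the clusters by WER. This is a simplified stand-in: the two-dimensional toy embeddings and the deterministic initialization are illustrative, and the sketch assumes no cluster goes empty.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means over speaker embeddings. Deterministic init from the
    first k points keeps the sketch reproducible (real systems use better
    initialization and far higher-dimensional embeddings)."""
    centers = X[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) for c in range(k)])
    return labels

def rank_cohorts(labels, wer, k):
    """Rank the virtual speaker regions by average WER, worst first."""
    means = [(c, wer[labels == c].mean()) for c in range(k)]
    return sorted(means, key=lambda cw: cw[1], reverse=True)
```

The worst-ranked clusters are then the cohorts targeted for mitigation, with no demographic labels or self-identification required at any point.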

The final fairness-themed paper we highlight explores yet another approach to avoiding performance disparities, known as adversarial reweighting (ARW). Instead of relying on explicit partitioning of the input space, this approach assigns continuous weights to the training instances (as a function of input features), with the idea that harder examples get higher weights and thereby exert more influence on the performance optimization.


Second, ARW more tightly interleaves, and iterates, the (now weighted) cohort-identification and mitigation steps. Mathematically, this is formalized as a min-max optimization algorithm that alternates between maximizing the error by changing the sample weights (hence “adversarial”) and minimizing the weighted training error by adjusting the target model parameters.
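A toy version of this alternation on a one-parameter model makes the min-max structure concrete. Everything here is illustrative: the real adversary is a learned network rather than a softmax over losses, and the model update is a gradient step on the actual task loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def adversary_step(theta, x, temp=10.0):
    """Adversary: shift weight toward the samples the model fits worst,
    which maximizes the weighted loss at the current theta."""
    losses = (theta - x) ** 2
    return softmax(losses / temp)

def model_step(theta, x, w, lr=0.1):
    """Model: one gradient step minimizing the weighted loss
    sum_i w_i * (theta - x_i)**2."""
    grad = 2.0 * np.sum(w * (theta - x))
    return theta - lr * grad

x = np.array([0.0, 0.0, 10.0])       # two majority samples and one outlier
theta = x.mean()                      # unweighted fit favors the majority
w = adversary_step(theta, x)          # weight shifts to the high-loss outlier
theta_new = model_step(theta, x, w)   # update pulls theta toward the outlier
```

On this toy data, the adversary concentrates weight on the poorly fit outlier, so the model update moves toward it; that is the mechanism by which hard examples exert more influence on the optimization.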

ARW was designed for group fairness in classification and regression tasks that take individual data points as inputs. “Adversarial reweighting for speaker verification fairness”, by Minho Jin et al., looks at how the concept can be applied to a classification task that depends on pairs of input samples, i.e., checking whether two speech samples come from the same speaker. Solving this problem could help make a voice-based assistant more reliable at personalization and other functions that require knowing who is speaking.

The authors look at several ways to adapt ARW to learning similarity among speaker embeddings. The method that ultimately worked best assigns each pair of input samples an adversarial weight that is the sum of individual sample weights (thereby reducing the dimensionality of the weight prediction). The individual sample weights are also informed by which region of the speaker embedding space a sample falls into (as determined by unsupervised k-means clustering, the same technique used in Dheram et al.’s automatic cohort-identification method).

Computing adversarial-reweighting (ARW) weights.

We omit the details, but once the pairwise (PW) adversarial weights are formalized in this way, they can be inserted into the loss function for metric learning, which is the basis of training a speaker verification model. Min-max optimization can then take turns training the adversary network that predicts the weights and optimizing the speaker embedding extractor that learns speaker similarity.

On a public speaker verification corpus, the resulting system reduced the overall equal-error rate by 7.6%, while also reducing the gap between genders by 17%. It also reduced the error variability across different countries of origin by nearly 10%. Note that, as in the case of the Trinh et al. ASR fairness paper, fairness mitigation improves both performance disparities and overall accuracy.

This concludes our thematic highlights of Alexa Speech Interspeech papers. Note that Interspeech covers much more than speech and speaker recognition. Please check out companion pieces that feature additional work, drawn from technical areas that are no less essential for a functioning speech-enabled AI assistant: natural-language understanding and speech synthesis.

US, WA, Bellevue
As a Principal Research Scientist in the Amazon Artificial General Intelligence (AGI) Data Services organization, you will be responsible for sourcing and quality of massive datasets powering Amazon's AI. You will play a critical role in driving innovation and advancing the state-of-the-art in natural language processing and machine learning. You will be responsible for developing and implementing cutting-edge algorithms and techniques to extract valuable insights from large-scale data sources. You will work closely with cross-functional teams, including product managers, engineers, and data scientists to ensure that our AI systems are aligned with human policies and preferences. Key job responsibilities - Responsible for sourcing and quality of massive datasets powering Amazon's AI. - Collaborate with cross-functional teams to ensure that Amazon’s AI models are aligned with human preferences. - Develop and implement strategies to improve the efficiency and effectiveness of programs delivering massive datasets. - Identify and prioritize research opportunities that have the potential to significantly impact our AI systems. - Communicate research findings and progress to senior leadership and stakeholders. We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA | Boston, MA, USA