Alexa’s text-to-speech research at Interspeech 2022

Highlighted papers focus on transference — of prosody, accent, and speaker identity.

Interspeech, the world’s largest and most comprehensive conference on the science and technology of spoken-language processing, took place last week in Incheon, Korea, with Amazon as a platinum sponsor. Amazon Science asked three of Alexa AI’s leading scientists, in the fields of speech, spoken-language understanding, and text-to-speech, to highlight some of Amazon’s contributions to the conference.

In this installment, Antonio Bonafonte, a senior applied scientist in the Amazon Text-to-Speech group, highlights work on transference — of prosody, accent, and speaker identity — in text-to-speech.

This year, the Amazon Text-to-Speech organization presented more than a dozen papers at Interspeech 2022. Amazon TTS gives Alexa its voice, and the group works every day to add more expressiveness and conversational awareness to it. Here we highlight some of the papers that illustrate what we are doing in those directions.

Expressive and contextually appropriate prosody

Neural text-to-speech (TTS) techniques have made the speech produced by TTS systems much more natural. To make the prosody of the speech more expressive and context appropriate as well, researchers have done considerable work on learning prosody representations from ground-truth speech.

The paper “CopyCat2: A single model for multi-speaker TTS and many-to-many fine-grained prosody transfer”, by Sri Karlapati and coauthors, proposes a model that learns word-level speaker-independent prosody representations from multispeaker speech. These representations can be used for fine-grained prosody transfer from multiple source speakers to multiple target speakers. Furthermore, predicting the word-level prosody representations from text results in a TTS model with improved naturalness and appropriateness.

The CopyCat2 architecture.

The word-level prosodic representation is split into two components, one for timing and rhythm and a second for other prosodic characteristics. The figure above shows how the second component is learned using a conditional variational autoencoder. The input mel-spectrogram (X), which represents the speech signal as energies in certain frequency bands, is compressed into a sequence of vectors (Z), one per word. Those vectors are then used to reconstruct the mel-spectrogram.


Because the decoder is conditioned on the phonemes and the speaker identity, the word-level vectors capture only speaker-independent prosody information. A similar approach is used to learn speaker-independent word-level representations of timing.
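To make the mechanics concrete, here is a minimal, hypothetical sketch of a word-level conditional-VAE prosody encoder in the spirit of the description above. The module names, dimensions, and pooling choices are illustrative assumptions, not the CopyCat2 implementation.

```python
# Hypothetical sketch of a word-level conditional VAE prosody encoder,
# loosely following the CopyCat2 description above. Names and dimensions
# are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class WordProsodyEncoder(nn.Module):
    """Compress mel-spectrogram frames into one latent vector per word."""
    def __init__(self, n_mels=80, hidden=256, z_dim=64):
        super().__init__()
        self.frame_rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden, z_dim)
        self.to_logvar = nn.Linear(2 * hidden, z_dim)

    def forward(self, mel, word_boundaries):
        # mel: (T, n_mels); word_boundaries: list of (start, end) frame indices
        frames, _ = self.frame_rnn(mel.unsqueeze(0))      # (1, T, 2*hidden)
        frames = frames.squeeze(0)
        word_vecs = torch.stack(
            [frames[s:e].mean(dim=0) for s, e in word_boundaries]
        )                                                  # (n_words, 2*hidden)
        mu, logvar = self.to_mu(word_vecs), self.to_logvar(word_vecs)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

# Usage: a decoder (not shown) would reconstruct the mel-spectrogram from z
# together with phoneme and speaker embeddings, so z is pushed to carry only
# speaker-independent, word-level prosody.
mel = torch.randn(120, 80)                   # 120 frames of an 80-bin mel-spectrogram
boundaries = [(0, 40), (40, 90), (90, 120)]  # three words
encoder = WordProsodyEncoder()
z, mu, logvar = encoder(mel, boundaries)
print(z.shape)  # torch.Size([3, 64])
```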

To use CopyCat2 as a text-to-speech model, the researchers train an additional model to predict the parameters of the prosodic-word-embedding distribution (Z) from BERT embeddings. In tests on a multispeaker US English dataset spanning varied styles, including news, facts, and greetings, they compared their approach both to a strong TTS baseline with contextually appropriate prosody and to copy-synthesized speech. They found that their model reduced the gap in naturalness between synthetic and real speech by 22.79%.
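At synthesis time, the word-level prosody distribution has to be predicted from text alone. The sketch below assumes a generic 768-dimensional BERT-style word embedding and a small recurrent predictor; it illustrates the idea rather than the authors' exact architecture.

```python
# Illustrative sketch of predicting the parameters of the word-level prosody
# distribution from contextual word embeddings. The 768-dim "BERT-like" input
# and the predictor architecture are assumptions made for illustration.
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    def __init__(self, bert_dim=768, hidden=256, z_dim=64):
        super().__init__()
        self.context_rnn = nn.GRU(bert_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2 * z_dim)  # predicts mean and log-variance

    def forward(self, word_embeddings):
        # word_embeddings: (batch, n_words, bert_dim)
        h, _ = self.context_rnn(word_embeddings)
        mu, logvar = self.head(h).chunk(2, dim=-1)
        return mu, logvar

predictor = ProsodyPredictor()
mu, logvar = predictor(torch.randn(1, 12, 768))           # 12 words in the utterance
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # sampled word-level prosody
```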

Reducing the data required to build expressive voices

Training a state-of-the-art TTS model is usually a data-intensive process, and building a portfolio of voices in multiple styles and languages compounds the data requirement.

In the paper “Low-data? No problem: low-resource, language-agnostic conversational text-to-speech via F0-conditioned data augmentation”, Giulia Comini et al. propose a methodology to build expressive text-to-speech voices using only one hour of expressive speech from the target speaker. The method requires 8–10 hours of neutral speech — that is, speech with a limited range of expression — from another speaker, a significant reduction from previous methods.

A new approach to building expressive text-to-speech voices can make do with only an hour of expressive speech from the target speaker.

The authors propose to convert the neutral data from the supporting speaker to the target speaker’s identity while imparting the target speaker’s expressive style. They use a modification of the original CopyCat prosody transfer model. As shown in the figure, the CopyCat parallel decoder regenerates the mel-spectrogram from the speaker embedding; the fundamental frequency (F0), or perceived pitch, of individual phonemes; the phonetic representation; and the output of the CopyCat reference encoder. The reference encoder captures whatever information in the source mel-spectrogram is not already given to the decoder explicitly, that is, not conveyed by the phonemes, their durations and F0, or the speaker embedding.


The model is trained with the expressive speech of the target speaker and neutral speech from the supporting speaker. Once the model is trained, the mel-spectrogram of the supporting data is transformed into augmented expressive data for the target speaker. The CopyCat decoder is conditioned on the target speaker embedding and on an expressive F0 contour generated from the text and the speaker embedding by an independent model trained with the same data.
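The augmentation step can be summarized as a single function: encode what the decoder is not told explicitly, predict an expressive F0 contour for the target speaker, and decode with the target identity. The sketch below uses toy stand-ins for the CopyCat components, so the interfaces are assumptions made for illustration, not the paper's modules.

```python
# Hypothetical sketch of the data-augmentation step described above: neutral
# utterances from the supporting speaker are re-synthesized with the target
# speaker's identity and an expressive F0 contour. The component interfaces
# (reference_encoder, f0_predictor, decoder) are assumptions for illustration.
import torch

def augment_utterance(source_mel, phonemes, target_spk_emb,
                      reference_encoder, f0_predictor, decoder):
    """Convert one neutral utterance into expressive target-speaker data."""
    # Residual information not covered by phonemes, F0, or the speaker embedding.
    ref = reference_encoder(source_mel)
    # Expressive F0 contour predicted from text for the target speaker.
    f0 = f0_predictor(phonemes, target_spk_emb)
    # Regenerate the mel-spectrogram with the target identity.
    return decoder(phonemes, f0, target_spk_emb, ref)

# Toy stand-ins so the sketch runs end to end.
ref_enc = lambda mel: mel.mean(dim=0, keepdim=True)             # (1, n_mels)
f0_pred = lambda ph, spk: torch.rand(ph.shape[0])                # one F0 value per phoneme
dec = lambda ph, f0, spk, ref: torch.randn(f0.shape[0] * 5, 80)  # dummy mel output

augmented_mel = augment_utterance(
    source_mel=torch.randn(200, 80),       # neutral mel from the supporting speaker
    phonemes=torch.randint(0, 60, (40,)),  # 40 phoneme IDs
    target_spk_emb=torch.randn(192),
    reference_encoder=ref_enc, f0_predictor=f0_pred, decoder=dec,
)
```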

The authors show that the F0 distribution of the augmented data resembles that of the target speaker, and that their data augmentation approach improves on one that does not use F0 conditioning.

Alexa multilingual models

Amazon has developed a shared neural TTS model for several speakers and languages that can extend a synthetic voice trained on data in only one language into other languages. For instance, the technology allows the English-language Alexa feminine-sounding voice to speak fluent Spanish in US multilingual homes. Similarly, Alexa’s English-language US masculine-sounding voice already has a British accent in the UK and speaks Spanish in the US, French in Canada, and German in Germany.


Alexa communicates on a wide variety of topics, and the style of speech should match the textual content. Transferring styles across languages while maintaining a fixed speaker identity, however, is challenging.

In the paper “Cross-lingual style transfer with conditional Prior VAE and style loss”, Dino Ratcliffe et al. propose an architecture for cross-lingual style transfer. Specifically, they improve the Spanish representation across four styles — newscaster, DJ, excited, and disappointed — while maintaining a single speaker identity for which only English samples are available.

A new approach to cross-lingual style transfer groups utterances of the same style together irrespective of language.

They achieve this by using a learned-conditional-prior variational autoencoder (LCPVAE), a hierarchical variational-autoencoder (VAE) approach.

The approach introduces a secondary VAE, which is conditioned on one-hot-encoded style information; that is, the style code has as many bits as there are styles, and a 1 at exactly one spot denotes a particular style. This results in a structured embedding space, which groups together utterances of the same style irrespective of language.


As can be seen in the figure, the TTS decoder generates the mel-spectrogram from the speaker embedding, language, phonemes, and the style embedding. During training, the style embeddings are generated by the LCPVAE using the one-hot code and the reference mel-spectrogram; at inference, the style embedding is the centroid of the embeddings for a particular style. The model’s loss function includes a style classification term that steers the generated mel-spectrogram toward the same style as the reference spectrogram.
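The inference-time style selection is simple to illustrate: the one-hot style code indexes a style, and the style embedding fed to the decoder is the centroid of that style's training embeddings. The snippet below is a hypothetical sketch with made-up dimensions, not the paper's code.

```python
# Illustrative sketch of one-hot style coding and centroid-based style
# selection at inference. Variable names and dimensions are assumptions.
import torch

styles = ["newscaster", "dj", "excited", "disappointed"]

def one_hot(style: str) -> torch.Tensor:
    v = torch.zeros(len(styles))
    v[styles.index(style)] = 1.0   # a 1 at exactly one position denotes the style
    return v

# Pretend we stored the LCPVAE style embeddings of the training utterances,
# together with their style labels.
train_embeddings = torch.randn(1000, 64)
train_labels = torch.randint(0, len(styles), (1000,))

def style_centroid(style: str) -> torch.Tensor:
    idx = train_labels == styles.index(style)
    return train_embeddings[idx].mean(dim=0)

style_emb = style_centroid("excited")   # fed to the TTS decoder at inference
print(one_hot("excited"), style_emb.shape)
```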

Based on subjective evaluations (MUSHRA), this approach shows significant improvements in cross-lingual style representation in all four styles: DJ (2.8%), excited (5.3%), disappointed (3.5%), and newscaster (2.3%), without compromising speaker similarity or in-lingual style representation.

Creating new characters

Current TTS technology can produce realistic synthetic speech for voice identities seen during training. But synthesizing speech with speakers unseen during training, without post-training adaptation, remains a big challenge. Synthesis with a new voice often means creating high-quality data to train a generative model.


Normalizing flows are generative models with tractable distributions, where sampling and density evaluation can be both exact and efficient. In “Creating new voices using normalizing flows”, Piotr Biliński and his colleagues investigate the ability of normalizing flows in TTS and voice conversion modes to extrapolate from speakers observed during training to unseen speaker identities — without any recordings of those speakers, and therefore without the possibility of target speaker adaptation.

Their approach is based on the Flow-TTS model, but instead of using it to generate synthetic speech for speakers seen during training, they adapt it to create new voices. Key contributions include the ability to sample new speakers, the introduction of a voice conversion mode, and a comparison of that mode with the TTS mode.

Instead of using normalizing flows to synthesize the speech of seen speakers, Amazon researchers adapted them to create new voices.

The architecture of the model consists of an invertible transformation based on normalizing flows. The design allows lossless reconstruction of a mel-spectrogram from a representational space (z), given conditions (θ) such as the speaker embedding. In text-to-speech mode, sampling z from the prior distribution and running the inverse transformation generates a mel-spectrogram consistent with the conditions θ.

In voice conversion mode, the source mel-spectrogram is mapped to a latent representation z with the source-speaker embedding as the condition. The latent representation z is then converted back to a mel-spectrogram using the target speaker’s embedding. To generate speaker embeddings for new voices, the researchers train a separate neural network that produces plausible speaker embeddings for a given regional English variant.
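Both modes follow from the invertibility of the flow. The sketch below replaces the full flow stack with a single speaker-conditioned affine layer, which is enough to show how TTS mode (prior sample, inverse transform) and voice conversion mode (forward with the source speaker, inverse with the target speaker) use the same model; all names and shapes are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of the two usage modes of an invertible, condition-dependent
# flow. The single affine layer below is only a stand-in for a real flow stack.
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """One invertible, speaker-conditioned affine transformation."""
    def __init__(self, n_mels=80, spk_dim=192):
        super().__init__()
        self.shift = nn.Linear(spk_dim, n_mels)
        self.log_scale = nn.Linear(spk_dim, n_mels)

    def forward(self, mel, spk):   # mel -> z (analysis / density evaluation)
        return (mel - self.shift(spk)) * torch.exp(-self.log_scale(spk))

    def inverse(self, z, spk):     # z -> mel (synthesis)
        return z * torch.exp(self.log_scale(spk)) + self.shift(spk)

flow = ConditionalAffineFlow()
src_spk, tgt_spk = torch.randn(192), torch.randn(192)

# Text-to-speech mode: sample z from the (standard normal) prior, then invert.
z_prior = torch.randn(120, 80)                 # 120 frames
tts_mel = flow.inverse(z_prior, tgt_spk)

# Voice conversion mode: analyze with the source speaker, resynthesize with the target.
source_mel = torch.randn(120, 80)
z = flow(source_mel, src_spk)
converted_mel = flow.inverse(z, tgt_spk)
```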

Extensive evaluations demonstrate that the proposed approach systematically achieves state-of-the-art performance in zero-shot speech synthesis and can create voices distinct from those in the training set. In addition, the authors find that as the amount of conditioning information given to the model increases, the voice conversion and TTS modes can be used interchangeably.
