Alexa’s text-to-speech research at Interspeech 2022

Highlighted papers focus on transference — of prosody, accent, and speaker identity.

Interspeech, the world’s largest and most comprehensive conference on the science and technology of spoken-language processing, took place last week in Incheon, Korea, with Amazon as a platinum sponsor. Amazon Science asked three of Alexa AI’s leading scientists — in the fields of speech, spoken-language understanding, and text-to-speech — to highlight some of Amazon’s contributions to the conference.

In this installment, Antonio Bonafonte, a senior applied scientist in the Amazon Text-to-Speech group, highlights work on transference — of prosody, accent, and speaker identity — in text-to-speech.

This year, the Amazon Text-to-Speech organization presented more than a dozen papers at Interspeech 2022. Amazon TTS gives Alexa its voice, and the team works every day to make that voice more expressive and conversationally aware. Here we highlight some of the papers that illustrate what we are doing in those directions.

Expressive and contextually appropriate prosody

Neural text-to-speech (TTS) techniques have made the speech produced by TTS systems much more natural. To make the prosody of the speech more expressive and context appropriate as well, researchers have done considerable work on learning prosody representations from ground-truth speech.

The paper “CopyCat2: A single model for multi-speaker TTS and many-to-many fine-grained prosody transfer”, by Sri Karlapati and coauthors, proposes a model that learns word-level speaker-independent prosody representations from multispeaker speech. These representations can be used for fine-grained prosody transfer from multiple source speakers to multiple target speakers. Furthermore, predicting the word-level prosody representations from text results in a TTS model with improved naturalness and appropriateness.

The CopyCat2 architecture.

The word-level prosodic representation is split into two components, one for timing and rhythm and a second for other prosodic characteristics. The figure above shows how the second component is learned using a conditional variational autoencoder. The input mel-spectrogram (X), which represents the speech signal as energies in certain frequency bands, is compressed into a sequence of vectors (Z), one per word. Those vectors are then used to reconstruct the mel-spectrogram.

The decoder is conditioned on the phonemes and the speaker identity, so the word-level vectors (Z) capture speaker-independent prosody information. A similar approach is used to learn speaker-independent word-level representations of timing.
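
As a rough illustration of the idea, here is a minimal PyTorch-style sketch of a word-level conditional VAE that processes one word's frames at a time; the module names, dimensions, and pooling choices are assumptions made for exposition, not the actual CopyCat2 architecture.

```python
# Illustrative sketch only: a word-level conditional VAE for prosody.
import torch
import torch.nn as nn


class WordProsodyVAE(nn.Module):
    def __init__(self, n_mels=80, phone_dim=128, spk_dim=64, z_dim=32, hidden=256):
        super().__init__()
        # Encoder: summarizes the mel frames of one word into a latent distribution.
        self.frame_rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden, z_dim)
        self.to_logvar = nn.Linear(2 * hidden, z_dim)
        # Decoder: reconstructs the frames from z plus phoneme encodings and a speaker
        # embedding, so z need not carry phonetic or speaker information.
        self.decoder = nn.GRU(z_dim + phone_dim + spk_dim, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def encode_word(self, word_frames):
        # word_frames: (batch, frames_in_word, n_mels) -> one (mu, logvar) per word
        _, h = self.frame_rnn(word_frames)
        h = torch.cat([h[0], h[1]], dim=-1)  # merge the two GRU directions
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, phone_enc, spk_emb):
        # Broadcast the per-word z and the speaker embedding over the word's frames.
        T = phone_enc.size(1)
        cond = torch.cat([z.unsqueeze(1).expand(-1, T, -1),
                          phone_enc,
                          spk_emb.unsqueeze(1).expand(-1, T, -1)], dim=-1)
        out, _ = self.decoder(cond)
        return self.to_mel(out)

    def forward(self, word_frames, phone_enc, spk_emb):
        mu, logvar = self.encode_word(word_frames)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decode(z, phone_enc, spk_emb)
        # Standard VAE objective: reconstruction error plus KL toward the prior.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl
```

Because the decoder already receives the phonemes and the speaker embedding, the reconstruction objective pushes z to encode only what remains, which is chiefly word-level prosody.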

To use CopyCat2 as a text-to-speech model, the researchers train an additional model to predict the parameters of the prosodic-word-embedding distribution (Z) from BERT embeddings. In tests involving a multispeaker US English dataset of varied styles, including news, facts, and greetings, they compared their approach to a strong TTS baseline with contextually appropriate prosody and copy-synthesized speech. They found that their model reduced the gap in naturalness between synthetic and real speech by 22.79%.
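
A text-side predictor of the kind described above might look like the following sketch, which maps precomputed contextual word embeddings (for example, from BERT) to the parameters of the word-level prosody distribution; the names and sizes are illustrative, not taken from the paper.

```python
# Illustrative sketch only: predict word-level prosody distributions from BERT-style embeddings.
import torch
import torch.nn as nn


class ProsodyPredictor(nn.Module):
    def __init__(self, bert_dim=768, z_dim=32, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(bert_dim, hidden, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden, z_dim)
        self.to_logvar = nn.Linear(2 * hidden, z_dim)

    def forward(self, word_embeddings):
        # word_embeddings: (batch, n_words, bert_dim), one vector per word in the sentence
        h, _ = self.rnn(word_embeddings)
        return self.to_mu(h), self.to_logvar(h)


# At synthesis time, one could sample from (or simply take the mean of) each word's
# predicted distribution and feed it to the decoder in place of the oracle Z.
predictor = ProsodyPredictor()
fake_bert = torch.randn(1, 12, 768)  # stand-in for BERT embeddings of a 12-word sentence
mu, logvar = predictor(fake_bert)
z = mu  # deterministic choice: use the mean
```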

Reducing the data required to build expressive voices

Training a state-of-the-art TTS model is usually a data-intensive process, and building a portfolio of voices in multiple styles and languages compounds the data requirement.

In the paper “Low-data? No problem: low-resource, language-agnostic conversational text-to-speech via F0-conditioned data augmentation”, Giulia Comini et al. propose a methodology to build expressive text-to-speech voices using only one hour of expressive speech from the target speaker. The method requires 8–10 hours of neutral speech — that is, speech with a limited range of expression — from another speaker, a significant reduction from previous methods.

A new approach to building expressive text-to-speech voices can make do with only an hour of expressive speech from the target speaker.

The authors propose to convert the supporting speaker’s neutral data to the target speaker’s identity while adopting the target speaker’s expressive style. They use a modification of the original CopyCat prosody transfer model. As shown in the figure, the CopyCat parallel decoder regenerates the mel-spectrogram from the speaker embedding; the fundamental frequency (F0), or perceived pitch, of individual phonemes; the phonetic representation; and the output of the CopyCat reference encoder. The reference encoder captures whatever information in the source mel-spectrogram is not already given to the decoder explicitly, that is, anything beyond the phonemes, their durations and F0, and the speaker embedding.

The model is trained with the expressive speech of the target speaker and neutral speech from the supporting speaker. Once the model is trained, the mel-spectrograms of the supporting speaker’s recordings are transformed into augmented expressive data for the target speaker. The CopyCat decoder is conditioned on the target speaker embedding and on an expressive F0 contour generated from the text and the speaker embedding by an independent model trained with the same data.
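
Putting the pieces together, the augmentation step can be sketched as Python pseudocode like the following; the component names (copycat.reference_encoder, copycat.decoder, f0_predictor) and the data layout are hypothetical stand-ins for the trained modules described above.

```python
# Illustrative sketch only: convert neutral supporting-speaker utterances into
# expressive target-speaker training data.
def augment_neutral_corpus(neutral_corpus, copycat, f0_predictor, target_spk_emb):
    augmented = []
    for utt in neutral_corpus:
        # 1. Predict an expressive F0 contour from the text, conditioned on the target speaker.
        f0 = f0_predictor(utt["phonemes"], target_spk_emb)
        # 2. Let the reference encoder capture whatever the explicit decoder inputs miss.
        ref = copycat.reference_encoder(utt["mel"])
        # 3. Regenerate the mel-spectrogram with the target speaker identity and expressive F0.
        mel = copycat.decoder(utt["phonemes"], f0, target_spk_emb, ref)
        augmented.append({"phonemes": utt["phonemes"], "mel": mel})
    return augmented
```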

The authors show that the F0 distribution of the augmented data resembles that of the target speaker’s recordings. They also show that their data augmentation approach improves on one that does not use F0 conditioning.

Alexa multilingual models

Amazon has developed a shared neural TTS model for several speakers and languages that can extend a synthetic voice trained on data in only one language into other languages. For instance, the technology allows the English-language Alexa feminine-sounding voice to speak fluent Spanish in US multilingual homes. Similarly, Alexa’s English-language US masculine-sounding voice already has a British accent in the UK and speaks Spanish in the US, French in Canada, and German in Germany.

Alexa communicates on a wide variety of topics, and the style of speech should match the textual content. Transferring styles across languages while maintaining a fixed speaker identity, however, is challenging.

In the paper “Cross-lingual style transfer with conditional Prior VAE and style loss”, Dino Ratcliffe et al. propose an architecture for cross-lingual style transfer. Specifically, they improve the representation of four styles — newscaster, DJ, excited, and disappointed — in Spanish speech while maintaining a single speaker identity for which only English samples are available.

A new approach to cross-lingual style transfer groups utterances of the same style together irrespective of language.

They achieve this by using a learned-conditional-prior variational autoencoder (LCPVAE), a hierarchical variational-autoencoder (VAE) approach.

The approach introduces a secondary VAE, which is conditioned on one-hot-encoded style information; that is, the style code has as many bits as there are styles, and a 1 at exactly one spot denotes a particular style. This results in a structured embedding space, which groups together utterances of the same style irrespective of language.
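
The core of that idea, a latent prior whose parameters are predicted from the one-hot style code, can be sketched as follows; this is a deliberately small illustration of a learned conditional prior, not the paper's full hierarchical LCPVAE.

```python
# Illustrative sketch only: a style-conditioned prior and the KL term that pulls
# same-style utterances together in the latent space, regardless of language.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STYLES = 4  # newscaster, DJ, excited, disappointed


class ConditionalPrior(nn.Module):
    """Maps a one-hot style code to the mean and log-variance of the latent prior."""
    def __init__(self, z_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_STYLES, hidden), nn.Tanh())
        self.to_mu = nn.Linear(hidden, z_dim)
        self.to_logvar = nn.Linear(hidden, z_dim)

    def forward(self, style_onehot):
        h = self.net(style_onehot)
        return self.to_mu(h), self.to_logvar(h)


def kl_to_conditional_prior(q_mu, q_logvar, p_mu, p_logvar):
    # KL(q || p) between diagonal Gaussians: the posterior of an utterance's style
    # embedding is regularized toward the prior of its style, not a global N(0, I).
    return 0.5 * torch.sum(
        p_logvar - q_logvar
        + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
        - 1.0,
        dim=-1,
    ).mean()


style = F.one_hot(torch.tensor([2]), N_STYLES).float()  # "excited"
prior = ConditionalPrior()
p_mu, p_logvar = prior(style)
# A posterior encoder (not shown) would supply q_mu and q_logvar for each utterance.
```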

As can be seen in the figure, the TTS decoder generates the mel-spectrogram from the speaker embedding, language, phonemes, and the style embedding. During training, the style embeddings are generated by the LCPVAE using the one-hot code and the reference mel-spectrogram; at inference, the style embedding is the centroid of the embeddings for a particular style. The model’s loss function includes a style classification term that steers the generated mel-spectrogram toward the same style as the reference spectrogram.
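
At inference time, selecting a style then reduces to something like the following sketch, which computes per-style centroids over the style embeddings produced during training; the variable names are illustrative.

```python
# Illustrative sketch only: centroid-based style selection at inference.
import torch


def style_centroids(style_embeddings, style_labels, n_styles):
    """style_embeddings: (N, z_dim) embeddings from training utterances;
    style_labels: (N,) integer style index for each embedding."""
    return torch.stack([
        style_embeddings[style_labels == s].mean(dim=0) for s in range(n_styles)
    ])


embeddings = torch.randn(1000, 16)   # stand-in for training-set style embeddings
labels = torch.randint(0, 4, (1000,))
centroids = style_centroids(embeddings, labels, n_styles=4)
dj_style = centroids[1]              # passed to the decoder to synthesize in the DJ style
```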

Based on subjective evaluations (MUSHRA), this approach shows significant improvements in cross-lingual style representation for all four styles: DJ (2.8%), excited (5.3%), disappointed (3.5%), and newscaster (2.3%), without compromising speaker similarity or in-language style representation.

Creating new characters

Current TTS technology can produce realistic synthetic speech for voice identities seen during training. But synthesizing speech in voices unseen during training, without post-training adaptation, remains a big challenge. Synthesis with a new voice typically means collecting high-quality data to train a generative model.

Normalizing flows are generative models with tractable distributions, where sampling and density evaluation can be both exact and efficient. In “Creating new voices using normalizing flows”, Piotr Biliński and his colleagues investigate the ability of normalizing flows in TTS and voice conversion modes to extrapolate from speakers observed during training to unseen speaker identities — without any recordings of those speakers, and therefore without the possibility of target speaker adaptation.

Their approach is based on the Flow-TTS model, but instead of using it to generate synthetic speech for speakers seen during training, they adapt it to create new voices. Key contributions include adding the ability to sample new speakers, introducing a voice conversion mode, and comparing it with the TTS mode.

Instead of using normalizing flows to synthesize the speech of seen speakers, Amazon researchers adapted them to create new voices.

The model’s architecture consists of an invertible transformation based on normalizing flows. The design allows lossless reconstruction of a mel-spectrogram from a representational space (z), given conditions (θ) such as the speaker embedding. In text-to-speech mode, sampling z from the prior distribution and running the inverse transformation generates a mel-spectrogram that matches the conditions θ.

In voice conversion mode, the source mel-spectrogram is mapped to a latent representation z using the source speaker’s embedding as the condition; z is then converted back to a mel-spectrogram using the target speaker’s embedding. To create speaker embeddings for new voices, the researchers train a separate neural network that generates plausible speaker embeddings for a given regional variant of English.
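
The two modes can be illustrated with a toy, single-step conditional flow like the sketch below; a real model such as Flow-TTS stacks many invertible layers with richer conditioning, so this is only a schematic of the forward/inverse usage, with made-up names and shapes.

```python
# Illustrative sketch only: one conditional affine step standing in for a full normalizing flow.
import torch
import torch.nn as nn


class ToyConditionalFlow(nn.Module):
    """Element-wise affine map between mel frames and latents, with scale and shift
    predicted from the conditioning (here, a speaker embedding); both directions are exact."""
    def __init__(self, n_mels=80, cond_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * n_mels))

    def forward(self, x, cond):  # mel frames -> latent z
        scale, shift = self.net(cond).chunk(2, dim=-1)
        return (x - shift) * torch.exp(-scale)

    def inverse(self, z, cond):  # latent z -> mel frames
        scale, shift = self.net(cond).chunk(2, dim=-1)
        return z * torch.exp(scale) + shift


flow = ToyConditionalFlow()
source_spk, target_spk = torch.randn(64), torch.randn(64)

# TTS mode: sample z from the prior and run the inverse transform for the desired speaker.
z = torch.randn(100, 80)            # 100 frames drawn from N(0, I)
mel_tts = flow.inverse(z, target_spk)

# Voice conversion mode: encode the source speaker's mel to z, decode with the target speaker.
source_mel = torch.randn(100, 80)   # stand-in for a real recording
z_src = flow(source_mel, source_spk)
mel_converted = flow.inverse(z_src, target_spk)
```

For a brand-new voice, the target speaker embedding would come from the separately trained speaker-embedding generator mentioned above.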

Extensive evaluations demonstrate that the proposed approach systematically achieves state-of-the-art performance in zero-shot speech synthesis and can create voices distinct from those in the training set. The authors also find that as the amount of conditioning provided to the model increases, the voice conversion and TTS modes can be used interchangeably.
