Scalable framework lets multiple text-to-speech models coexist

Thanks to a set of simple abstractions, models with different architectures can be integrated and optimized for particular hardware accelerators.

Voice agents like Alexa often have a variety of speech synthesizers, which differ in attributes such as expressivity, personality, language, and speaking style. The machine learning models underlying these applications can have completely different architectures, and integrating those architectures into a single voice service can be a time-consuming and challenging process.

To make that process easier and faster, Amazon’s Text-to-Speech group has developed a universal model integration framework that allows us to customize production voice models in a quick and scalable way.

Model variety

State-of-the-art voice models typically use two large neural networks to synthesize speech from text inputs.

The first network, called an acoustic model, takes text as input and generates a mel-spectrogram, an image that represents acoustic parameters such as pitch and energy of speech over time. The second network, called a vocoder, takes the mel-spectrogram as an input and produces an audio waveform of speech as the final output.
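End to end, this two-stage pipeline can be expressed in a few lines of Python. The stand-ins below are purely illustrative toys (one "frame" per character, two samples per frame); real acoustic models and vocoders are large neural networks:

```python
def synthesize(text, acoustic_model, vocoder):
    """Text -> mel-spectrogram -> waveform, per the two-stage design above."""
    mel = acoustic_model(text)   # (frames x mel-bins) acoustic features
    return vocoder(mel)          # 1-D sequence of audio samples

# Hypothetical toy stand-ins, just to make the data flow concrete:
def toy_acoustic_model(text):
    # one three-value "frame" of acoustic features per character
    return [[float(ord(c))] * 3 for c in text]

def toy_vocoder(mel):
    # pretend each mel frame expands to two audio samples
    return [sample for frame in mel for sample in frame[:2]]
```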

While we have released a universal architecture for the vocoder that supports a wide variety of speaking styles, we still use different acoustic-model architectures to generate that variety.

The most common architecture for the acoustic model relies on an attention mechanism, which learns which elements of the input text are most relevant to the current time slice — or “frame” — of the output spectrogram. With this mechanism, the network implicitly models the speech duration of different chunks of the text.

The same model also uses the technique of “teacher forcing”, where the previously generated frame of speech is used as an input to produce the next one. While such an architecture can generate expressive and natural-sounding speech, it is prone to intelligibility errors such as mumbled, dropped, or repeated words, and errors easily compound from one frame to the next.

More-modern architectures address these issues by explicitly modeling the durations of text chunks and generating speech frames in parallel, which is more efficient and stable than relying on previously generated frames as input. To align the text and speech sequences, the model simply “upsamples”, or repeats its encoding of a chunk of text (its representation vector), for as many speech frames as are dictated by the external duration model.
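The upsampling step is simple to state in code. A minimal sketch, assuming each text chunk already has an encoding vector and a predicted duration in frames:

```python
def upsample(encodings, durations):
    """Repeat each chunk's encoding for as many frames as its predicted
    duration dictates, aligning the short text sequence with the longer
    speech-frame sequence."""
    frames = []
    for encoding, duration in zip(encodings, durations):
        frames.extend([encoding] * duration)
    return frames

# e.g. upsample(['A', 'B'], [2, 3]) -> ['A', 'A', 'B', 'B', 'B']
```

Because every output frame depends only on the encodings and durations, all frames can then be decoded in parallel, which is the efficiency win described above.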

The continuous evolution of complex TTS models employed in different contexts — such as Alexa Q&A, storytelling for children, and smart-home automation — creates the need for a scalable framework that can handle them all.

The challenge of integration

To integrate acoustic models into production, we need a component that takes an input text utterance and returns a mel-spectrogram. The first difficulty is that speech is usually generated in sequential chunks, rather than being synthesized all at once. To minimize latency, our framework should return data as quickly as possible. A naive solution that wraps the whole model in code and processes everything with a single function call will be unacceptably slow.

Another challenge is adjusting the model to work with various hardware accelerators. As an example, to benefit from the high-performance AWS Inferentia runtime, we need to ensure that all tensors have fixed sizes (set once, during the model compilation phase). This means that we need to

  • add logic that splits longer utterances into smaller chunks that fit specific input sizes (depending on the model);
  • add logic that ensures proper padding; and
  • decide which functionality should be handled directly by the model and which by the integration layer.
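The first two items can be sketched as a single helper. This is a hedged illustration, not the framework's actual code; the chunk size and padding value are placeholders for whatever the compiled model requires:

```python
def chunk_and_pad(sequence, chunk_size, pad_value=0):
    """Split a sequence into fixed-size chunks for a fixed-shape accelerator,
    padding the final chunk and reporting valid lengths so the integration
    layer can strip the padding from the model's output."""
    chunks, valid_lengths = [], []
    for start in range(0, len(sequence), chunk_size):
        chunk = list(sequence[start:start + chunk_size])
        valid_lengths.append(len(chunk))          # how much of the chunk is real
        chunk += [pad_value] * (chunk_size - len(chunk))
        chunks.append(chunk)
    return chunks, valid_lengths
```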

When we want to run the same model on general-purpose GPUs, we probably don’t need these changes, and it would be useful if the framework could switch between contexts easily. We therefore decouple the TTS model into a set of more specialized integration components that, between them, handle all the required logic.

Integration components

The integration layer encapsulates the model in a set of components capable of transforming an input utterance into a mel-spectrogram. As the model usually operates in two stages — preprocessing data and generating data on demand — it is convenient to use two types of components:

  • a SequenceBlock, which takes an input tensor and returns a transformed tensor (the input can be the result of applying another SequenceBlock), and
  • a StreamableBlock, which generates data (e.g., frames) on demand. As an input it takes the results of another StreamableBlock (blocks can form a pipeline) and/or data generated by a SequenceBlock.
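In Python, the two abstractions might look roughly like the following. This is an illustrative sketch, not the framework's actual API:

```python
from abc import ABC, abstractmethod
from typing import Any, Iterator

class SequenceBlock(ABC):
    """Transforms a whole input tensor in one call (preprocessing stage)."""
    @abstractmethod
    def process(self, inputs: Any) -> Any: ...

class StreamableBlock(ABC):
    """Generates data (e.g., spectrogram frames) on demand."""
    @abstractmethod
    def stream(self, inputs: Any) -> Iterator[Any]: ...

# A trivial concrete SequenceBlock, just to show the contract:
class Scale(SequenceBlock):
    def __init__(self, factor):
        self.factor = factor

    def process(self, inputs):
        return [x * self.factor for x in inputs]
```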

These simple abstractions offer great flexibility in creating variants of acoustic models. Here’s an example:

[Figure: An example of an acoustic model built using the SequenceBlock and StreamableBlock abstractions.]

The acoustic model consists of

  • two encoders (SequenceBlocks), which convert the input text embedding into one-dimensional representation tensors, one for encoded text and one for predicted durations;
  • an upsampler (a StreamableBlock, which takes the encoders’ results as an input), which creates intermediary, speech-length sequences, according to the data returned by the encoders; and 
  • a decoder (a StreamableBlock), which generates mel-spectrogram frames.

The whole model is encapsulated in a specialized StreamableBlock called StreamablePipeline, which contains exactly one SequenceBlock and one StreamableBlock:

  • a SequenceBlockContainer, a specialized SequenceBlock that consists of a set of nested SequenceBlocks capable of running the neural-network encoders; and
  • a StreamableStack, a specialized StreamableBlock that decodes outputs from the upsampler and creates mel-spectrogram frames.
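Putting the pieces together, a StreamablePipeline can be sketched as a block that runs its SequenceBlock once and then streams frames from its StreamableBlock, so synthesis can begin before the whole utterance is decoded. The stub blocks below are hypothetical stand-ins for the real neural-network wrappers:

```python
class EncoderStub:
    """Stands in for the nested encoder stack (a SequenceBlock)."""
    def process(self, inputs):
        return [x * 2 for x in inputs]

class DecoderStub:
    """Stands in for the upsampler-plus-decoder stack (a StreamableBlock)."""
    def stream(self, inputs):
        for x in inputs:
            yield x + 1   # emit one "frame" at a time

class StreamablePipeline:
    """Exactly one SequenceBlock plus one StreamableBlock: preprocess once,
    then yield output frames on demand."""
    def __init__(self, sequence_block, streamable_block):
        self.sequence_block = sequence_block
        self.streamable_block = streamable_block

    def stream(self, inputs):
        encoded = self.sequence_block.process(inputs)
        yield from self.streamable_block.stream(encoded)
```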

The integration framework ensures that all components are run in the correct order, and depending on the specific versions of components, it allows for the use of various hardware accelerators.

The integration layer

The acoustic model is provided as a plugin, which we call an “addon”. An addon consists of exported neural networks, each represented as a named set of symbols and parameters (encoder, decoder, etc.), along with configuration data. One of the configuration attributes, called “stack”, specifies how integration components should be connected together to build a working integration layer. Here’s the code for the stack attribute that describes the architecture above:

"stack": [
	{"type": "StreamablePipeline",
	 "sequence_block": {"type": "Encoders"},
	 "streamable_block":
		{"type": "StreamableStack",
		 "stack": [
			{"type": "Upsampler"},
			{"type": "Decoder"}
		]}
	}
]

This definition will create an integration layer consisting of a StreamablePipeline with

  • all encoders specified in the addon (the framework will automatically create all required components);
  • an upsampler, which generates intermediate data for the decoder; and
  • the decoder specified in the addon, which generates the final frames.

The JSON format makes such changes easy. For example, we can create a specialized component that runs all sequence blocks in parallel on a specific hardware accelerator and name it CustomizedEncoders. In that case, the only change to the configuration is replacing the name “Encoders” with “CustomizedEncoders”.

Running experiments using components with additional diagnostic or digital-signal-processing effects is also trivial. A new component’s only requirement is to extend one of the two generic abstractions; beyond that, there are no restrictions. Even replacing one StreamableBlock with a whole nested sequence-to-sequence stack is perfectly fine under the framework design.
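One plausible way to implement this configuration-driven assembly is a small registry-plus-factory, sketched below with stub components. The names mirror the example configuration, but the mechanism itself is our illustration, not the framework's actual code; note that swapping "Encoders" for a registered "CustomizedEncoders" is then a one-string change:

```python
REGISTRY = {}

def register(cls):
    """Make a component class available to the factory under its class name."""
    REGISTRY[cls.__name__] = cls
    return cls

def build(spec):
    """Recursively turn a 'stack' entry into a wired-up component."""
    cls = REGISTRY[spec["type"]]
    kwargs = {}
    for key, value in spec.items():
        if key == "type":
            continue
        if isinstance(value, dict):
            kwargs[key] = build(value)            # nested component
        elif isinstance(value, list):
            kwargs[key] = [build(v) for v in value]  # nested stack
        else:
            kwargs[key] = value                   # plain parameter
    return cls(**kwargs)

# Stub components matching the example configuration:
@register
class Encoders: pass

@register
class Upsampler: pass

@register
class Decoder: pass

@register
class StreamableStack:
    def __init__(self, stack):
        self.stack = stack

@register
class StreamablePipeline:
    def __init__(self, sequence_block, streamable_block):
        self.sequence_block = sequence_block
        self.streamable_block = streamable_block
```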

This framework is already used in production. It is a vital pillar of our recent, successful integration of state-of-the-art attention-free TTS architectures alongside legacy models.

Acknowledgments: Daniel Korzekwa
