Scalable framework lets multiple text-to-speech models coexist

Thanks to a set of simple abstractions, models with different architectures can be integrated and optimized for particular hardware accelerators.

Voice agents like Alexa often include a variety of speech synthesizers, which differ in attributes such as expressivity, personality, language, and speaking style. The machine learning models underlying these applications can have completely different architectures, and integrating those architectures into a single voice service can be a time-consuming and challenging process.

To make that process easier and faster, Amazon’s Text-to-Speech group has developed a universal model integration framework that allows us to customize production voice models in a quick and scalable way.

Model variety

State-of-the-art voice models typically use two large neural networks to synthesize speech from text inputs.

The first network, called an acoustic model, takes text as input and generates a mel-spectrogram, an image that represents acoustic parameters such as pitch and energy of speech over time. The second network, called a vocoder, takes the mel-spectrogram as an input and produces an audio waveform of speech as the final output.
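
In code, this two-stage design might look roughly like the following sketch, with stand-in functions instead of real neural networks and assumed sizes (80 mel bins, 256 audio samples per frame):

import numpy as np

N_MELS = 80        # mel frequency bins per spectrogram frame (assumed)
HOP_LENGTH = 256   # audio samples produced per frame (assumed)

def acoustic_model(text: str) -> np.ndarray:
    """Stand-in for the first network: text -> mel-spectrogram of shape (frames, N_MELS)."""
    n_frames = 10 * len(text)              # placeholder duration estimate
    return np.zeros((n_frames, N_MELS))

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Stand-in for the second network: mel-spectrogram -> audio waveform."""
    return np.zeros(mel.shape[0] * HOP_LENGTH)

audio = vocoder(acoustic_model("Hello, world"))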

While we have released a universal architecture for the vocoder model that supports a wide variety of speaking styles, we still use different acoustic-model architectures to generate this diversity of speaking styles.

The most common architecture for the acoustic model relies on an attention mechanism, which learns which elements of the input text are most relevant to the current time slice — or “frame” — of the output spectrogram. With this mechanism, the network implicitly models the speech duration of different chunks of the text.
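
A toy version of that attention step (the shapes and dot-product scoring below are illustrative assumptions, not the production model) looks like this:

import numpy as np

# For one output frame, score each text-chunk encoding, normalize the scores
# with a softmax, and take a weighted average of the encodings.
text_encodings = np.random.rand(5, 16)   # 5 text chunks, 16-dim encodings
decoder_state = np.random.rand(16)       # decoder state for the current frame

scores = text_encodings @ decoder_state             # relevance of each chunk
weights = np.exp(scores) / np.exp(scores).sum()     # softmax over chunks
context = weights @ text_encodings                  # attended summary used for this frame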

The same model also uses the technique of “teacher forcing”, in which the previously generated frame of speech is used as an input for producing the next one. While such an architecture can generate expressive and natural-sounding speech, it is prone to intelligibility errors such as mumbling and dropped or repeated words, and errors easily compound from one frame to the next.

More-modern architectures address these issues by explicitly modeling the durations of text chunks and generating speech frames in parallel, which is more efficient and stable than relying on previously generated frames as input. To align the text and speech sequences, the model simply “upsamples”, or repeats its encoding of a chunk of text (its representation vector), for as many speech frames as are dictated by the external duration model.
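
The upsampling step itself is straightforward; here is a sketch with made-up encodings and durations:

import numpy as np

# Each text-chunk encoding is repeated for as many frames as the external
# duration model predicts, aligning the text sequence with the speech sequence.
encodings = np.array([[0.1, 0.2],    # encoding of chunk 1
                      [0.3, 0.4],    # encoding of chunk 2
                      [0.5, 0.6]])   # encoding of chunk 3
durations = np.array([2, 1, 3])      # predicted frames per chunk

upsampled = np.repeat(encodings, durations, axis=0)
print(upsampled.shape)               # (6, 2): one row per output speech frame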

The continuous evolution of complex TTS models employed in different contexts — such as Alexa Q&A, storytelling for children, and smart-home automation — creates the need for a scalable framework that can handle them all.

The challenge of integration

To integrate acoustic models into production, we need a component that takes an input text utterance and returns a mel-spectrogram. The first difficulty is that speech is usually generated in sequential chunks rather than being synthesized all at once. To minimize latency, our framework should return data as soon as it is available. A naive solution that wraps the whole model in a single function call and synthesizes the entire utterance before returning anything would be unacceptably slow.
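
One way to express that requirement, sketched below with a hypothetical synthesize_stream generator rather than our actual API, is to yield small chunks of frames so that downstream components can start consuming them immediately:

from typing import Iterator
import numpy as np

def synthesize_stream(text: str, chunk_frames: int = 32) -> Iterator[np.ndarray]:
    """Yield mel-spectrogram frames in small chunks instead of all at once."""
    total_frames = 10 * len(text)                  # placeholder duration estimate
    for start in range(0, total_frames, chunk_frames):
        n = min(chunk_frames, total_frames - start)
        yield np.zeros((n, 80))                    # one chunk of mel frames

for chunk in synthesize_stream("Hello"):
    pass  # hand each chunk to the vocoder as soon as it arrives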

Another challenge is adjusting the model to work with various hardware accelerators. As an example, to benefit from the high-performance AWS Inferentia runtime, we need to ensure that all tensors have fixed sizes (set once, during the model compilation phase). This means that we need to

  • add logic that splits longer utterances into smaller chunks that fit specific input sizes (depending on the model);
  • add logic that ensures proper padding; and
  • decide which functionality should be handled directly by the model and which by the integration layer.
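
A rough sketch of the first two items follows; the chunk length and zero padding are illustrative assumptions, not production settings:

import numpy as np

MAX_LEN = 128   # fixed input size chosen at model compilation time (assumed)

def split_and_pad(sequence: np.ndarray) -> list:
    """Split a long sequence into fixed-size chunks and zero-pad the last one."""
    chunks = []
    for start in range(0, len(sequence), MAX_LEN):
        chunk = sequence[start:start + MAX_LEN]
        if len(chunk) < MAX_LEN:
            pad = np.zeros((MAX_LEN - len(chunk),) + chunk.shape[1:])
            chunk = np.concatenate([chunk, pad], axis=0)
        chunks.append(chunk)
    return chunks

parts = split_and_pad(np.ones((300, 64)))
print(len(parts), parts[-1].shape)   # 3 chunks, each of shape (128, 64)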

When we want to run the same model on general-purpose GPUs, we probably don’t need these changes, and it would be useful if the framework could switch easily between the two contexts. We therefore decouple the TTS model into a set of more specialized integration components that can handle all the required logic.

Integration components

The integration layer encapsulates the model in a set of components capable of transforming an input utterance into a mel-spectrogram. As the model usually operates in two stages — preprocessing data and generating data on demand — it is convenient to use two types of components:

  • a SequenceBlock, which takes an input tensor and returns a transformed tensor (the input can be the result of applying another SequenceBlock), and
  • a StreamableBlock, which generates data (e.g., frames) on demand. As an input it takes the results of another StreamableBlock (blocks can form a pipeline) and/or data generated by a SequenceBlock.
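
In code, the two abstractions might look roughly like this (the method names and signatures are assumptions; the real interfaces are internal):

from abc import ABC, abstractmethod
from typing import Iterator
import numpy as np

class SequenceBlock(ABC):
    """Transforms a whole input tensor in one pass (e.g., an encoder)."""
    @abstractmethod
    def process(self, inputs: np.ndarray) -> np.ndarray: ...

class StreamableBlock(ABC):
    """Generates data (e.g., mel-spectrogram frames) on demand."""
    @abstractmethod
    def stream(self, upstream: Iterator[np.ndarray]) -> Iterator[np.ndarray]: ...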

These simple abstractions offer great flexibility in creating variants of acoustic models. Here’s an example:

[Figure: An example of an acoustic model built using the SequenceBlock and StreamableBlock abstractions.]

The acoustic model consists of

  • two encoders (SequenceBlocks), which convert the input text embedding into one-dimensional representation tensors, one for encoded text and one for predicted durations;
  • an upsampler (a StreamableBlock that takes the encoders’ results as input), which creates intermediate, speech-length sequences according to the predicted durations; and
  • a decoder (a StreamableBlock), which generates mel-spectrogram frames.

The whole model is encapsulated in a specialized StreamableBlock called StreamablePipeline, which contains exactly one SequenceBlock and one StreamableBlock:

  • the SequenceBlockContainer is a specialized SequenceBlock that consists of a set of nested SequenceBlocks capable of running neural-network encoders;
  • the StreamableStack is a specialized StreamableBlock that decodes outputs from the upsampler and creates mel-spectrogram frames.

The integration framework ensures that all components are run in the correct order, and depending on the specific versions of components, it allows for the use of various hardware accelerators.
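
To make that ordering concrete, here is a purely illustrative data flow through such a stack; the shapes, function names, and wiring are assumptions for exposition only:

import numpy as np

W_enc = np.random.rand(16, 32)    # toy text-encoder weights
W_dec = np.random.rand(32, 80)    # toy decoder projection to 80 mel bins

def encoders(text_embedding):
    """SequenceBlocks: encode the text and predict a duration per chunk."""
    encoded = text_embedding @ W_enc
    durations = np.full(len(text_embedding), 2)   # pretend each chunk spans 2 frames
    return encoded, durations

def upsampler(encoded, durations):
    """StreamableBlock: yield one intermediate vector per speech frame."""
    for row in np.repeat(encoded, durations, axis=0):
        yield row

def decoder(frames_in):
    """StreamableBlock: turn intermediate vectors into mel-spectrogram frames."""
    for frame in frames_in:
        yield frame @ W_dec

encoded, durations = encoders(np.random.rand(5, 16))        # run once, up front
mel_frames = list(decoder(upsampler(encoded, durations)))   # generated on demand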

The integration layer

The acoustic model is provided as a plugin, which we call an “addon”. An addon consists of exported neural networks, each represented as a named set of symbols and parameters (encoder, decoder, etc.), along with configuration data. One of the configuration attributes, called “stack”, specifies how integration components should be connected together to build a working integration layer. Here’s the code for the stack attribute that describes the architecture above:

'stack' = [
    {'type' : 'StreamablePipeline',
     'sequence_block' : {'type' : 'Encoders'},
     'streamable_block' :
        {'type' : 'StreamableStack',
         'stack' : [
            {'type' : 'Upsampler'},
            {'type' : 'Decoder'}
         ]}
    }
]

This definition will create an integration layer consisting of a StreamablePipeline with

  • all encoders specified in the addon (the framework will automatically create all required components);
  • an upsampler, which generates intermediate data for the decoder; and
  • the decoder specified in the addon, which generates the final frames.

The JSON format allows us to make changes easily. For example, we can create a specialized component that runs all sequence blocks in parallel on a specific hardware accelerator and name it CustomizedEncoders. In that case, the only change to the configuration is to replace the name “Encoders” with “CustomizedEncoders”.

Running experiments with components that add diagnostics or digital-signal-processing effects is just as straightforward. A new component’s only requirement is that it extend one of the two generic abstractions; beyond that, there are no restrictions. Even replacing a single StreamableBlock with a whole nested sequence-to-sequence stack is perfectly fine under the framework’s design.
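
The snippet below sketches one way such a configuration could be turned into components: a registry maps each “type” name to a class, and nested definitions are built recursively. The registry, class bodies, and build function are assumptions for illustration, not the production implementation:

from typing import Any, Callable, Dict

REGISTRY: Dict[str, Callable[..., Any]] = {}

def register(name):
    """Associate a configuration 'type' name with a component class."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

def build(spec):
    """Recursively instantiate components from a stack-style definition."""
    if isinstance(spec, list):
        return [build(item) for item in spec]
    if isinstance(spec, dict):
        kwargs = {key: build(value) for key, value in spec.items() if key != 'type'}
        return REGISTRY[spec['type']](**kwargs)
    return spec

@register('Encoders')
class Encoders: pass

@register('Upsampler')
class Upsampler: pass

@register('Decoder')
class Decoder: pass

@register('StreamableStack')
class StreamableStack:
    def __init__(self, stack): self.stack = stack

@register('StreamablePipeline')
class StreamablePipeline:
    def __init__(self, sequence_block, streamable_block):
        self.sequence_block, self.streamable_block = sequence_block, streamable_block

layer = build([{'type': 'StreamablePipeline',
                'sequence_block': {'type': 'Encoders'},
                'streamable_block': {'type': 'StreamableStack',
                                     'stack': [{'type': 'Upsampler'}, {'type': 'Decoder'}]}}])

Under this sketch, supporting CustomizedEncoders would amount to registering another class under that name.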

This framework is already used in production. It was a vital pillar of our recent, successful integration of state-of-the-art, attention-free TTS architectures alongside legacy models.

Acknowledgments: Daniel Korzekwa
