Amazon Text-to-Speech group's research at ICASSP 2022

Papers focus on speech conversion and data augmentation — and sometimes both at once.

The automatic conversion of text to speech is crucial to Alexa: it’s how Alexa communicates with customers. The models developed by the Amazon Text-to-Speech group are also available to Amazon Web Services (AWS) customers through Polly, the AWS text-to-speech service.

The Text-to-Speech (TTS) group has four papers at this year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP), all of which deal with voice conversion (converting speech in one voice into another voice while preserving prosodic features), data augmentation, or both.


In “Voice Filter: Few-shot text-to-speech speaker adaptation using voice conversion as a post-processing module”, the Amazon TTS group addresses the problem of few-shot speaker adaptation, or learning a new synthetic voice from just a handful of training examples. The paper reformulates the problem as learning a voice conversion model that’s applied to the output of a high-quality TTS model, a conceptual shift from the existing few-shot-TTS paradigm.

In “Cross-speaker style transfer for text-to-speech using data augmentation”, the team shows how to build a TTS model capable of expressive speech, even when the only available training data for the target voice consists of neutral speech. The idea is to first train a voice conversion model, which converts samples of expressive speech in other voices into the target voice, and then use the converted speech as additional training data for the TTS model.

In “Distribution augmentation for low-resource expressive text-to-speech”, the TTS group expands the range of texts used to train a TTS model by recombining excerpts from existing examples to produce new examples. The trick is to maintain the syntactic coherence of the synthetic examples, so that the TTS model won’t waste resources learning improbable sequences of phonemes. (This is the one data augmentation paper that doesn’t rely on voice conversion.)

Figure: In this example of data augmentation through recombination of existing training examples, the verb phrase “shook her head”, as identified by a syntactic parse, is substituted for the verb phrase “lied” in the sentence “he never lied”. The original acoustic signals (bottom row) are cut and spliced at the corresponding points. From "Distribution augmentation for low-resource expressive text-to-speech".

Finally, in “Text-free non-parallel many-to-many voice conversion using normalising flows”, the team adapts the concept of normalizing flows, which have been used widely for TTS, to the problem of voice conversion. Like most deep-learning models, normalizing flows learn functions that produce vector representations of input data. The difference is that the functions are invertible, so the inputs can be recovered from the representations. The team hypothesized that preserving more information from the input data would yield better voice conversion, and early experiments bear that hypothesis out.

Voice filter

The idea behind “Voice Filter: Few-shot text-to-speech speaker adaptation using voice conversion as a post-processing module” is that for few-shot learning, it’s easier to take the output of an existing, high-quality TTS model — a voice spectrogram — and adapt that to a new target voice than it is to adapt the model itself.

The key to the approach is that the voice filter, which converts the TTS model’s output to a new voice, is trained on synthetic data created by the TTS model itself.
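
At inference time, the resulting pipeline might look like the sketch below. All of the names here (tts_model, voice_filter, vocoder, and so on) are hypothetical placeholders; the point is only that the conversion step sits downstream of an unmodified, high-quality TTS model.

```python
# Minimal sketch of voice-filter inference as a post-processing step.
# All classes and functions are hypothetical placeholders; the paper does not
# publish code, and the real system's interfaces will differ.

import torch


def synthesize_in_target_voice(text: str,
                               tts_model,            # pretrained, high-quality TTS model
                               voice_filter,         # few-shot-adapted conversion module
                               vocoder,              # spectrogram-to-waveform model
                               target_speaker_emb: torch.Tensor) -> torch.Tensor:
    """Generate speech in a new voice without retraining the TTS model itself."""
    # 1. The frozen TTS model produces a mel-spectrogram in its own "source" voice.
    source_mel = tts_model.synthesize(text)                    # (frames, n_mels)

    # 2. The voice filter converts that spectrogram toward the target speaker,
    #    conditioned on an embedding learned from a handful of recordings.
    target_mel = voice_filter(source_mel, target_speaker_emb)

    # 3. A neural vocoder turns the converted spectrogram into a waveform.
    return vocoder(target_mel)
```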

Figure: The training procedure for the voice filter.

The TTS model is duration controllable, meaning that the input text is encoded to indicate the duration that each phoneme should have in the output speech. This enables the researchers to create two parallel corpora of training data. One corpus consists of real training examples from 120 different speakers. The other corpus is synthetic speech generated by the TTS model, but with phoneme durations that match those of the real multispeaker examples.
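
Duration control is what makes the two corpora frame-for-frame parallel. A rough sketch of how such a dataset might be assembled appears below; extract_phoneme_durations and the durations argument are illustrative assumptions, not the paper’s actual interface.

```python
# Hypothetical sketch of assembling frame-aligned parallel corpora for training
# the voice filter; the real data pipeline described in the paper may differ.

def build_parallel_corpus(multispeaker_corpus, tts_model, extract_phoneme_durations):
    """Pair each real recording with a duration-matched synthetic rendering."""
    parallel_pairs = []
    for utterance in multispeaker_corpus:          # recordings from ~120 speakers
        # Phoneme durations measured from the real recording (e.g., by forced alignment).
        durations = extract_phoneme_durations(utterance.audio, utterance.phonemes)

        # Synthesize the same phoneme sequence with the duration-controllable TTS
        # model, forcing each phoneme to last exactly as long as in the recording.
        synthetic_mel = tts_model.synthesize(utterance.phonemes, durations=durations)

        # Matching durations make the two spectrograms frame-for-frame parallel.
        parallel_pairs.append((synthetic_mel, utterance.mel, utterance.speaker_id))
    return parallel_pairs
```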

The voice filter is trained on the parallel corpora, and then, for few-shot learning, the researchers simply fine-tune it on a new speaker. In experiments, the researchers found that this approach produced speech whose quality was comparable to that produced by conventional models trained on 30 times as much data.

Cross-speaker style transfer

The voice conversion model that the researchers use in “Cross-speaker style transfer for text-to-speech using data augmentation” is based on the CopyCat model previously reported on the Amazon Science blog. The expressive data, once converted into the target voice, is added to the neutral data to produce the dataset used to train the TTS model.

The TTS model takes two inputs: a text sequence and a style vector. During training, the text sequence passes to the TTS model, and the spectrogram of the target speech sample passes to a reference encoder, which produces the style embedding. At inference time, of course, there is no input spectrogram. But the researchers show that they can control the style of the TTS model’s output by feeding it a precomputed style embedding.
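
Roughly, the conditioning works as sketched below; the names (reference_encoder, tts_model) and the use of a mean embedding over expressive reference clips are illustrative assumptions rather than the paper’s exact recipe.

```python
# Sketch of the two conditioning modes, with illustrative names and shapes only.

import torch


def training_step(tts_model, reference_encoder, text_sequence, target_mel):
    # During training, the style embedding is extracted from the target spectrogram.
    style_emb = reference_encoder(target_mel)
    predicted_mel = tts_model(text_sequence, style_emb)
    return torch.nn.functional.l1_loss(predicted_mel, target_mel)


def synthesize_expressive(tts_model, reference_encoder, text_sequence, expressive_mels):
    # At inference, there is no target spectrogram, so a precomputed embedding is
    # used: here, the mean reference-encoder output over expressive reference clips.
    style_emb = torch.stack([reference_encoder(mel) for mel in expressive_mels]).mean(dim=0)
    return tts_model(text_sequence, style_emb)
```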

Figure: The voice conversion model (left) and text-to-speech model (right) used for cross-speaker style transfer. The reference encoders are used only during training. From "Cross-speaker style transfer for text-to-speech using data augmentation".

The researchers assessed the model through human evaluations based on the MUSHRA (multiple stimuli with hidden reference and anchor) methodology. Relative to a benchmark model, the new model reduced the gap in perceived style similarity between synthesized and real speech by an average of 58% across 14 different speakers.

Distribution augmentation

“Distribution augmentation for low-resource expressive text-to-speech” considers the case in which training data for a new voice is scarce. The idea is to recombine the texts of existing examples to produce new examples and to splice together excerpts of the corresponding speech samples to produce new audio. This does not increase the acoustic diversity of the training targets, but it does increase the linguistic diversity of the training inputs.

To ensure that the synthetic training examples do not become too syntactically incoherent, the researchers construct parse trees for the input texts and then swap syntactically equivalent branches across trees (see figure, above). Swapping the corresponding sections of the acoustic signal requires good alignment between text and signal, which is accomplished by existing forced-alignment models.
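
In code, the recombination might look roughly like the sketch below; the parse, alignment, and audio attributes are hypothetical stand-ins for whatever representations the real pipeline uses.

```python
# Simplified illustration of distribution augmentation: swap syntactically
# equivalent constituents between two parsed training examples and splice the
# corresponding audio at forced-alignment boundaries. All objects and methods
# (parse.find_constituent, alignment.frames, etc.) are hypothetical.

import numpy as np


def recombine(example_a, example_b, label="VP"):
    """Splice example_b's constituent of the given label into example_a."""
    # Find a constituent with the same syntactic label in each parse tree.
    span_a = example_a.parse.find_constituent(label)     # e.g., "lied"
    span_b = example_b.parse.find_constituent(label)     # e.g., "shook her head"
    if span_a is None or span_b is None:
        return None

    # New text: example_a's sentence with its constituent replaced by example_b's.
    new_text = (example_a.text[:span_a.start]
                + span_b.text
                + example_a.text[span_a.end:])

    # Forced alignment gives the audio frames covered by each constituent, so the
    # signals can be cut and spliced at the corresponding points.
    a_start, a_end = example_a.alignment.frames(span_a)
    b_start, b_end = example_b.alignment.frames(span_b)
    new_audio = np.concatenate([example_a.audio[:a_start],
                                example_b.audio[b_start:b_end],
                                example_a.audio[a_end:]])
    return new_text, new_audio
```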

During training, to ensure that the resulting TTS model doesn’t become overbiased toward the synthetic examples, the researchers also include a special input token to indicate points at which two existing samples have been fused together. The expectation is that the model will learn to privilege phonemic sequences internal to the real samples over phonemic sequences that cross boundaries between fused samples. At inference time, the value of the token is simply set to 0 across all inputs.
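
One simple way to realize such a token (an assumption for illustration, not necessarily the paper’s implementation) is an extra per-phoneme input feature that is 1 at splice points during training and 0 everywhere at inference:

```python
# Hypothetical encoding of the augmentation tag as an extra per-token input
# feature. For readability this example uses words; the paper's model operates
# on phoneme sequences.

# Training example created by splicing "shook her head" into "he never lied":
tokens           = ["he", "never", "shook", "her", "head"]
augmentation_tag = [0,     0,       1,       0,     0]     # 1 marks the splice boundary

# At inference time the tag is all zeros, so the model treats every input
# as a single contiguous recording.
inference_tag = [0] * len(tokens)
```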

Figure: The “augmentation tag” marks the boundary between acoustic signals taken from two different training examples, to prevent overbiasing the TTS model toward synthetic data. From "Distribution augmentation for low-resource expressive text-to-speech".

The quality of the model’s speech output was assessed by 60 human evaluators, who compared it to the output of a baseline model on five different datasets. Across the board, the new model received better scores than the baseline.

Normalizing flows

A normalizing flow learns to map input data into a representational space in such a way that the resulting representations closely approximate some prior distribution. The word “flow” indicates that the mapping can be the result of passing the data through a series of invertible transformations; enforcing the prior distribution is what does the normalizing.
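
To make the mechanics concrete, the toy PyTorch snippet below shows a single invertible, element-wise affine step trained by maximizing the change-of-variables log-likelihood under a standard Gaussian prior. It illustrates the general technique only; the model in the paper uses far more expressive invertible layers and additional conditioning inputs.

```python
# Toy normalizing flow: one invertible affine transform plus the
# change-of-variables log-likelihood under a standard Gaussian prior.

import math

import torch
import torch.nn as nn


class AffineFlowStep(nn.Module):
    """A single invertible, element-wise affine transform: z = x * exp(s) + t."""

    def __init__(self, dim: int):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        z = x * torch.exp(self.log_scale) + self.shift
        log_det = self.log_scale.sum()        # log |det(dz/dx)| for this transform
        return z, log_det

    def inverse(self, z):
        # Because the transform is invertible, the input is exactly recoverable.
        return (z - self.shift) * torch.exp(-self.log_scale)


def log_likelihood(x, flow):
    """Change of variables: log p(x) = log N(z; 0, I) + log |det(dz/dx)|."""
    z, log_det = flow(x)
    log_prior = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=-1)
    return log_prior + log_det
```

Training maximizes log_likelihood over the data, which is what pulls the learned representations toward the Gaussian prior.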

In “Text-free non-parallel many-to-many voice conversion using normalising flows”, Amazon TTS researchers consider a flow whose inputs are a source spectrogram, a phoneme embedding, a speaker identity embedding, the fundamental frequency of the acoustic signal, and a flag denoting whether a frame of input audio is voiced or unvoiced. The flow maps the inputs to a distribution of phoneme frequencies in a particular application domain.

Typically, a normalizing flow will learn both the distribution and the mapping from the training data. But here, the researchers pretrain the flow on a standard TTS task, for which training data is plentiful, to learn the distribution in advance.

Because the flow is invertible, a vector in the representational space can be mapped back to the source spectrogram, provided that the other model inputs (phoneme embedding, speaker ID, and so on) are available. To use normalizing flows to perform voice conversion, the researchers simply substitute one speaker for another during this reverse mapping.
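
Under that setup, voice conversion amounts to a round trip through the flow with the speaker conditioning swapped. The sketch below assumes a conditional flow interface (forward and inverse methods taking a cond argument); those names are placeholders, not the paper’s code.

```python
# Sketch of voice conversion with a conditional normalizing flow: encode the
# source spectrogram under the source speaker's conditioning, then invert under
# the target speaker's conditioning. All names are illustrative placeholders.

def convert_voice(flow, source_mel, phoneme_emb, f0, voiced_flag,
                  source_speaker_emb, target_speaker_emb):
    # Forward pass: map the source spectrogram into the latent (prior) space,
    # conditioned on everything that describes the source utterance.
    z, _ = flow.forward(source_mel,
                        cond=[phoneme_emb, f0, voiced_flag, source_speaker_emb])

    # Inverse pass: recover a spectrogram from the same latent vector, but with
    # the speaker conditioning swapped, yielding the utterance in the target voice.
    converted_mel = flow.inverse(z,
                                 cond=[phoneme_emb, f0, voiced_flag, target_speaker_emb])
    return converted_mel
```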

Figure: An overview of the TTS researchers' use of normalizing flows to do voice conversion. From "Text-free non-parallel many-to-many voice conversion using normalising flows".

The researchers examine two different experimental settings: one in which the voice conversion model takes both text sequences and spectrograms as inputs and one in which it takes spectrograms only. In the second case, the pretrained normalizing-flow model significantly outperformed the benchmarks. A normalizing-flow model that learned the phoneme distribution directly from the training data didn’t fare as well, indicating the importance of the pretraining step.
