Using large language models (LLMs) to synthesize training data

Prompt engineering enables researchers to generate customized training examples for lightweight “student” models.

The machine learning models that power conversational agents like Alexa are typically trained on labeled data, but data collection and labeling are expensive and complex, creating a bottleneck in the development process.

Large language models (LLMs) such as the 20-billion-parameter Alexa Teacher Model (AlexaTM 20B) might look like a way to break that bottleneck, since they excel in few-shot settings — i.e., when only a handful of labeled examples are available. But their size and computational cost make them unsuitable for runtime systems, which must serve high traffic volumes at low latency.

To enable models that are lightweight enough for runtime use, even when real training data is scarce, we propose teaching via data (TvD), in which we use an LLM-based “teacher” model to generate synthetic training data for a specific task, then use the generated data to fine-tune a smaller “student” model.


This blog post covers two of our recent papers on TvD. LINGUIST, published at the 2022 International Conference on Computational Linguistics (COLING), generates training data for joint intent classification and slot tagging (IC+ST). CLASP, published at the 2022 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL), generates training data for semantic parsing. Both tasks are core components of conversational AI.

We show that LINGUIST data generation improves performance on popular multilingual IC+ST benchmarks by 2 to 4 points absolute, while CLASP data generation improves multilingual semantic parsing by 5 to 6 points absolute.

The AlexaTM 20B model used in CLASP is now available on AWS JumpStart.

LINGUIST

Conversational-AI agents use intent classification and slot tagging (IC+ST) to understand the intent of a speaker’s request and identify the entities relevant to fulfilling that request. For example, when an agent is asked to “play ‘Wake Me Up’ by Avicii”, it might identify the intent as PlayMusic, with the slot value “wake me up” assigned to the slot Song and “Avicii” assigned to Artist. (Slot tagging in this context is also known as named-entity recognition, or NER.)

[Image: An example of intent classification and slot tagging in natural-language understanding.]
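The IC+ST output for an utterance like this is commonly represented as one intent label for the whole utterance plus a per-token BIO slot tag sequence. A minimal sketch (the label names follow the example above, not any Alexa-internal schema):

```python
# IC+ST output for "play wake me up by avicii":
# one intent label for the utterance, one BIO tag per token.
utterance = "play wake me up by avicii"
intent = "PlayMusic"
bio_tags = ["O", "B-Song", "I-Song", "I-Song", "O", "B-Artist"]

def extract_slots(tokens, tags):
    """Collect (slot_name, value) pairs from a BIO tag sequence."""
    slots, current, words = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new slot span begins
            if current:
                slots.append((current, " ".join(words)))
            current, words = tag[2:], [token]
        elif tag.startswith("I-") and current:
            words.append(token)           # continue the current span
        else:                             # "O": close any open span
            if current:
                slots.append((current, " ".join(words)))
            current, words = None, []
    if current:
        slots.append((current, " ".join(words)))
    return slots

print(intent, extract_slots(utterance.split(), bio_tags))
# PlayMusic [('Song', 'wake me up'), ('Artist', 'avicii')]
```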

With real-world agents, the set of intents and slots grows over time as developers add support for new use cases. Furthermore, multilingual agents such as Alexa seek to maintain parity across languages when new intents and slots are developed, creating an additional bottleneck during development.

Suppose, for example, that we’re enabling a multilingual agent to understand the new intent GetWeather. To begin with, the intent may have only two associated utterances, in English and no other languages, annotated with the slots City and DayOfWeek. These two utterances alone are not enough to build a strong multilingual IC+ST model, so we need to obtain more training data.

[Image: Sample starter utterances for the GetWeather intent.]

A simple baseline approach to expanding this dataset to a new language is to translate the text. Here is an example using AlexaTM 20B with an in-context one-shot prompt. The text in the yellow box is the input to the model, and we can sample as many outputs from the model as we want, shown in the blue boxes.

[Image: Alternate translations sampled from AlexaTM 20B.]
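A one-shot prompt of this kind can be assembled programmatically. The sketch below shows the general shape; the exact wording used in the figure may differ, and `sample_from_lm` is a hypothetical stand-in for an actual AlexaTM 20B decoding call:

```python
def build_one_shot_translation_prompt(example_src, example_tgt, query,
                                      src_lang="English", tgt_lang="French"):
    """Assemble a one-shot in-context translation prompt.

    The single (source, target) pair demonstrates the task; the query
    utterance is left incomplete for the model to continue.
    """
    return (
        f"{src_lang}: {example_src}\n{tgt_lang}: {example_tgt}\n"
        f"{src_lang}: {query}\n{tgt_lang}:"
    )

prompt = build_one_shot_translation_prompt(
    "what is the weather in boston",
    "quel temps fait-il à boston",
    "will it rain in seattle on friday",
)
# Multiple alternate translations can then be drawn by sampling several
# completions, e.g. (hypothetical API):
#   outputs = [sample_from_lm(prompt, temperature=0.8) for _ in range(5)]
print(prompt)
```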

To get more examples in the original English, we can either translate these French outputs back to English (back-translation) or directly use a paraphrasing model, such as, again, AlexaTM 20B with an in-context prompt:

[Image: Using AlexaTM 20B as a paraphrase generator.]

While these approaches go a long way, they have two key limitations: (1) the outputs don’t have the slot tags labeled, so we need to use a separate model (e.g., one that does word alignment) to guess which output words are City and which DayOfWeek, a process that introduces noise; and (2) we cannot control the outputs — say, by restricting them to specific slot types and values.


To address these two problems, we propose LINGUIST: language model instruction tuning to generate annotated utterances for intent classification and slot tagging. To control outputs, we design an instruction prompt whose syntax resembles that of web markup languages like HTML/XML, which the language model is likely to have encountered during pretraining.

We also introduce an output format with brackets and numbers that enables the model to produce synthetic data with the slots already tagged. In the output “[1 boston ]”, for instance, the numeral “1” indicates the slot tag City. We then fine-tune the teacher model on prompts and targets from existing data — either from other intents or from a separate public dataset like MASSIVE.

When developing a new intent or slot with only a few examples, we can now instruct the LINGUIST model to generate the data we are looking for. For instance, we can generate data for the GetWeather intent that always uses “Dallas” as the City, tagged with the number 1. For the DayOfWeek slot, tagged as number 2, we can use the special wildcard instruction “*”, telling the model to fill in an appropriate value, and it will produce novel values like “Saturday” and “Thursday”, which did not appear in the original examples.

[Image: By designing prompts that exploit regularities in the syntax of web markup languages like HTML/XML, we can fine-tune AlexaTM sequence-to-sequence models to generate labeled data with constrained slot values.]
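Because the bracket-and-number format is deterministic, the model's output can be decoded back into a plain utterance plus slot annotations without any alignment model. A minimal sketch (the surface format in the paper may differ slightly):

```python
import re

def decode_linguist_output(text, slot_names):
    """Map numbered brackets like '[1 boston ]' back to slot labels.

    slot_names maps each number to its slot tag, e.g. {1: "City"}.
    Returns the plain utterance and the recovered (slot, value) pairs.
    """
    slots = []
    def repl(match):
        num, value = int(match.group(1)), match.group(2).strip()
        slots.append((slot_names[num], value))
        return value                      # strip the markup, keep the value
    plain = re.sub(r"\[(\d+)\s+(.*?)\s*\]", repl, text)
    return plain, slots

plain, slots = decode_linguist_output(
    "what is the weather in [1 boston ] on [2 saturday ]",
    {1: "City", 2: "DayOfWeek"},
)
print(plain)   # what is the weather in boston on saturday
print(slots)   # [('City', 'boston'), ('DayOfWeek', 'saturday')]
```

This is the key advantage over translation or paraphrasing baselines: the slot tags come out of the generator already attached, so no noisy post-hoc labeling step is needed.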

We also built a mechanism to control the output language: by simply changing the prompt to indicate "French" instead of "English", we get outputs in French.

[Image: Simply changing the word "English" to "French" in the prompt changes the model's output language.]

Finally, LINGUIST can generate annotated training data even when we have zero examples to start with, by attending to natural-language label names like “GetWeather”, “City”, and “DayOfWeek”. In this case, there is less information on the input side, so the output contains more noise. However, the generated data is still useful for building a model for new intents and slots.

[Image: LINGUIST can produce coherent outputs even with zero examples.]

In the paper, we show that LINGUIST outperforms state-of-the-art baselines like translation and paraphrasing by 2-4 points absolute on the public datasets SNIPS and mATIS++ across seven languages.

CLASP

While intent classification and slot tagging cover many interactions with conversational agents, they are limited in scope. For more complex queries, we instead apply semantic parsing (SP). Here is an example from the PIZZA dataset: “large pizza with extra cheese and pineapple hold the ham and two sprites please”. We need SP to recover relevant information like the value of the implicit Number slot, the scope of the modifiers Quantity and Not, and the association between multiple intents and slots.

[Image: An example of the labeling in the PIZZA dataset.]
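The nested structure such a parse must capture can be illustrated with a bracketed tree for the utterance above. The labels below are illustrative and may not match the PIZZA dataset's exact schema:

```python
# Illustrative nested parse for "large pizza with extra cheese and
# pineapple hold the ham and two sprites please".
# Labels are illustrative, not the dataset's exact schema.
parse = {
    "ORDER": [
        {"PIZZAORDER": {
            "NUMBER": "1",                # implicit: no number is spoken
            "SIZE": "large",
            "TOPPING": [
                {"value": "cheese", "QUANTITY": "extra"},
                {"value": "pineapple"},
                {"value": "ham", "NOT": True},   # "hold the ham"
            ],
        }},
        {"DRINKORDER": {"NUMBER": "two", "DRINKTYPE": "sprites"}},
    ]
}

def negated_toppings(order):
    """List toppings excluded by a Not modifier."""
    out = []
    for item in order["ORDER"]:
        for topping in item.get("PIZZAORDER", {}).get("TOPPING", []):
            if topping.get("NOT"):
                out.append(topping["value"])
    return out

print(negated_toppings(parse))  # ['ham']
```

Flat IC+ST labels cannot express this kind of scoping; a tagger would have no way to say that "ham" is negated while "cheese" and "pineapple" are not, or that "two" counts sprites rather than pizzas.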

SP is even more difficult to annotate than IC+ST, so the training datasets tend to be smaller, especially in languages other than English; we don’t have a MASSIVE dataset for semantic parsing. For example, the PIZZA dataset has only 348 real examples to train on (and in our experiments, we also explore the lower-resource setting of 16 examples).


Again adopting the teaching-via-data (TvD) approach, we propose CLASP: few-shot cross-lingual data augmentation for semantic parsing. CLASP consists of four strategies to prompt LLMs like AlexaTM 20B to generate SP training data.

The first two strategies, CLASP-RS (replace slots) and CLASP-TS (translate slots), modify an existing parse by replacing the slots with other values, either from a catalogue of options or via translation to a new language. Then the model generates text to match the new parse.

[Image: An example of how CLASP-RS uses prompt engineering to convert parses with substitute slot values into natural language.]
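The replace-slots step can be sketched as a simple catalogue substitution over an existing parse; the LLM is then prompted to write text matching the new parse (that call is not shown here, and the flat parse representation below is a simplification for illustration):

```python
import random

def replace_slots(parse, catalogue, seed=0):
    """Swap each slot value in a flat parse for a catalogue entry.

    `parse` is a list of (slot, value) pairs; `catalogue` maps each
    slot name to alternative values. Slots absent from the catalogue
    keep their original value. The LLM is then asked to generate an
    utterance that matches the modified parse.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [(slot, rng.choice(catalogue.get(slot, [value])))
            for slot, value in parse]

new_parse = replace_slots(
    [("City", "boston"), ("DayOfWeek", "monday")],
    {"City": ["dallas", "paris"], "DayOfWeek": ["saturday", "thursday"]},
)
print(new_parse)
```

CLASP-TS works the same way, except that the substitute values come from translating the original slot values into the target language rather than from a catalogue.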

The other two strategies, CLASP-GB (generate both) and CLASP-TB (translate both), give the model more flexibility, instructing it to generate both the parse and the text, in either the same language or a new language.

[Image: CLASP-TB uses prompt engineering to generate both parses and texts in new languages.]

AlexaTM 20B can perform these generation tasks quite reliably from only a few in-context examples, which is remarkable given that it was pretrained only on public text from the web and is not specialized for semantic parsing.

For our experiments on data generation for semantic parsing, the baselines we selected include grammar sampling (drawback: unrealistic examples) and translation with alignment (drawback: alignment is challenging and introduces noise).

[Image: CLASP results on the mTOP dataset.]

Using English-language examples from the PIZZA dataset, in the low-resource setting with only 16 real examples, we improve exact-match accuracy by 5 points absolute, topping 85%. On the popular mTOP dataset, we improve over machine translation by 6 points absolute across four new languages, by leveraging only one annotated example from each language.

At Amazon Alexa AI, we continue to explore TvD for tasks such as question answering and dialogue and for additional languages. We have just scratched the surface of what’s possible and are optimistic about the future of TvD. We look forward to continuing to invent methods to improve our models and make our customers’ lives better and easier every day.
