Domain data trumps teacher knowledge for distilling NLU models

On natural-language-understanding tasks, student models trained only on task-specific data outperform those trained on a mix that includes generic data.

Knowledge distillation is a popular technique for compressing large machine learning models into manageable sizes, to make them suitable for low-latency applications such as voice assistants. During distillation, a lightweight model (referred to as a student) is trained to mimic a source model (referred to as the teacher) over a specific data set (the transfer set).
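As a rough illustration of the setup (not the exact recipe used in the paper), the student is typically trained to match the teacher's output distribution over the transfer set, for example with a temperature-scaled KL-divergence loss. The model interfaces and names below are placeholders:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target loss: KL divergence between temperature-scaled teacher
    and student distributions (a standard formulation; the paper's exact
    objective may differ)."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)

def distill_step(student, teacher, batch, optimizer):
    """One distillation step over a batch drawn from the transfer set.
    Assumes Hugging Face-style models that return .logits."""
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits   # teacher is frozen
    student_logits = student(**batch).logits
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```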

The choice of the transfer set is crucial to producing high-quality students, but how to make that choice is far from obvious. In natural-language-understanding (NLU) applications, teacher models are usually pretrained on generic corpora, which can differ from the task-specific corpora used for fine-tuning. This raises a natural question: Should the student be distilled over the generic corpora, so as to learn from high-quality teacher predictions, or over the task-specific corpora that align better with fine-tuning?

In a paper we presented at the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), we explored this question and showed that models distilled using only task-specific data perform better on their target tasks than those distilled on a mix of task-specific and generic data. In other words, distilling over target domain data provides better performance than banking solely on teacher knowledge.

We confirmed, however, that even distillation on mixed data is beneficial, with students outperforming similar-sized models trained from scratch. We also investigated distillation after the teacher model had been pretrained but before fine-tuning, so that only the student model is fine-tuned. We found that the more costly strategy of adapting the teacher to the transfer set before distillation produces the best students.

Distillation diversity

In our experiments, we distilled a set of multilingual students from a large multilingual teacher model, using generic and task-specific data mixed in three different ratios:

  • Ratio 1: generic-only (baseline)
  • Ratio 2: 7:3 generic to task-specific (mimicking a low-resource setting)
  • Ratio 3: task-specific-only
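As a simple illustration of how such transfer sets could be assembled (the sampling strategy here is a guess, not the paper's procedure), one might subsample the generic corpus so that the final mix hits the desired ratio:

```python
import random

def build_transfer_set(generic, task_specific, generic_ratio):
    """Mix generic and task-specific examples at a target ratio.
    generic_ratio=1.0 reproduces Ratio 1, 0.7 approximates Ratio 2,
    and 0.0 gives Ratio 3 (task-specific only). Illustrative only."""
    if generic_ratio >= 1.0:
        return list(generic)
    if generic_ratio <= 0.0:
        return list(task_specific)
    # Size the generic sample so the final mix matches the target ratio.
    n_task = len(task_specific)
    n_generic = int(n_task * generic_ratio / (1.0 - generic_ratio))
    mixed = random.sample(generic, min(n_generic, len(generic))) + list(task_specific)
    random.shuffle(mixed)
    return mixed
```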

So what are generic and task-specific data? Generic data is usually publicly available, non-annotated data unrelated to any specific task. Model training on unannotated data typically involves self-supervised learning; in our case, that means masking out words of a text and training the model to supply them (masked language modeling).
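For instance, a minimal sketch of the masking step in masked language modeling might look like the following. The 15% masking rate is the conventional BERT-style choice, not necessarily the one used for our teacher, and the sketch omits refinements such as leaving some selected tokens unchanged or avoiding special tokens:

```python
import torch

def mask_tokens(input_ids, mask_token_id, mask_prob=0.15, ignore_index=-100):
    """Randomly replace tokens with [MASK] and build labels so the model
    is trained to predict only the masked positions."""
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape) < mask_prob
    labels[~mask] = ignore_index          # loss ignores unmasked positions
    masked_inputs = input_ids.clone()
    masked_inputs[mask] = mask_token_id   # replace selected tokens with [MASK]
    return masked_inputs, labels
```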

Task-specific data is data that has been annotated to indicate the proper performance of a task. In our case, we explored two downstream tasks, domain classification (DC) and joint intent classification and named-entity recognition (ICNER), and our task-specific data is annotated accordingly.

We evaluated our models on two types of test sets — test and tail_test — and four languages of interest: German, French, Italian, and Spanish. The set test comprises the full test split, while tail_test is the subset of data points within test that occur three times or fewer. The tail_test set lets us measure how well our models generalize to data they have rarely seen during training.
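Concretely, the tail split can be derived from the full test split by counting how often each example occurs and keeping only the rare ones. A minimal sketch, assuming frequency is counted at the utterance level:

```python
from collections import Counter

def split_tail(test_utterances, max_frequency=3):
    """Return the subset of the test split whose utterances occur
    max_frequency times or fewer (the 'tail_test' set)."""
    counts = Counter(test_utterances)
    return [u for u in test_utterances if counts[u] <= max_frequency]
```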

A schematic of the two baseline and four experimental models that we investigated and how they were evaluated.

All our experimental and baseline models had the same number of parameters. The generic-distilled baseline was created by distilling a student using only generic data (Ratio 1). The directly pretrained baseline was pretrained from scratch using the generic data and fine-tuned on the task-specific data.

We created four distilled student encoders, two of which were distilled directly using the Ratio 2 and Ratio 3 datasets. The remaining two were created in the same way, except that the teacher was first fine-tuned on the task-specific datasets for a million steps each before distillation. This allowed us to benchmark the value of adapting the teacher to the target task.

When evaluating performance on the DC and ICNER tasks, we added either a DC or an ICNER decoder to each encoder. For DC, we measured improvement as the change in F1 score (which factors in both false positives and false negatives) relative to baseline; for ICNER, we measured it as the change in semantic error rate (SemER) relative to baseline.
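In other words, the reported numbers are relative changes against the corresponding baseline metric. A small helper makes the sign conventions explicit (the metric values below are made up for illustration): for F1, higher is better, so a positive change is an improvement; for SemER, lower is better, so a negative change is an improvement.

```python
def relative_change(candidate, baseline):
    """Percentage change of a metric relative to the baseline value."""
    return 100.0 * (candidate - baseline) / baseline

# DC: improvement shows up as a positive relative change in F1.
dc_improvement = relative_change(candidate=0.92, baseline=0.90)       # ~ +2.2%
# ICNER: improvement shows up as a negative relative change in SemER.
icner_improvement = relative_change(candidate=0.118, baseline=0.125)  # ~ -5.6%
```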

The percentage improvements for each distilled encoder and each language against the generic-distilled baseline. Positive is better for change in F1 score.
The results for the joint ICNER task. In this case, negative is better.

On the DC task, our results show improvements across the board when task-specific data is included in the transfer sets, with the greatest improvement coming from using only task-specific data. We see similar results in the case of ICNER, where improvements are greater for encoders distilled using only task-specific data.

Acknowledgements: We would like to thank our coauthors on the paper for their contributions to this work: Lizhen Tan, Turan Gojayev, Pan Wei, and Gokmen Oz.
