Differential privacy for deep learning at GPT scale

Two papers from Amazon Web Services AI present algorithms that alleviate the intensive hyperparameter search and fine-tuning required by privacy-preserving deep learning at very large scales.

Deep-learning models are data driven, and that data may contain sensitive information that requires privacy protection. Differential privacy (DP) is a formal framework for ensuring the privacy of individuals in datasets, so that adversarial users can’t learn whether any given data sample was or was not used to train a machine learning model. Employing DP in deep learning typically means capping the contribution that each training sample makes to the model’s parameter adjustments, an approach known as per-sample gradient clipping.
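To make the clipping step concrete, here is a minimal sketch of per-sample gradient clipping (a simplified illustration, not the exact routine of any particular DP library): each sample's gradient is rescaled so that its L2 norm never exceeds a threshold C, capping that sample's influence on the update.

```python
import torch

def clip_per_sample_gradients(per_sample_grads: torch.Tensor,
                              clip_threshold: float = 1.0) -> torch.Tensor:
    """Cap each sample's contribution by rescaling its gradient to have
    L2 norm at most `clip_threshold` (the threshold C). Illustrative sketch:
    `per_sample_grads` has shape (batch_size, num_params), one flattened
    gradient per training sample."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)              # ||g_i|| for each sample
    scale = torch.clamp(clip_threshold / (norms + 1e-12), max=1.0)  # min(1, C / ||g_i||)
    return per_sample_grads * scale
```

In DP training, the clipped per-sample gradients are then summed and noised before the model update, which is what yields the formal privacy guarantee.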

Per-sample gradient clipping, however, makes deep learning much more time consuming than it would be otherwise, impeding the development of large DP models — for instance, at the level of the GPT language models, with billions of parameters.

In 2022, in workshops at the International Conference on Machine Learning (ICML) and the Conference on Neural Information Processing Systems (NeurIPS), we presented two papers that advance DP for deep learning. In the first paper, "Automatic clipping: Differentially private deep learning made easier and stronger", we described an automatic method that improves the efficiency of tuning the gradient-clipping process by roughly an order of magnitude (in practice, about five to ten times).

Typically, gradient clipping involves an expensive ablation study to select a clipping threshold above which a data sample’s contribution to the model’s parameter adjustments is cut off, or clipped. Our approach instead uses normalization, completely eliminating the tuning of the clipping threshold.


In the second paper, "Differentially private bias-term only fine-tuning of foundation models" (DP-BiTFiT), which won the Best Paper Award at the NeurIPS Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML), we introduced a parameter-efficient approach to fine-tuning during DP learning.

Generally speaking, a neural network has two types of parameters: the weights, which constitute more than 99% of the parameters and capture most of the information from the training data, and the biases, which shift (offset) the model output. We show that privately fine-tuning the bias terms alone is enough to achieve high accuracy under DP constraints; it also makes DP learning 2 to 30 times faster, reduces memory use by 50% to 88%, and incurs only about 1/1,000 of the communication cost in distributed training.

Together, these two techniques have made fine-tuning a DP-GPT-2 as efficient as fine-tuning a standard GPT-2 in a parameter-efficient manner. We have made both methods publicly available, to encourage researchers to experiment with and benefit from faster DP deep learning.

Automatic clipping

The deep-learning process includes a tunable hyperparameter called the learning rate, which determines how much the model weights can change during each update. The per-sample gradient clipping threshold plays a similar role, but it caps the size of each individual sample's gradient. The existing approach to DP training requires an ablation study that tunes the clipping threshold and the learning rate simultaneously. If K different clipping thresholds are evaluated (five is typical in practice), the model's hyperparameter-tuning stage becomes K times more expensive.
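As a hypothetical illustration of that cost (the candidate values below are made up for the example), jointly searching over clipping thresholds and learning rates multiplies the number of full DP training runs by the number of thresholds considered:

```python
# Hypothetical ablation grid; the candidate values are illustrative only.
clip_thresholds = [0.1, 0.5, 1.0, 5.0, 10.0]   # K = 5 candidate thresholds
learning_rates = [1e-4, 3e-4, 1e-3, 3e-3]      # must be tuned in any case

joint_grid = [(c, lr) for c in clip_thresholds for lr in learning_rates]
print(len(joint_grid))      # 20 full DP training runs with threshold-based clipping
print(len(learning_rates))  # 4 runs once the clipping threshold is eliminated
```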

Two sample ablation studies, considering different learning rates and per-sample gradient clipping thresholds. Left: GPT-2's BLEU scores on the E2E dataset, trained with DP-AdamW. Right: classification accuracy of ResNet18 on the ImageNet dataset, trained with DP-SGD. The different patterns of results illustrate the need to tune both hyperparameters simultaneously.

To solve this problem, we introduced automatic clipping, which replaces per-sample gradient clipping with per-sample gradient normalization. This (1) eliminates the clipping threshold, (2) enlarges the small gradients that would otherwise go unclipped, and (3) comes with provable performance guarantees. Equipped with automatic clipping, the DP stochastic-gradient-descent optimization algorithm (DP-SGD) has the same asymptotic convergence rate as standard (non-DP) SGD, even in the nonconvex setting in which deep-learning optimization typically operates.
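The sketch below shows one DP-SGD update with normalization in place of threshold-based clipping. It is a minimal illustration under stated assumptions: per-sample gradients are computed with a naive Python loop (production implementations vectorize this step, for example with Opacus or functorch), the normalization takes the form g / (||g|| + γ) for a small stability constant γ, and the function name and arguments are our own.

```python
import torch

def dp_sgd_step_with_automatic_clipping(model, loss_fn, xs, ys,
                                         lr=1e-3, noise_multiplier=1.0, gamma=0.01):
    """One DP-SGD update in which each per-sample gradient is normalized,
    g -> g / (||g|| + gamma), rather than clipped to a tuned threshold.
    Hypothetical sketch; gradients are computed sample by sample for clarity."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))  # per-sample gradient norm
        scale = 1.0 / (norm + gamma)                           # normalize; no threshold to tune
        for s, g in zip(summed, grads):
            s.add_(g, alpha=float(scale))

    with torch.no_grad():
        for p, s in zip(params, summed):
            # Every normalized per-sample gradient has norm below 1, so the
            # Gaussian noise is calibrated to an L2 sensitivity of 1.
            noisy = s + noise_multiplier * torch.randn_like(s)
            p.add_(noisy, alpha=-lr / len(xs))
```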

Our experiments across several computer vision and language tasks show that automatic clipping can achieve state-of-the-art DP accuracy, on par with per-sample clipping methods, without sacrificing the training efficiency or the privacy guarantee.

Performance of GPT-2 on the E2E dataset, measured by BLEU and ROUGE scores under DP and non-DP settings (higher is better). We compare full fine-tuning with automatic clipping to state-of-the-art fine-tuning methods such as LoRA. Additional performance measures are included in the full paper. The best two GPT-2 models for each row are marked in bold.

DP-BiTFiT

The first advantage of differentially private bias-term fine-tuning (DP-BiTFiT) is that it's model-agnostic: we can apply it to any model by simply freezing all the weights during fine-tuning and updating only the bias terms. In sharp contrast, prior alternatives such as low-rank adaptation (LoRA) and adapters apply exclusively to transformers and require extra tuning of the adaptation ranks.
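A minimal sketch of that freezing step is below; the helper name and the rule for identifying biases (parameter names ending in "bias") are our own illustrative choices, and any DP optimizer can then be run over the remaining trainable parameters.

```python
import torch

def enable_bias_only_fine_tuning(model: torch.nn.Module) -> None:
    """Freeze all weights and leave only bias terms trainable, as in bias-term
    fine-tuning. Illustrative sketch: biases are identified by parameter name."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"Trainable (bias) parameters: {trainable:,} of {total:,} "
          f"({100.0 * trainable / total:.2f}%)")
```

For a GPT-2-scale transformer, the reported trainable fraction is on the order of 0.1%, consistent with the parameter counts discussed below.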

The second advantage of DP-BiTFiT is its parameter efficiency. In a study that spanned a range of foundation models, we found that the bias terms constitute only around 0.1% of model parameters. This means that DP-BiTFiT provides large efficiency improvements in terms of training time, memory footprint, and communication cost in the distributed-learning setting.

Parameter efficiency of DP-BiTFiT. The last two columns count the total number of parameters and the percentage of trainable parameters. Note that DP-BiTFiT optimizes only about 0.1% of the total parameters.

The third advantage of DP-BiTFiT is its computational efficiency relative to other parameter-efficient approaches, such as DP-LoRA. Even when both approaches fine-tune roughly the same number of parameters, DP-BiTFiT still enjoys a large memory saving, because computing bias gradients does not require storing and accessing the expensive activation tensors that are unavoidable when computing weight gradients. We verify this rigorously through the chain rule of back-propagation, where DP-BiTFiT has a much simpler computation graph because the activation tensors are never used.
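To see why, consider a single linear layer with output s = aW + b, where a is the layer's input activation. The standard back-propagation identities (the textbook chain rule, not notation taken from the paper) make the asymmetry explicit: the weight gradient needs the stored activation, while the bias gradient needs only the gradient at the layer's output.

```latex
\frac{\partial \mathcal{L}}{\partial W} = a^{\top}\,\frac{\partial \mathcal{L}}{\partial s}
  \quad \text{(requires storing the activation } a\text{)}, \qquad
\frac{\partial \mathcal{L}}{\partial b} = \mathbf{1}^{\top}\,\frac{\partial \mathcal{L}}{\partial s}
  \quad \text{(requires only the output gradient)}.
```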

The same back-propagation computation graph (black), shown with the modifications made by three different DP procedures (red). Because DP-BiTFiT (lower right) updates only the model biases, it incurs far less computational overhead than prior approaches (left: GhostClip; top right: Opacus) and consequently has a simpler computation graph.

Empirically, we have observed a substantial boost in efficiency when switching from DP full fine-tuning to DP-BiTFiT, while maintaining state-of-the-art accuracy on large foundation models such as GPT-2-large, ResNet 152, RoBERTa-large, and Vision Transformers. On GPT-2, for instance, DP-BiTFiT delivers a four- to tenfold speedup and a two- to tenfold memory saving relative to DP full fine-tuning.

Maximum throughput and batch size for different fine-tuning methods. Left: the E2E dataset with GPT-2-small/medium/large. Right: 50,000 images of 512x512 pixels with ResNet 50/101/152. The speed and memory savings offered by DP-BiTFiT are substantial, especially on large models.

Acknowledgements: We would like to acknowledge our coauthors on the papers for their contributions: Sheng Zha and George Karypis. We thank Huzefa Rangwala for reviewing this post.
