Large language models, the highest-profile machine learning (ML) models used today, are trained on huge corpora of public data. But many ML models are trained on smaller, proprietary datasets, which can be highly sensitive and should be kept private. Examples include a hospital fine-tuning a diagnostic model on patient radiology scans, a bank training a fraud detector on transaction histories, or a pharmaceutical company building a drug interaction model from clinical trial records. In each case, the training data itself is the asset that must be protected, but a well-constructed attack on these models can potentially extract information about their underlying training data.
Such attacks are possible even when the attacker is restricted to submitting adversarial inference queries to a model trained by a single data owner. Alternatively, when multiple data owners collaborate to train a model through federated learning (FL), in which a central server produces a global model by aggregating model updates generated from siloed datasets (instead of collocating the raw data), there exist attacks in which an adversarial server can reconstruct training data from the model updates. Consider three hospitals collaborating to train a shared cancer-screening model without pooling patient records. If the aggregation server can reconstruct one hospital's training images, then the privacy promise of federated learning is broken, and so is each hospital's compliance with patient consent agreements. Finally, an adversarial FL participant could even potentially reconstruct an honest participant's private training data from the global model.
These risks are not hypothetical. A 2023 paper from Google DeepMind demonstrated that GPT-3.5-turbo could be prompted to regurgitate verbatim training data, including personally identifiable information. Smaller, domain-specific models trained on concentrated, sensitive datasets are even more vulnerable. As organizations increasingly train models on sensitive financial records, patient health data, and proprietary business intelligence, the attack surface grows proportionally. A successful attack against a healthcare model could reveal whether a specific patient's records were used in training, a violation of regulations such as the US Health Insurance Portability and Accountability Act (HIPAA) and the EU's General Data Protection Regulation (GDPR). An attack against a federated-learning system could reconstruct raw training samples that should never have left their source. For any organization training on private data, understanding and mitigating these threats is no longer optional; it is necessary for responsible AI deployment.
In this post, we walk through three escalating attack scenarios: membership inference against a single model, data reconstruction from federated-learning gradients, and training-data extraction from a shared global model. We show how differential privacy and secure multiparty computation defeat each one.
An attack on model inference
Anyone with query access to a model can potentially determine whether a specific record was used to train it, an attack known as membership inference. Imagine that a hospital deploys a diagnostic model as an API for referring physicians. A malicious actor could probe the API to determine whether a particular patient's records were included in the training data. This would confirm that the patient was treated at the hospital and reveal details about their medical history.
In a 2023 paper at the Conference on Neural Information Processing Systems (NeurIPS), Amazon Web Services researchers showed how this works in practice. A trained model tends to produce higher-confidence predictions for inputs it was trained on, a form of overfitting the attacker can exploit. The attacker first generates a dataset that approximates the distribution of the model's training data, then records the model's confidence scores on those samples. Using these scores as labels, the attacker trains a proxy model that learns a confidence-score cutoff separating training data from non-training data.
Given a candidate record, the attacker evaluates the proxy model to obtain a cutoff, then queries the target model. If the target model's confidence score exceeds the cutoff, the record was likely in the training set. The authors demonstrated this against a ResNet-50 model trained on ImageNet-1k: 97% of records their attack flagged as training data were indeed training data.
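To make the mechanics concrete, here is a bare-bones, confidence-threshold version of the attack on synthetic data. It is only a sketch, not the paper's exact method: the random-forest models, the data distribution, and the threshold search are illustrative stand-ins, and the only thing the attacker needs from the target model is its confidence scores (here, scikit-learn's predict_proba output).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a private dataset (the attacker never sees it).
X_private = rng.normal(size=(200, 20))
y_private = (X_private[:, 0] + 0.5 * X_private[:, 1] > 0).astype(int)

# The victim trains an (overfit) target model and exposes only predict_proba().
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_private, y_private)

# The attacker draws "shadow" data from a similar distribution, splits it into
# a member half (used to train a proxy model) and a non-member half.
X_shadow = rng.normal(size=(400, 20))
y_shadow = (X_shadow[:, 0] + 0.5 * X_shadow[:, 1] > 0).astype(int)
X_in, y_in, X_out = X_shadow[:200], y_shadow[:200], X_shadow[200:]
proxy = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_in, y_in)

# Confidence (the maximum class probability) tends to be higher on training members.
conf_in = proxy.predict_proba(X_in).max(axis=1)
conf_out = proxy.predict_proba(X_out).max(axis=1)

# Learn a cutoff that best separates member from non-member confidences.
candidates = np.linspace(0.5, 1.0, 101)
cutoff = max(candidates, key=lambda t: (conf_in >= t).mean() + (conf_out < t).mean())

# Apply the cutoff to the target model's confidence on a candidate record.
candidate = X_private[0:1]                      # actually a training member
is_member = target.predict_proba(candidate).max() >= cutoff
print(f"cutoff={cutoff:.2f}, flagged as member: {is_member}")
```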
Mitigation through differential privacy
We’ll show how to mitigate such membership inference attacks with differential privacy (DP), a mathematical framework for computing aggregate statistics (e.g., an average) while bounding how much any single input can influence the result. The core idea: if we can randomize the function so that adding or removing one record from the dataset barely changes the distribution of the function output, an attacker cannot confidently determine whether that record was included.
Formally, a randomized function is differentially private if, for any single record added to or removed from the input dataset, the probability of any given output changes by at most a factor of e^ε, where e is the base of the natural logarithm and ε is the privacy budget. A smaller ε means tighter privacy but more noise in the computation, and vice versa. While NIST guidance suggests that ε < 1 will generally keep privacy risk low, many real-world deployments operate between 1 and 10, with situation-dependent privacy outcomes. Empirical studies indicate that ε as high as 3 can still provide meaningful protection against attacks like membership inference, though our understanding of the effective guarantees of DP against such attacks continues to evolve.
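To make the definition concrete, here is a toy sketch of the classic Laplace mechanism, which privatizes a simple aggregate (a mean) rather than a model. The clipping range, dataset, and ε value are made up for illustration, and the sketch treats the number of records as public.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng):
    """Differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds how much changing one
    record can move the mean; that bound (the sensitivity) calibrates
    the noise scale relative to the privacy budget epsilon.
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

rng = np.random.default_rng(0)
ages = rng.integers(20, 90, size=1000)                  # toy "private" dataset
print(dp_mean(ages, lower=20, upper=90, epsilon=1.0, rng=rng))
```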
DP defeats membership inference because the attack relies on a gap between the model's confidence on training data and on unseen data. DP narrows that gap by ensuring the model would have learned nearly the same parameters whether or not any particular record was included in its training data.
How can this approach be applied to ML? Neural networks are trained using stochastic gradient descent (SGD), in which the difference between the model’s output on a training sample and the target output for the sample is propagated back through the model, and the model parameters are adjusted to reduce the difference; the adjustment corresponding to the sample is called a gradient. In practice, the model parameters are typically adjusted according to a batch gradient — the average of sample-specific gradients for a batch of samples.
In a landmark 2016 paper, Google researchers introduced DP-SGD, which clips each sample-specific gradient to bound its influence and then adds calibrated Gaussian noise to each batch gradient during training. We implemented DP-SGD and trained a neural network on the EMNIST handwritten-letter dataset. The DP model achieved 78% test accuracy at ε = 1.5 and 82% at ε = 3.0, compared to 90% without DP.
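The sketch below shows what one DP-SGD step looks like for a toy logistic-regression model in numpy. It captures the two essential ingredients, per-sample gradient clipping and Gaussian noise, but the hyperparameters are arbitrary; in practice, the noise multiplier is chosen with a privacy accountant to meet a target ε, which dedicated libraries such as Opacus and TensorFlow Privacy automate.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for logistic regression (a minimal numpy sketch).

    Each sample's gradient is clipped to L2 norm clip_norm, the clipped
    gradients are summed, Gaussian noise with standard deviation
    noise_mult * clip_norm is added, and the result is averaged over the batch.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))                    # predicted probability
        g = (p - y) * x                                     # per-sample gradient
        clipped.append(g / max(1.0, np.linalg.norm(g) / clip_norm))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X_batch)

# Toy usage: a few noisy steps on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 10))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(10)
for start in range(0, 256, 32):
    w = dp_sgd_step(w, X[start:start + 32], y[start:start + 32], rng=rng)
```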
DP addresses attacks on a single model, but what happens when multiple organizations collaborate to train one? Federated learning introduces a different attack surface, one that targets the training process itself.
Data leakage from federated learning
Federated learning is a method of decentralized ML in which a global model is trained on datasets distributed across multiple parties, without direct sharing of the datasets. Each party trains an initial model on a local training batch, obtaining a local gradient. The local gradients are then sent to a central server, which averages them into a global gradient. The parties then produce copies of the global model by updating their local models with the global gradient.
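The round structure is easy to see in code. The sketch below simulates a "one gradient per round" variant of FL (sometimes called FedSGD) with three simulated parties and a toy logistic-regression model; the data and learning rate are made up. Note that in this plain version the server sees every party's gradient in the clear, which is exactly the weakness we turn to next.

```python
import numpy as np

def local_gradient(w, X, y):
    """Average logistic-regression gradient over one party's local batch."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Three parties with siloed synthetic data; only gradients ever leave a silo.
parties = []
for seed in range(3):
    r = np.random.default_rng(seed)
    X = r.normal(size=(64, 10))
    parties.append((X, (X[:, 0] > 0).astype(float)))

w_global = np.zeros(10)
for _ in range(5):                                   # five federated rounds
    local_grads = [local_gradient(w_global, X, y) for X, y in parties]
    global_grad = np.mean(local_grads, axis=0)       # server-side averaging
    w_global = w_global - 0.5 * global_grad          # every party applies the same update
```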
However, in a 2019 NeurIPS paper, a team of MIT researchers demonstrated a surprising result: the parties' local gradients leak information about the training samples from which they're computed, enabling model inversion attacks in which the server can reconstruct the parties' training samples. Even in scenarios in which the server is not viewed as adversarial, this attack demonstrates that the gradients leak the parties' training data, defeating the privacy goals of FL.
This attack relies on the observation that a gradient directly contains data about the sample from which it is computed. Consequently, a sample can generally be reconstructed from its gradient, and two semantically distinct training batches are unlikely to admit the same batch gradient. Therefore, the attacker frames the problem of reconstructing a party's batch samples from its local gradient as an optimization problem: find the training batch whose gradient is minimally distant from the target gradient. The attacker can then approximately compute the solution (the training batch) by applying SGD. In our experiments on the EMNIST dataset, the attack recovered single-sample batches exactly and three samples from a batch of size seven.
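The sketch below reproduces the shape of that optimization in PyTorch on a toy linear model: the victim computes a gradient on one private sample, and the attacker, who knows the model but not the sample, optimizes a dummy input and soft label until the gradient they induce matches the observed one. The model size, optimizer settings, and iteration count are illustrative rather than the paper's.

```python
import torch

torch.manual_seed(0)

# A tiny linear classifier stands in for the shared FL model.
model = torch.nn.Linear(16, 4)
loss_fn = torch.nn.CrossEntropyLoss()

# The victim computes a gradient on one private sample; only this gradient
# (not the sample itself) would be shared with the aggregation server.
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])
true_grads = [g.detach() for g in torch.autograd.grad(
    loss_fn(model(x_true), y_true), model.parameters())]

# The attacker optimizes a dummy input and soft label so the gradient
# they induce matches the observed one (the gradient-matching objective).
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    dummy_loss = torch.sum(-torch.softmax(y_dummy, dim=1) * torch.log_softmax(pred, dim=1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_dist = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_dist.backward()
    return grad_dist

for _ in range(30):
    opt.step(closure)

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```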
Preventing this data leakage requires ensuring that no party, including the server, ever sees another party's gradient in the clear.
Mitigation through secure multiparty computation
Secure multiparty computation (MPC) is a cryptographic protocol that lets multiple parties jointly compute a function over their private inputs, without revealing anything beyond the function's output. Intuitively, the parties exchange only encrypted intermediate values, so no party ever sees another's raw input.
A simple example illustrates the core idea: suppose three parties hold private values x, y, and z. Each party splits its value into three random shares that sum to it, then distributes one share to each party. Each party sums the shares it receives. The resulting sums are themselves random, but they add up to x + y + z. After exchanging these sums, all parties learn the total but nothing about each other's individual inputs.
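Here is that three-party sum in a few lines of numpy. The values and the 32-bit modulus are arbitrary, and real MPC protocols add machinery for communication, dropouts, and malicious behavior, but the core additive-sharing arithmetic looks like this.

```python
import numpy as np

rng = np.random.default_rng(0)
MOD = 2**32                      # all arithmetic is modulo a public constant

def share(value, n_parties=3):
    """Split an integer into n random additive shares that sum to it mod MOD."""
    shares = rng.integers(0, MOD, size=n_parties - 1)
    last = (value - shares.sum()) % MOD
    return np.append(shares, last)

# Three parties hold private values x, y, and z.
x, y, z = 46, 70, 95
all_shares = np.stack([share(v) for v in (x, y, z)])   # row i = shares of the i-th value

# Party j receives one share of every value (column j) and publishes their sum.
partial_sums = all_shares.sum(axis=0) % MOD

# Anyone can now recover the total, but the individual inputs stay hidden.
print(partial_sums.sum() % MOD)                        # 211 == x + y + z
```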
Private federated learning (PFL) applies this secure-sum technique to FL: instead of sending raw local gradients to a server, the parties secret-share their gradients and aggregate them via MPC, so the server only ever sees the summed result. More efficient PFL protocols exist, including one presented in a 2023 paper coauthored by Amazon senior principal scientist Tal Rabin, but the core security principle is the same.
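Extending the previous sketch from single integers to gradient vectors gives a rough picture of secure aggregation in PFL: each party fixed-point encodes its local gradient and secret-shares it, and the server receives only sums of shares, from which it can recover the aggregate gradient and nothing else. The encoding scale, modulus, and three-party setup are illustrative, and the efficiency and robustness techniques of real PFL protocols, including the one cited above, are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
MOD = 2**32
SCALE = 2**16                     # fixed-point scale for encoding real-valued gradients

def share_vector(grad, n_parties):
    """Fixed-point encode a gradient and split it into n additive shares mod MOD."""
    encoded = np.round(grad * SCALE).astype(np.int64) % MOD
    shares = rng.integers(0, MOD, size=(n_parties - 1, len(grad)))
    last = (encoded - shares.sum(axis=0)) % MOD
    return np.vstack([shares, last[None, :]])

# Three parties each hold a private local gradient (toy 4-dimensional vectors).
grads = [rng.normal(size=4) for _ in range(3)]
n = len(grads)

# Every party shares its gradient; party j then sums the j-th share of each one.
all_shares = [share_vector(g, n) for g in grads]
partials = [sum(s[j] for s in all_shares) % MOD for j in range(n)]

# The server adds the partial results and sees only the aggregate gradient.
agg = sum(partials) % MOD
agg = np.where(agg > MOD // 2, agg - MOD, agg) / SCALE   # decode back to real values
print(np.allclose(agg, sum(grads), atol=1e-3))           # True
```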
We ran our model inversion attack against a party's local gradient computed under our PFL protocol, again using the EMNIST dataset. The attack was unable to reconstruct any training samples.
MPC protects the gradients exchanged during FL, but the global model itself is shared with all participants. Can an adversarial participant exploit the model to recover others' data? We’ll explore this problem in the next section.
An attack on FL global models and mitigation with DP
We've seen how PFL enables n parties to securely compute a global FL model. However, a 2022 paper by Fowl et al. and a 2025 paper by Shi et al. together describe an attack that enables an adversarial FL participant to reconstruct another participant's training data from the global model itself.
In this attack, the attacker adds a preprocessing layer with ReLU activation (a common neural-network activation function that passes positive inputs through unchanged and outputs zero for negative inputs) to the model. That layer consists of nB neurons, where n is the number of parties and B is the batch size: each of the n parties produces a local gradient that is an average of B sample-specific gradients, so the global FL gradient is an average of nB sample-specific gradients, and each of the nB neurons in the preprocessing layer will be used to reconstruct a distinct training sample.
The attacker carefully crafts the preprocessing layer's parameters so that the first neuron's entries in the global gradient aggregate the contributions of all nB samples, the second neuron's aggregate all but one sample, the third's all but two, and so on. The attacker then examines the global-gradient entries corresponding to the nB neurons and successively subtracts adjacent neurons' entries to tease apart the nB sample-specific gradients. As we mentioned earlier, a training sample can be directly recovered from its gradient.
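The differencing step is easy to illustrate. In the simplified numpy sketch below, each neuron's gradient row is modeled as the (unscaled) sum of the samples that activate it, so adjacent rows differ by exactly one sample's contribution, and successive subtraction recovers every sample. The real attack must also craft the layer's weights and thresholds and undo the averaging scale, which this toy version skips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for nB = 4 flattened training samples the attacker wants to recover.
samples = rng.normal(size=(4, 8))
scores = samples.mean(axis=1)       # the statistic the crafted layer thresholds on
ordered = samples[np.argsort(-scores)]

# Each neuron's gradient row aggregates every sample that activates it: the first
# row covers all four samples, the second all but one, the third all but two, etc.
grad_rows = np.cumsum(ordered[::-1], axis=0)[::-1]

# The attacker observes only grad_rows; subtracting adjacent rows isolates each
# sample's contribution (the last row is already a single sample's gradient).
recovered = np.vstack([grad_rows[:-1] - grad_rows[1:], grad_rows[-1:]])
print(np.allclose(recovered, ordered))          # True: every sample is recovered
```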
In our experiments on the EMNIST dataset, the attack recovered all but one of the parties' local batch samples from the global gradient.
But after altering our private FL protocol to instead output a differentially private global gradient — computed via DP-SGD with privacy budget of 1.5 — the attack failed to recover any meaningful information from the global gradient.
Taken together, DP and MPC form complementary layers of defense: MPC protects what is exchanged during training, and DP protects what the final model reveals.
Building defenses before attacks scale
The experiments above have clear implications: attacks on ML training data are practical today, and the private-computing tools to defeat them are mature enough to deploy. The privacy-utility tradeoff is real: our DP-SGD models retained 78–82% accuracy at meaningful privacy budgets, compared to 90% without DP.
It is worth noting that the accuracy impact of DP depends heavily on the task and dataset. Our EMNIST experiments used a relatively small model on handwritten letters, where the noise has an outsized effect. In practice, larger models trained on richer datasets absorb DP noise more gracefully. NIST SP 800-226 notes that large models pretrained on public data show strong privacy-utility tradeoffs when fine-tuned with DP-SGD. For many production use cases, such as fraud detection or clinical risk scoring, a modest accuracy reduction is an acceptable cost when the alternative is exposing protected data to the attacks described above. The right privacy budget is ultimately application dependent: a model screening radiology scans may tolerate less accuracy loss than one flagging suspicious transactions, and organizations should calibrate ε to their specific risk and regulatory requirements.
These techniques are already in use at Amazon. We are building private-computing capabilities — differentially private training pipelines and secure aggregation for federated learning across organizational boundaries — into production systems. For instance, our fraud prevention teams use differentially private training to protect customer financial data while maintaining detection accuracy.
If your organization trains models on sensitive data, we invite you to explore AWS's privacy-preserving ML capabilities and connect with our team.