Preserving the privacy of AI training data

How we reproduced three attacks that extract private training data from AI models and the cryptographic defenses that stop them.

Large language models, the highest-profile machine learning (ML) models used today, are trained on huge corpora of public data. But many ML models are trained on smaller, proprietary datasets, which can be highly sensitive and should be kept private. Examples include a hospital fine-tuning a diagnostic model on patient radiology scans, a bank training a fraud detector on transaction histories, or a pharmaceutical company building a drug interaction model from clinical trial records. In each case, the training data itself is the asset that must be protected, but a well-constructed attack on these models can potentially extract information about their underlying training data.

Such attacks are possible even when the attacker is restricted to submitting inference queries to a model trained by a single data owner. When multiple data owners instead collaborate to train a model through federated learning (FL), in which a central server produces a global model by aggregating model updates computed on siloed datasets (instead of collocating the raw data), an adversarial server can mount attacks that reconstruct training data from the model updates. Consider three hospitals collaborating to train a shared cancer-screening model without pooling patient records. If the aggregation server can reconstruct one hospital's training images, then the privacy promise of federated learning is broken, and so is each hospital's compliance with patient consent agreements. Finally, an adversarial FL participant could even potentially reconstruct an honest participant's private training data from the global model.

These risks are not hypothetical. A 2023 paper from Google DeepMind demonstrated that GPT-3.5-turbo could be prompted to regurgitate verbatim training data, including personally identifiable information. Smaller, domain-specific models trained on concentrated, sensitive datasets are even more vulnerable. As organizations increasingly train models on sensitive financial records, patient health data, and proprietary business intelligence, the attack surface grows proportionally. A successful attack against a healthcare model could reveal whether a specific patient's records were used in training, a violation of regulations such as the US Health Insurance Portability and Accountability Act (HIPAA) and the EU's General Data Protection Regulation (GDPR). An attack against a federated-learning system could reconstruct raw training samples that should never have left their source. For any organization training on private data, understanding and mitigating these threats is no longer optional; it is necessary for responsible AI deployment.

In this post, we walk through three escalating attack scenarios: membership inference against a single model, data reconstruction from federated-learning gradients, and training-data extraction from a shared global model. We show how differential privacy and secure multiparty computation defeat each one.

Three escalating attack vectors against machine learning training data and the cryptographic defenses against them. Each panel shows the attack flow (top) and the defense that defeats it (bottom).

An attack on model inference

Anyone with query access to a model can potentially determine whether a specific record was used to train it, an attack known as membership inference. Imagine that a hospital deploys a diagnostic model as an API for referring physicians. A malicious actor could probe the API to determine whether a particular patient's records were included in the training data. This would confirm that the patient was treated at the hospital and reveal details about their medical history.

In a 2023 paper at the Conference on Neural Information Processing Systems (NeurIPS), Amazon Web Services researchers showed how this works in practice. A trained model tends to produce higher-confidence predictions for inputs it was trained on, a form of overfitting the attacker can exploit. The attacker first generates a dataset that approximates the distribution of the model's training data, then records the model's confidence scores on those samples. Using these scores as labels, the attacker trains a proxy model that learns a confidence-score cutoff separating training data from non-training data.

Given a candidate record, the attacker evaluates the proxy model to obtain a cutoff, then queries the target model. If the target model's confidence score exceeds the cutoff, the record was likely in the training set. The authors demonstrated this against a ResNet-50 model trained on ImageNet-1k: 97% of records their attack flagged as training data were indeed training data.
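
To make the mechanics concrete, here is a minimal sketch of the confidence-threshold idea. It is illustrative only: it substitutes a simple midpoint cutoff for the paper's learned proxy model, and all names (`target_model`, `X_out`, and so on) are hypothetical.

```python
# A minimal, hypothetical sketch of confidence-threshold membership inference.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in "private" training data; the target model overfits to it.
X_train = rng.normal(size=(200, 10))
y_train = rng.integers(0, 2, size=200)
target_model = RandomForestClassifier(n_estimators=50, random_state=0)
target_model.fit(X_train, y_train)

# Attacker: compare the model's confidence on known members vs. fresh
# samples from (approximately) the same distribution, then pick a cutoff.
X_out = rng.normal(size=(200, 10))
conf_in = target_model.predict_proba(X_train).max(axis=1)
conf_out = target_model.predict_proba(X_out).max(axis=1)
cutoff = (conf_in.mean() + conf_out.mean()) / 2  # stand-in for the proxy model

def is_member(record):
    """Flag a candidate record as training data if the target model's
    confidence on it exceeds the learned cutoff."""
    return target_model.predict_proba(record.reshape(1, -1)).max() > cutoff

print(is_member(X_train[0]), is_member(X_out[0]))  # typically: True False
```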

Mitigation through differential privacy

We’ll show how to mitigate such membership inference attacks with differential privacy (DP), a mathematical framework for computing aggregate statistics (e.g., an average) while bounding how much any single input can influence the result. The core idea: if we can randomize the function so that adding or removing one record from the dataset barely changes the distribution of the function output, an attacker cannot confidently determine whether that record was included.

Formally, a randomized function is differentially private if, for any single record added to or removed from the input dataset, the probability of any given output changes by at most a factor of e^ε, where e is the base of the natural logarithm and ε is the privacy budget. A smaller ε means tighter privacy but more noise in the computation, and vice versa. While NIST guidance suggests that ε < 1 generally keeps privacy risk low, many real-world deployments operate between 1 and 10, with situation-dependent privacy outcomes. Empirical studies indicate that ε as high as 3 can still provide meaningful protection against attacks like membership inference, though our understanding of the practical guarantees of DP against such attacks continues to evolve.
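
In symbols, if M is the randomized training algorithm, D and D′ are any two datasets that differ in a single record, and S is any set of possible outputs, ε-differential privacy requires

```latex
\Pr[\, M(D) \in S \,] \;\le\; e^{\varepsilon} \cdot \Pr[\, M(D') \in S \,]
```

Gaussian-noise mechanisms such as the DP-SGD algorithm discussed below satisfy a mild relaxation of this definition, (ε, δ)-DP, which adds a small additive slack δ to the right-hand side.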

DP defeats membership inference because the attack relies on a gap between the model's confidence on training data and on unseen data. DP narrows that gap by ensuring the model would have learned nearly the same parameters whether or not any particular record was included in its training data.

How can this approach be applied to ML? Neural networks are trained using stochastic gradient descent (SGD), in which the difference between the model’s output on a training sample and the target output for the sample is propagated back through the model, and the model parameters are adjusted to reduce the difference; the adjustment corresponding to the sample is called a gradient. In practice, the model parameters are typically adjusted according to a batch gradient — the average of sample-specific gradients for a batch of samples.

In a landmark 2016 paper, Google researchers introduced DP-SGD, which clips each sample-specific gradient to bound its influence and then adds calibrated Gaussian noise to each batch gradient during training. We implemented DP-SGD and trained a neural network on the EMNIST handwritten-letter dataset. The DP model achieved 78% test accuracy at ε = 1.5 and 82% at ε = 3.0, compared to 90% without DP.
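
The sketch below shows the shape of one DP-SGD step, assuming per-sample gradients are already available as an array; `clip_norm` and `noise_multiplier` are the standard knobs. A production system would instead use a vetted library such as Opacus or TensorFlow Privacy, which also track the cumulative privacy budget.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each sample's gradient to bound its influence,
    sum, add calibrated Gaussian noise, and average over the batch.

    per_sample_grads: array of shape (batch_size, num_params).
    """
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    # Noise scale is calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_sample_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / per_sample_grads.shape[0]
```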

DP addresses attacks on a single model, but what happens when multiple organizations collaborate to train one? Federated learning introduces a different attack surface, one that targets the training process itself.

Data leakage from federated learning

Federated learning is a method of decentralized ML in which a global model is trained on datasets distributed across multiple parties, without direct sharing of the datasets. Each party trains an initial model on a local training batch, obtaining a local gradient. The local gradients are then sent to a central server, which averages them into a global gradient. The parties then produce copies of the global model by updating their local models with the global gradient.
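
In code, one round of this scheme might look like the following minimal sketch (gradient averaging only; real deployments typically average model updates taken after several local steps, as in FedAvg):

```python
import numpy as np

def federated_round(global_params, local_gradients, lr=0.1):
    """One FL round: the server averages the parties' local gradients into a
    global gradient, and each party applies it to its copy of the model."""
    global_gradient = np.mean(local_gradients, axis=0)
    return global_params - lr * global_gradient
```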

However, in a 2019 NeurIPS paper, a team of MIT researchers demonstrated a surprising result: the parties' local gradients leak information about the training samples from which they're computed, enabling model inversion attacks in which the server can reconstruct the parties' training samples. Even in scenarios in which the server is not viewed as adversarial, this attack demonstrates that the gradients leak the parties' training data, defeating the privacy goals of FL.

This attack relies on the observation that a gradient directly contains data about the sample from which it is computed. Consequently, a sample can generally be reconstructed from its gradient, and two semantically distinct training batches are unlikely to admit the same batch gradient. Therefore, the attacker frames the problem of reconstructing a party's batch samples from its local gradient as an optimization problem: find the training batch whose gradient is minimally distant from the target gradient. The attacker can then approximately compute the solution (the training batch) by applying SGD. In our experiments on the EMNIST dataset, the attack recovered single-sample batches exactly and three samples from a batch of size seven.
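
The sketch below captures the optimization loop, along the lines of the MIT team's "deep leakage from gradients" method: starting from random dummy data, it descends on the distance between the dummy batch's gradient and the intercepted one. It assumes a differentiable PyTorch `model` and a `target_grad` list matching `model.parameters()`; details such as the optimizer and label handling are simplified.

```python
import torch
import torch.nn.functional as F

def invert_gradient(model, target_grad, batch_shape, num_classes, steps=300):
    """Reconstruct a training batch from its gradient by optimization:
    find dummy inputs/labels whose gradient matches the intercepted one."""
    dummy_x = torch.randn(batch_shape, requires_grad=True)
    dummy_y = torch.randn(batch_shape[0], num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=-1))
        grad = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between the dummy batch's gradient and the target gradient.
        dist = sum(((g - t) ** 2).sum() for g, t in zip(grad, target_grad))
        dist.backward()
        return dist

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.softmax(dim=-1).detach()
```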

Our original EMNIST batch of seven samples. Our attack was fed a batch gradient generated from this batch.
The gradients that federated learning was designed to share instead of raw data turn out to leak that data anyway. Our attack against an FL gradient reconstructed three samples from a gradient generated on an EMNIST training batch of size seven, demonstrating that the batch gradient leaks information about the training samples outside of its data silo.

Preventing this data leakage requires ensuring that no party, including the server, ever sees another party's gradient in the clear.

Mitigation through secure multiparty computation

Secure multiparty computation (MPC) is a cryptographic protocol that lets multiple parties jointly compute a function over their private inputs, without revealing anything beyond the function's output. Intuitively, the parties exchange only encrypted intermediate values, so no party ever sees another's raw input.

A simple example illustrates the core idea: suppose three parties hold private values x, y, and z. Each party splits its value into three random shares that sum to it, then distributes one share to each party. Each party sums the shares it receives. The resulting sums are themselves random, but they add up to x + y + z. After exchanging these sums, all parties learn the total but nothing about each other's individual inputs.
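
Here is that three-party secure sum as a minimal sketch (illustrative only; real protocols add integrity checks and handle dropouts):

```python
import secrets

P = 2**61 - 1  # public prime modulus; shares are uniformly random mod P

def share(value, n_parties=3):
    """Split a value into n_parties random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

x_shares, y_shares, z_shares = share(21), share(34), share(55)

# Party i sums the i-th share of each input; each partial sum looks random...
partials = [(x_shares[i] + y_shares[i] + z_shares[i]) % P for i in range(3)]

# ...but exchanging the partial sums reveals only the total, never the inputs.
assert sum(partials) % P == 21 + 34 + 55
```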

Private federated learning (PFL) applies this secure-sum technique to FL: instead of sending raw local gradients to a server, the parties secret-share their gradients and aggregate them via MPC, so the server only ever sees the summed result. More efficient PFL protocols exist, including one presented in a 2023 paper coauthored by Amazon senior principal scientist Tal Rabin, but the core security principle is the same.
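
Applied to FL, each party would secret-share its gradient coordinate by coordinate before sending anything to the server. A sketch, reusing the hypothetical `share` helper above; because shares live in integers mod P, protocols of this kind encode gradients in fixed point:

```python
import numpy as np

SCALE = 10**6  # fixed-point scaling factor (an illustrative choice)

def share_gradient(grad, n_parties=3):
    """Secret-share a gradient vector, one coordinate at a time. Negative
    coordinates wrap around mod P and are recentered after decoding."""
    fixed = np.round(np.asarray(grad) * SCALE).astype(np.int64)
    return [share(int(v), n_parties) for v in fixed]
```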

We ran our model inversion attack against a party's local gradient computed under our PFL protocol, again using the EMNIST dataset. The attack was unable to reconstruct any training samples.

Our original EMNIST batch of seven samples. Our attack was fed an encrypted batch gradient generated from the private FL protocol on this batch.
Private federated learning thwarts the model inversion attack against federated-learning gradients. Our attack was unable to reconstruct any training sample from an encrypted gradient produced by the protocol, which was generated from a training batch of seven EMNIST samples.

MPC protects the gradients exchanged during FL, but the global model itself is shared with all participants. Can an adversarial participant exploit the model to recover others' data? We’ll explore this problem in the next section.

An attack on FL global models and mitigation with DP

We've seen how PFL enables n parties to securely compute a global FL model. However, a 2022 paper by Fowl et al. and a 2025 paper by Shi et al. together describe an attack that enables an adversarial FL participant to reconstruct another participant's training data from the global model itself.

In this attack, the attacker adds a preprocessing layer with ReLU activation (a common neural-network activation function that outputs positive inputs verbatim but outputs zeros for negative inputs) to the model. That layer consists of nB neurons, where B is the batch size. This is because each of the n parties produces a local gradient that is an average of B sample-specific gradients, so the global FL gradient is an average of nB sample-specific gradients; each of the nB neurons in the preprocessing layer will be used to reconstruct a distinct training sample.

The attacker carefully crafts the preprocessing layer's parameters so that ReLU passes the signals of all nB samples through the first neuron, all but one sample through the second neuron, all but two through the third, and so on. The attacker then simply examines the entries of the global gradient corresponding to the nB neurons and successively subtracts adjacent entries to tease apart the nB sample-specific gradients. As we mentioned earlier, a training sample can be directly recovered from its gradient.
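
The subtraction step is easy to see in isolation. The toy sketch below assumes the crafted layer has already produced the nested partial sums (the real attack must also choose the layer's weights and biases to induce that structure):

```python
import numpy as np

def disaggregate(neuron_grads):
    """Recover per-sample gradients from nested partial sums: entry k
    aggregates samples k..nB-1, so adjacent differences isolate one sample."""
    padded = list(neuron_grads) + [np.zeros_like(neuron_grads[0])]
    return [padded[k] - padded[k + 1] for k in range(len(neuron_grads))]

# Toy check: three "sample gradients" folded into nested sums, then recovered.
samples = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([3.0, 3.0])]
nested = [sum(samples[k:], start=np.zeros(2)) for k in range(3)]
assert all((r == s).all() for r, s in zip(disaggregate(nested), samples))
```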

In our experiments on the EMNIST dataset, the attack recovered all but one of the parties' local batch samples from the global gradient.

The parties’ raw training samples. In our experiment, three parties each trained a local batch of size three, meaning nine total samples were represented in the FL global gradient.
Our model inversion attack against the FL global gradient recovered all but one of the parties’ local training-batch samples.

But after altering our private FL protocol to instead output a differentially private global gradient, computed via DP-SGD with a privacy budget of 1.5, the attack failed to recover any meaningful information from the global gradient.

Our model inversion attack against a differentially private federated-learning global gradient computed from the EMNIST dataset with a privacy budget of 1.5 failed to recover any meaningful information about any party’s local batch samples. In our experiment, three parties each trained a local batch of size three, producing nine total samples represented in the global gradient.

Taken together, DP and MPC form complementary layers of defense: MPC protects what is exchanged during training, and DP protects what the final model reveals.

Building defenses before attacks scale

The experiments above have clear implications: attacks on ML training data are practical today, and the private-computing tools to defeat them are mature enough to deploy. The privacy-utility tradeoff is real: our DP-SGD models retained 78–82% accuracy at meaningful privacy budgets, compared to 90% without DP.

It is worth noting that the accuracy impact of DP depends heavily on the task and dataset. Our EMNIST experiments used a relatively small model on handwritten letters, where the noise has an outsized effect. In practice, larger models trained on richer datasets absorb DP noise more gracefully. NIST SP 800-226 notes that large models pretrained on public data show strong privacy-utility tradeoffs when fine-tuned with DP-SGD. For many production use cases, such as fraud detection or clinical risk scoring, a modest accuracy reduction is an acceptable cost when the alternative is exposing protected data to the attacks described above. The right privacy budget is ultimately application dependent: a model screening radiology scans may tolerate less accuracy loss than one flagging suspicious transactions, and organizations should calibrate ε to their specific risk and regulatory requirements.

These techniques are already in use at Amazon. We are building private-computing capabilities — differentially private training pipelines and secure aggregation for federated learning across organizational boundaries — into production systems. For instance, our fraud prevention teams use differentially private training to protect customer financial data while maintaining detection accuracy.

If your organization trains models on sensitive data, we invite you to explore AWS's privacy-preserving ML capabilities and connect with our team.
