Verifying and optimizing post-quantum cryptography at Amazon

How automated reasoning reconciles the demands of security, performance, and maintainability.


Today, secure online communication is enabled by public-key cryptography, primarily RSA and elliptic-curve cryptography (ECC), whose security depends on the assumption that certain computational problems are intractable. However, while believed to be intractable for conventional computers, the problems underlying RSA and ECC may be tractable for sufficiently large quantum computers. In "store now, decrypt later" attacks, adversaries intercept encrypted information today and hold onto it until quantum computers can decrypt it; data must therefore be protected long before such attacks become technically feasible.

Post-quantum cryptography (PQC) is cryptography running on classical computers but secure in the face of quantum computing. In 2024, following an eight-year standardization effort, the US National Institute of Standards and Technology (NIST) published standard FIPS-203, which specifies the Module-Lattice-Based Key Encapsulation Mechanism, or ML-KEM, as a mechanism for key agreement believed to be secure against attacks from quantum computers.

In this post, we describe how Amazon’s Automated Reasoning Group, AWS Cryptography, and the open-source community have collaborated to create an open-source, formally verified, and optimized implementation of ML-KEM, protecting customers against store-now-decrypt-later attacks with the highest assurance and minimal cost.

What is good cryptographic engineering?

In keeping with Amazon’s customer obsession, we prioritize three goals when working on cryptographic solutions:

  • The security of the customer’s data: Cryptography is notoriously hard to implement securely, and any flaw can endanger the customer’s privacy;
  • The customer experience: Cryptography is a computational tax that we minimize to ensure the lowest cost and best experience for our customers;
  • Our ability to maintain the solution going forward: The less time we need to spend on maintenance, the more we can innovate on behalf of our customers.

There are, however, tensions between these goals: Simple code is easiest to maintain and write securely but tends to be slow. Fast code tends to be more difficult to audit and prone to errors.

Automated reasoning allows us to resolve these tensions and provide our customers with cryptographic solutions that are secure, fast, and maintainable, all at once.

Yet another implementation of ML-KEM?

ML-KEM — formerly known as Kyber — is well studied from an implementation perspective: On the one hand, the Kyber reference code provides a clean C implementation that has been scrutinized for years. On the other hand, numerous research papers describe how to optimize ML-KEM for various metrics and platforms.

The challenge faced by AWS Cryptography and the Automated Reasoning Group in 2024 was to combine the simplicity of the reference implementation and the optimization potential revealed in the research works in a single production-ready implementation.

In 2024, AWS Cryptography and the Amazon Automated Reasoning Group took on the challenge of combining the simplicity of the thoroughly scrutinized ML-KEM reference implementation and the optimization potential revealed by research in a single production-ready implementation: mlkem-native.

Around the same time, AWS became a founding member of the Linux Foundation’s Post-Quantum Cryptography Alliance (PQCA), which created the Post-Quantum Cryptography Package (PQCP), “a collection of open-source projects aiming to build high-assurance software implementations of standards-track post-quantum cryptography algorithms”.

Therefore, rather than brewing our own code, members of our team joined the PQCP and soon after launched mlkem-native, a high-assurance, high-performance C implementation of ML-KEM aiming to combine the ML-KEM reference implementation with research on optimization and formal verification.

Coding, fast and slow

Mlkem-native’s modular design combines a frontend covering the high-level logic of ML-KEM with a backend responsible for all performance-critical subroutines. Each subroutine — including the Keccak permutation underlying SHA3 and the number-theoretic transform (NTT) underlying fast polynomial arithmetic — has multiple, highly efficient implementations written natively for specific hardware. In addition to the default C implementation, mlkem-native provides assembly/intrinsics backends for AArch64, x86_64, and RISC-V64.

Mlkem-native’s modular design combines a frontend covering the high-level logic of ML-KEM and a backend consisting of multiple, hardware-specific implementations of performance-critical subroutines.

Importantly for maintainability, the interface between frontend and backend is fixed: a developer adding optimizations for a new target architecture implements select backend functionality against the backend specification, while the frontend stays the same. The development of the backend specification turned out to be less obvious than it sounds, as we explain below.

Knowing your limits

Memory safety

A well-known challenge with the C programming language is the risk of buffer overflows: writing past the designated limits of a memory region can corrupt data structures and, when maliciously exploited, lead to unprivileged code execution. The umbrella term for such issues is memory safety. Memory-safe languages such as Rust can limit the impact of out-of-bounds accesses — by, for example, panicking instead of exhibiting undefined behavior — but they don’t prevent the mistake itself.

Type safety

Another well-known challenge, this time with implementing ML-KEM, is the risk of integer overflows, an aspect of type safety. Like RSA and ECC, ML-KEM relies on modular arithmetic, in which the results of operations are divided by a particular number — in ML-KEM’s case, the prime 3,329, designated MLKEM_Q or just q — and only the remainder is carried forward. The modulo operator is represented by the percentage symbol, %.

Logically, if two numbers x and y need adding or multiplying in ML-KEM, one needs to compute (x + y) % q and (x * y) % q; for example, (294 * 38) % q = 11,172 % q = 1,185. Such “eager” arithmetic modulo q, which constantly applies modular reduction to represent data in the “canonical” range {0, 1, 2, … , q-1}, is prohibitively slow.
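Eager arithmetic can be sketched in a few lines of C (this is an illustration, not mlkem-native code; the helper names add_mod_q and mul_mod_q are ours):

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Eager modular arithmetic: reduce after every single operation.
 * Inputs are assumed to be in the canonical range [0, q). */
static int16_t add_mod_q(int16_t x, int16_t y)
{
  return (int16_t)(((int32_t)x + y) % MLKEM_Q);
}

static int16_t mul_mod_q(int16_t x, int16_t y)
{
  return (int16_t)(((int32_t)x * y) % MLKEM_Q);
}
```

For example, mul_mod_q(294, 38) computes 11,172 % 3,329 = 1,185. The % after every operation is precisely the cost that lazy arithmetic avoids.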

Efficient ML-KEM implementations instead use “lazy” arithmetic modulo q: data is operated on without modular reduction for as long as possible, and only once there is a worst-case risk of overflow does reduction happen. Further, this allows the use of imperfect reduction algorithms such as Montgomery reduction, which are fast but don’t always give fully reduced outputs.
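The following sketch of Montgomery reduction for q = 3,329 is modeled on the well-known Kyber reference code (the function name and exact formulation here are illustrative). It returns a value congruent to a * 2^-16 mod q that is small, but not necessarily in the canonical range:

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Montgomery reduction, modeled on the Kyber reference code.
 * For |a| <= 2^15 * q, returns r with r = a * 2^-16 (mod q) and |r| < q:
 * fast, but the result is not necessarily in the canonical range [0, q). */
static int16_t montgomery_reduce(int32_t a)
{
  const int16_t qinv = -3327; /* q^-1 mod 2^16, as a signed 16-bit constant */
  int16_t t = (int16_t)((int16_t)a * qinv);           /* t = a * q^-1 mod 2^16 */
  return (int16_t)((a - (int32_t)t * MLKEM_Q) >> 16); /* (a - t*q) / 2^16, exact */
}
```

The subtraction a - t*q clears the low 16 bits by construction, so the shift is an exact division; no % operation is needed anywhere.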

In the case of ML-KEM, data modulo q = 3,329 is typically stored in signed 16-bit integers. When dealing with lazy arithmetic across the numerous arithmetic routines in ML-KEM, it is therefore essential to track the worst-case bounds of the data and insert modular reductions where those bounds would exceed the limits of 16-bit integers. Small mistakes in this domain can evade testing — because average bounds tend to be much smaller than worst-case bounds — and then randomly surface in production.
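To see why the int16_t limit matters, consider worst-case bound tracking for lazy addition. This small sketch (ours, not from mlkem-native) computes how many unreduced additions of canonical values are safe:

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Worst-case bound tracking for lazy addition: how many coefficients in
 * [0, q) can be summed into an int16_t accumulator before the worst-case
 * bound k * (q - 1) exceeds INT16_MAX? */
static int max_lazy_additions(void)
{
  int32_t worst = 0; /* worst-case accumulator bound, tracked in 32 bits */
  int k = 0;
  while (worst + (MLKEM_Q - 1) <= INT16_MAX) {
    worst += MLKEM_Q - 1;
    k++;
  }
  return k;
}
```

Nine additions always fit; a tenth can overflow in the worst case (10 × 3,328 = 33,280 > 32,767). With random inputs, however, the running sum averages around k * q/2 and rarely approaches the limit, which is exactly why such overflows evade testing.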

Tracking buffer bounds and especially arithmetic bounds is time consuming and error prone: for example, weakening the output bounds of a low-level arithmetic function might lead to a rare arithmetic overflow in an entirely different function. Checking this by hand not only requires meticulous documentation and skilled auditors but also slows down development.

In mlkem-native, we use a tool called the C Bounded Model Checker (CBMC) to automatically verify memory safety and type safety at the C level: for every function, we add machine- and human-readable contracts to the source code to specify the bounds of buffers and arithmetic data, and we have CBMC automatically verify that, with respect to those bounds, no buffer overflow or arithmetic overflow can happen.

Let’s look at a simple example of modular reduction:
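The sketch below is modeled on mlkem-native’s polynomial-reduction code; the contract macros are stubbed out so it compiles as plain C, the helper bodies are simplified stand-ins, and exact names and bounds may differ from the repository:

```c
#include <stdint.h>

#define MLKEM_Q 3329
#define MLKEM_N 256

/* CBMC contract macros, stubbed out so this sketch compiles as plain C;
 * under CBMC they expand to the actual specification language. */
#define __contract__(x)
#define __loop__(x)

typedef struct { int16_t coeffs[MLKEM_N]; } mlk_poly;

/* Simplified stand-in bodies for the two helpers discussed below */
static int16_t mlk_barrett_reduce(int16_t a)
{
  const int32_t v = ((1 << 26) + MLKEM_Q / 2) / MLKEM_Q; /* round(2^26 / q) */
  int16_t t = (int16_t)(((int32_t)v * a + (1 << 25)) >> 26);
  return (int16_t)(a - t * MLKEM_Q); /* centered representative of a mod q */
}

static uint16_t mlk_scalar_signed_to_unsigned_q(int16_t c)
{
  return (uint16_t)(c + ((c >> 15) & MLKEM_Q)); /* add q exactly when c < 0 */
}

/* Reduce every coefficient of a polynomial to the canonical range [0, q) */
void mlk_poly_reduce(mlk_poly *r)
__contract__(
  requires(memory_no_alias(r, sizeof(mlk_poly)))
  assigns(memory_slice(r, sizeof(mlk_poly)))
  ensures(array_bound(r->coeffs, 0, MLKEM_N, 0, MLKEM_Q)))
{
  unsigned i;
  for (i = 0; i < MLKEM_N; i++)
  __loop__(
    invariant(i <= MLKEM_N)
    invariant(array_bound(r->coeffs, 0, i, 0, MLKEM_Q)))
  {
    r->coeffs[i] = (int16_t)mlk_scalar_signed_to_unsigned_q(
        mlk_barrett_reduce(r->coeffs[i]));
  }
}
```

With the real macros in place, CBMC explores all executions of the loop symbolically and checks that every memory access and every arithmetic operation stays within the specified bounds.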

Focusing on the relevant parts one at a time: First, note the __contract__(...) annotation. Slightly simplified, the memory_no_alias and memory_slice lines specify which memory the code can read and write; this relates to memory safety. The ensures(array_bound(...)) clause relates to type safety: it specifies that the function guarantees that upon return, the data is within the interval [0, 1, …, q). In the proof, you see the __loop__(invariant(...)), specifying how the loop gradually establishes this bound: in the ith iteration, it holds up to the ith coefficient. Finally, the implementation effectively composes mlk_barrett_reduce and mlk_scalar_signed_to_unsigned_q. CBMC does not look inside these but replaces them with their contracts:
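Sketches of the two helper contracts, in the same notation (bounds slightly simplified; the bodies are illustrative stand-ins, and the macros are again stubbed out so the sketch compiles as plain C):

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* CBMC contract macro, stubbed out so the sketch compiles as plain C */
#define __contract__(x)

/* Imperfect reduction: output congruent to a mod q, in a symmetric
 * interval around zero rather than the canonical range */
static int16_t mlk_barrett_reduce(int16_t a)
__contract__(
  ensures(return_value >= -(MLKEM_Q - 1) / 2 &&
          return_value <= (MLKEM_Q - 1) / 2))
{
  const int32_t v = ((1 << 26) + MLKEM_Q / 2) / MLKEM_Q; /* round(2^26 / q) */
  int16_t t = (int16_t)(((int32_t)v * a + (1 << 25)) >> 26);
  return (int16_t)(a - t * MLKEM_Q);
}

/* Map a centered representative to the canonical range [0, q) */
static uint16_t mlk_scalar_signed_to_unsigned_q(int16_t c)
__contract__(
  requires(c > -MLKEM_Q && c < MLKEM_Q)
  ensures(return_value < MLKEM_Q))
{
  return (uint16_t)(c + ((c >> 15) & MLKEM_Q)); /* add q exactly when c < 0 */
}
```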

You can see that mlk_barrett_reduce first establishes a symmetric output interval (-q/2, …, q/2), and then mlk_scalar_signed_to_unsigned_q maps it to [0, 1, …, q). In this instance, it is easy to confirm by eye that the specifications line up in the desired way, but for more complex examples, this is less obvious. Either way, CBMC checks it for us automatically.

Going fast, staying safe

The CBMC proofs described above establish memory safety and type safety for mlkem-native's C code. However, the most performance-critical parts of mlkem-native — the Keccak permutation and number-theoretic transform — are implemented in hand-optimized assembly for AArch64 and x86_64.

To gain assurance for the assembly implementations in mlkem-native while maintaining high performance, we use three components: SLOTHY, an assembly superoptimizer; HOL Light, a theorem prover; and s2n-bignum, a verification infrastructure for assembly built on HOL Light. Together, they enable a workflow where developers write clean, maintainable assembly, while deployed code achieves peak performance with formal guarantees of correctness.

Writing high-performance assembly by hand creates a fundamental tension: clean, auditable code that clearly expresses the computation is slow, while fast code is dense, microarchitecture specific, and difficult to maintain. SLOTHY resolves this tension by automating microarchitecture-specific optimizations: it converts an assembly program into a constraint satisfaction problem, finds optimal instruction schedules and register allocations using a constraint solver, and outputs optimized assembly. Developers write clean code emphasizing the logic of the computation, and SLOTHY generates the fast code.

We prove functional correctness for all AArch64 and x86_64 assembly routines using HOL Light and s2n-bignum. Where SLOTHY is used, the proofs are written to be agnostic to the specific instruction ordering and register allocation; we can therefore reoptimize the code for a specific microarchitecture without having to adjust the proofs. This “post-hoc” verification approach establishes the mathematical correctness of the computation represented by the assembly regardless of how it came about; in particular, SLOTHY is removed from the trusted computing base.

Keeping it honest

Formal verification is never absolute. Every proof links formal objects — specifications and models — to informal, real-world requirements and systems, and these links introduce gaps. Does the formal specification capture what we actually need? Does the formal model faithfully reflect the real system? Is the proof infrastructure itself sound?

Earning and maintaining customer trust requires being transparent about these limits. We therefore developed and published a document titled SOUNDNESS.md, where we map out what is proved in mlkem-native, what is assumed, and where the residual risks lie — from the fidelity of the hardware models used in HOL Light proofs, to the larger trusted computing base of CBMC, to the manual bridge between the two verification stacks. For each gap, we describe mitigations in place and outline future work.

Our goal is not to claim perfection but to earn trust through transparency. We encourage the community to read SOUNDNESS.md critically, challenge our assumptions, and help us close the remaining gaps.

Getting on the road

Mlkem-native is integrated into AWS-LC, Amazon's open-source cryptographic library, which underpins secure communication across AWS services. The integration uses an automated importer that pulls mlkem-native source code directly from the upstream repository, ensuring that AWS-LC stays synchronized with the latest verified implementation.

The integration is designed for minimal friction: mlkem-native's modular architecture allows AWS-LC to import the core ML-KEM logic while providing its own implementations of platform-specific components. For example, AWS-LC maps mlkem-native's cryptographic primitives to its existing FIPS-202 (SHA-3) implementation, uses AWS-LC's random-number generation and memory zeroization functions, and enables FIPS-mode features like pairwise consistency tests when required. Enabling this is a thin compatibility layer that bridges mlkem-native's API to AWS-LC's infrastructure without modifying the verified code.

Critically, the CBMC contracts that prove memory safety and type safety are preserved in the imported source code. While the preprocessor removes them from compiled binaries, they remain in the source as machine-checkable documentation of the code's guarantees — a form of "living proof" that travels with the implementation.

Moreover, because both mlkem-native and AWS-LC are open source and permissively licensed, their benefits extend beyond AWS. Anyone can integrate mlkem-native into their systems and gain the same combination of performance and assurance. The formal verification artifacts — CBMC contracts and HOL Light proofs — are part of the repository, all tools involved are open source, and scripts are provided for setup and proof checking, inviting an independent validation of our security claims.

Impact

The development of mlkem-native demonstrates that the three goals of cryptographic engineering — security, performance, and maintainability — are not in conflict when automated reasoning is applied systematically.

CBMC freed us from manually tracking bounds through complex arithmetic, catching errors that would evade testing and surface randomly in production. The annotations stay in the source code as machine-checkable documentation, making the code simultaneously more maintainable and more secure. HOL Light and s2n-bignum allowed us to deploy aggressive assembly optimizations with mathematical certainty of correctness. SLOTHY let us write clean, auditable code while achieving peak performance for specific microarchitectures. And because the proofs are written to be optimization agnostic, we can retarget the code without redoing the verification.

The result is an implementation that is simultaneously more secure, faster, and easier to maintain than what traditional development could achieve. We didn't compromise between customer security, customer experience, and our ability to innovate: automated reasoning delivered all three.

| Platform | Operation | AWS-LC-FIPS 3.1 | AWS-LC-FIPS 4.0 | Ratio |
|----------|-----------|-----------------|-----------------|-------|
| c7i      | Keygen    | 30,899          | 65,146          | 2.1   |
| c7i      | Encaps    | 30,623          | 61,233          | 2.0   |
| c7i      | Decaps    | 25,141          | 51,545          | 2.0   |
| c7g      | Keygen    | 29,617          | 71,134          | 2.4   |
| c7g      | Encaps    | 28,482          | 66,874          | 2.3   |
| c7g      | Decaps    | 23,919          | 64,765          | 2.3   |

Performance impact of switching from the ML-KEM reference implementation to mlkem-native in Amazon’s cryptography library AWS-LC. ML-KEM-768 performance is measured on c7i (Intel(R) Xeon(R) Platinum 8488C) and c7g (Graviton 3) EC2 instances; the numbers represent operations per second (higher is better). The baseline, the AWS-LC-FIPS 3.1 release, contains the ML-KEM C reference implementation; the AWS-LC-FIPS 4.0 release is built with mlkem-native.

Acknowledgments

We thank our colleague John Harrison, senior principal applied scientist at the Automated Reasoning Group, for providing the bulk of the AArch64 assembly proofs in HOL Light and for maintaining the HOL Light interactive theorem prover and the s2n-bignum verification infrastructure. Mlkem-native is a collaborative effort involving not only AWS but many members of the open-source community. Foremost, we thank our co-maintainer Matthias Kannwischer from zeroRISC, who started mlkem-native with us and has since been instrumental in the success of the project.

That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Hybrid Work We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices.
US, WA, Bellevue
The Central Learning Solutions (CLS) - Science team builds state-of-the-art Artificial Intelligence (AI) solutions for enhancing leadership and associate development within the organization. We develop technology and mechanisms for building personalized learning courses based on the profiles of different learners and asses the post-training performance curves for different learner profiles. As a Data Scientist on the team, you will be driving the data science/ML roadmap for the CLS t Science team. You will leverage your knowledge in statistics and econometrics, estimate the causal impact of training interventions, recommend the right interventions for a given learner profile, and measure the post-launch success of these interventions through A/B weblabs. These insights will help in dynamically changing the training content of Learning & Development courses and unlock opportunities to improve both training effectiveness and learner experience. You will collaborate effectively with internal stakeholders and cross-functional teams for solving business problems, create operational efficiencies, and deliver successfully against high organizational standards. Key job responsibilities - Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and implementation. - Use advanced causal inference methodologies to estimate the learning curves for different learner profiles and the effectiveness of training content. - Perform statistical analysis and statistical tests including hypothesis testing and A/B testing. - Implement new statistical, machine learning, or other mathematical methodologies to solve specific business problems. - Present deep dives and analysis to both technical and non-technical stakeholders, ensure clarity, and influence the strategy of business partners. 
About the team We serve North America L&D orgs as the strategic thought leader, looking beyond where other teams are focused to drive transformative solutions that leverage technology and processes to improve learning outcomes and drive down the cost to serve.
US, WA, Bellevue
The Principal Applied Scientist will own the science mission for building next-generation proactive and autonomous agentic experiences across Alexa AI's Personalization, Autonomy and Proactive Intelligence organization. You will technically lead a team of applied scientists to harness state-of-the-art technologies in machine learning, natural language processing, LLM training and application, and agentic AI systems to advance the scientific frontiers of autonomous intelligence and proactive user assistance. The right candidate will be an inventor at heart, provide deep scientific leadership, establish compelling technical direction and vision, and drive ambitious research initiatives that push the boundaries of what's possible with AI agents. You will need to be adept at identifying promising research directions in agentic AI, developing novel autonomous agent solutions, and translating advanced AI research into production-ready agentic systems. You will need to be adept at influencing and collaborating with partner teams, launching AI-powered autonomous agents into production, and building team mechanisms that will foster innovation and execution in the rapidly evolving field of agentic AI. This role represents a unique opportunity to tackle fundamental challenges in how Alexa proactively understands user needs, autonomously takes actions on behalf of users, and delivers intelligent assistance through state-of-the-art agentic AI technologies. As a science leader in Alexa AI, you will shape the technical strategy for making Alexa a truly proactive and autonomous agent that anticipates user needs, takes intelligent actions, and provides seamless assistance without explicit prompting. Your team will be at the forefront of solving complex problems in agentic reasoning, multi-step task planning, autonomous decision-making, proactive intelligence, and context-aware action execution that will fundamentally transform how users interact with Alexa as an intelligent agent. 
The successful candidate will bring deep technical expertise in machine learning, natural language processing, and agentic AI systems, along with the leadership ability to guide talented scientists in pursuing ambitious research that advances the state of the art in autonomous agents, proactive intelligence, and AI-driven personalization. Experience with multi-agent systems, reinforcement learning, goal-oriented dialogue systems, and production-scale agentic architectures is highly valued. You will lead the development of breakthrough capabilities that enable Alexa to: 1) proactively anticipate user needs through advanced predictive modeling and contextual understanding; 2) autonomously execute complex multi-step tasks with minimal user intervention; 3) reason and plan intelligently across diverse user goals and environmental contexts; 4) learn and adapt continuously from user interactions to improve agentic behaviors; 5) coordinate actions seamlessly across multiple domains and services as a unified intelligent agent. This is a unique opportunity to define the future of conversational AI agents and build technology that will impact hundreds of millions of customers worldwide. 
Key job responsibilities Technical Leadership - Lead complex research and development projects - Partner closely with the T&C Product and Engineering leaders on the technical strategy and roadmap - Evaluate emerging technologies and methodologies - Make high-level architectural decisions Technical leadership and mentoring: - Mentor and develop technical talent - Set team project goals and metrics - Help with resource allocation and project prioritization from technical side Research & Development - Drive innovation in applied science areas - Translate research into practical business solutions - Author technical papers and patents - Collaborate with academic and industry partners About the team PAPI (Personalization Autonomy and Proactive Intelligence) aims to accelerate personalized and intuitive experiences across Amazon's customer touchpoints through automated, scalable, self-serve AI systems. We leverage customer, device, and ambient signals to deliver conversational, visual, and proactive experiences that delight customers, increase engagement, reduce defects, and enable natural interactions across Amazon touch points including Alexa, FireTV, and Mobile etc. Our systems offer personalized suggestions, comprehend customer inputs, learn from interactions, and propose appropriate actions to serve millions of customers globally.