Today, secure online communication is enabled by public-key cryptography, primarily RSA and elliptic-curve cryptography (ECC), whose security depends on the assumption that certain computational problems are intractable. However, while believed to be intractable for conventional computers, the problems underlying RSA and ECC may be tractable for sufficiently large quantum computers. “Store now, decrypt later” attacks, in which adversaries intercept encrypted information today and hold onto it until quantum computers can decrypt it, mean that data must be protected long before such attacks become technically feasible.
Post-quantum cryptography (PQC) is cryptography that runs on classical computers but remains secure against adversaries equipped with quantum computers. In 2024, following an eight-year standardization effort, the US National Institute of Standards and Technology (NIST) published standard FIPS-203, which specifies the Module-Lattice-Based Key Encapsulation Mechanism, or ML-KEM, a mechanism for key agreement believed to be secure against attacks from quantum computers.
In this post, we describe how Amazon’s Automated Reasoning Group, AWS Cryptography, and the open-source community have collaborated to create an open-source, formally verified, and optimized implementation of ML-KEM, protecting customers against store-now-decrypt-later attacks with the highest assurance and minimal cost.
What is good cryptographic engineering?
In keeping with Amazon’s customer obsession, we prioritize three goals when working on cryptographic solutions:
- The security of the customer’s data: Cryptography is notoriously hard to implement securely, and any flaw can endanger the customer’s privacy;
- The customer experience: Cryptography is a computational tax that we minimize to ensure the lowest cost and best experience for our customers;
- Our ability to maintain the solution going forward: The less time we need to spend on maintenance, the more we can innovate on behalf of our customers.
There are, however, tensions between these goals: simple code is the easiest to maintain and to write securely but tends to be slow, while fast code tends to be harder to audit and more prone to errors.
Automated reasoning allows us to resolve these tensions and provide our customers with cryptographic solutions that are secure, fast, and maintainable, all at once.
Yet another implementation of ML-KEM?
ML-KEM — formerly known as Kyber — is well studied from an implementation perspective: On the one hand, the Kyber reference code provides a clean C implementation that has been scrutinized for years. On the other hand, numerous research papers describe how to optimize ML-KEM for various metrics and platforms.
The challenge faced by AWS Cryptography and the Automated Reasoning Group in 2024 was to combine the simplicity of the reference implementation and the optimization potential revealed in the research works in a single production-ready implementation.
Around the same time, AWS became a founding member of the Linux Foundation’s Post-Quantum Cryptography Alliance (PQCA), which created the Post-Quantum Cryptography Package (PQCP), “a collection of open-source projects aiming to build high-assurance software implementations of standards-track post-quantum cryptography algorithms”.
Therefore, rather than brewing our own code, members of our team joined the PQCP and soon after launched mlkem-native, a high-assurance, high-performance C implementation of ML-KEM aiming to combine the ML-KEM reference implementation with research on optimization and formal verification.
Coding, fast and slow
Mlkem-native’s modular design combines a frontend covering the high-level logic of ML-KEM with a backend responsible for all performance-critical subroutines. Each subroutine — including the Keccak permutation underlying SHA3 and the number-theoretic transform (NTT) underlying fast polynomial arithmetic — has multiple, highly efficient implementations written natively for specific hardware. In addition to the default C implementation, mlkem-native provides assembly/intrinsics backends for AArch64, x86_64, and RISC-V64.
Importantly for maintainability, the interface between frontend and backend is fixed: a developer adding optimizations for a new target architecture implements select backend functionality against the backend specification, while the frontend stays the same. Developing the backend specification turned out to be more involved than it sounds, as we explain below.
Knowing your limits
Memory safety
A well-known challenge with the C programming language is the risk of buffer overflows: writing past the designated limits of a memory region can corrupt data structures and, when maliciously exploited, lead to unprivileged code execution. The umbrella term for such issues is memory safety. Memory-safe languages such as Rust can limit the impact of out-of-bounds accesses — by, for example, panicking instead of exhibiting undefined behavior — but they don’t prevent the mistake itself.
Type safety
Another well-known challenge, this time with implementing ML-KEM, is the risk of integer overflows — an aspect of type safety. Like RSA and ECC, ML-KEM relies on modular arithmetic, in which the results of operations are divided by a particular number — in ML-KEM’s case, the prime 3,329, designated MLKEM_Q or just q — and only the remainder is carried forward. The modulo operator is represented by the percentage symbol, %.
Logically, if two numbers x and y need adding or multiplying in ML-KEM, one needs to compute (x + y) % q and (x * y) % q; for example, (294 * 38) % q = 11,172 % q = 1,185. Such “eager” arithmetic modulo q, which constantly applies modular reduction to represent data in the “canonical” range {0, 1, 2, … , q-1}, is prohibitively slow.
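As a concrete sketch (ours, not taken from any particular implementation), eager arithmetic modulo q looks like this in C, assuming inputs already lie in the canonical range:

```c
#include <stdint.h>

#define MLKEM_Q 3329 /* the ML-KEM prime modulus q */

/* Eager modular addition: inputs assumed in {0, ..., q-1};
 * reduce immediately after the operation. */
int16_t add_mod_q(int16_t x, int16_t y)
{
    return (int16_t)(((int32_t)x + y) % MLKEM_Q);
}

/* Eager modular multiplication: widen to 32 bits so the product
 * cannot overflow, then take the remainder to return to {0, ..., q-1}. */
int16_t mul_mod_q(int16_t x, int16_t y)
{
    return (int16_t)(((int32_t)x * y) % MLKEM_Q);
}
```

The `%` after every single operation is exactly the per-operation cost that the lazy approach described next avoids.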
Efficient ML-KEM implementations instead use “lazy” arithmetic modulo q: data is operated on without modular reduction for as long as possible, and only once there is a worst-case risk of overflow does reduction happen. Further, this allows the use of imperfect reduction algorithms such as Montgomery reduction, which are fast but don’t always give fully reduced outputs.
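For example, Montgomery reduction can be sketched as follows; this is modeled on the Kyber reference code, not the mlkem-native source. It trades the division for a shift and two multiplications, at the cost of an extra factor of 2^-16 and an output that is only guaranteed to lie in (-q, q), not in the canonical range:

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Montgomery reduction sketch (modeled on the Kyber reference code):
 * for a in (-q * 2^15, q * 2^15), returns a value congruent to
 * a * 2^-16 modulo q, lying in (-q, q) -- fast, but not fully reduced.
 * QINV is q^-1 mod 2^16, represented as a signed 16-bit value. */
int16_t montgomery_reduce(int32_t a)
{
    const int16_t QINV = -3327;    /* q^-1 mod 2^16 */
    int16_t t = (int16_t)a * QINV; /* t = a * q^-1 mod 2^16 (wraps to 16 bits) */
    return (int16_t)((a - (int32_t)t * MLKEM_Q) >> 16); /* exactly divisible by 2^16 */
}
```

Because the subtraction makes a - t*q a multiple of 2^16, the shift is exact, and no `%` operation is needed.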
In the case of ML-KEM, data modulo q = 3,329 is typically stored in signed 16-bit integers. When dealing with lazy arithmetic across the numerous arithmetic routines in ML-KEM, it is therefore essential to track the worst-case bounds of the data and insert modular reductions where those bounds would exceed the limits of 16-bit integers. Small mistakes in this domain can evade testing — because average bounds tend to be much smaller than worst-case bounds — and then randomly surface in production.
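To see how quickly 16-bit laziness runs out of room, consider this sketch (ours, not mlkem-native code): a 16-bit accumulator would overflow after as few as ten unreduced canonical coefficients, since 10 * 3,328 = 33,280 exceeds INT16_MAX = 32,767, so a wider accumulator or an interleaved reduction is required.

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Lazy summation sketch: accumulate n coefficients, each assumed to
 * lie in {0, ..., q-1}, with no intermediate reduction, then reduce
 * once at the end. The accumulator must be 32-bit: its worst-case
 * bound is n * (q-1), which already exceeds INT16_MAX for n = 10. */
int16_t sum_mod_q(const int16_t *a, unsigned n)
{
    int32_t acc = 0; /* safe as long as n * (q-1) < 2^31 */
    for (unsigned i = 0; i < n; i++)
        acc += a[i]; /* no per-iteration reduction */
    return (int16_t)(acc % MLKEM_Q); /* single, final reduction */
}
```

Tracking such worst-case bounds by hand, function by function, is exactly the bookkeeping that the contracts below automate.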
Tracking buffer bounds and especially arithmetic bounds is time consuming and error prone: for example, weakening the output bounds of a low-level arithmetic function might lead to a rare arithmetic overflow in an entirely different function. Checking this by hand not only requires meticulous documentation and skilled auditors but also slows down development.
In mlkem-native, we use a tool called the C Bounded Model Checker (CBMC) to automatically verify memory safety and type safety at the C level: for every function, we add machine- and human-readable contracts to the source code to specify the bounds of buffers and arithmetic data, and we have CBMC automatically verify that, with respect to those bounds, no buffer overflow or arithmetic overflow can happen.
Let’s look at a simple example of modular reduction:
void mlk_poly_reduce_c(mlk_poly *r)
__contract__(
requires(memory_no_alias(r, sizeof(mlk_poly)))
assigns(memory_slice(r, sizeof(mlk_poly)))
ensures(array_bound(r->coeffs, 0, MLKEM_N, 0, MLKEM_Q)))
{
unsigned i;
for (i = 0; i < MLKEM_N; i++)
__loop__(
invariant(i <= MLKEM_N)
invariant(array_bound(r->coeffs, 0, i, 0, MLKEM_Q)))
{
/* Barrett reduction, giving signed canonical representative */
int16_t t = mlk_barrett_reduce(r->coeffs[i]);
/* Conditional addition to get unsigned canonical representative */
r->coeffs[i] = mlk_scalar_signed_to_unsigned_q(t);
}
mlk_assert_bound(r, MLKEM_N, 0, MLKEM_Q);
}
Focusing on the relevant parts one at a time: First, note the __contract__( ... ). Slightly simplified, the memory_no_alias and memory_slice clauses specify which memory the code may read and write; this relates to memory safety. The ensures(array_bound(...)) clause relates to type safety: it specifies that upon return, every coefficient lies in the interval [0, q). Inside the loop, the __loop__(invariant(...)) annotation specifies how the loop gradually establishes this bound: after the ith iteration, it holds for the first i coefficients. Finally, the implementation is effectively a composition of mlk_barrett_reduce and mlk_scalar_signed_to_unsigned_q. CBMC does not look inside these functions but replaces them with their contracts:
int16_t mlk_barrett_reduce(int16_t a)
__contract__(
ensures(return_value > -MLKEM_Q_HALF && return_value < MLKEM_Q_HALF))
{ ... }
int16_t mlk_scalar_signed_to_unsigned_q(int16_t c)
__contract__(
requires(c > -MLKEM_Q && c < MLKEM_Q)
ensures(return_value >= 0 && return_value < MLKEM_Q)
ensures(return_value == (int32_t)c + (((int32_t)c < 0) * MLKEM_Q)))
{ ... }
You can see that mlk_barrett_reduce first establishes a symmetric output interval (-q/2, q/2), and mlk_scalar_signed_to_unsigned_q then maps it into [0, q). In this instance, it is easy to confirm by eye that the two specifications line up in the desired way, but for more complex examples, this is far less obvious. Either way, CBMC checks it for us automatically.
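For concreteness, the two building blocks might look as follows. This is a sketch modeled on the Kyber reference code, not the mlkem-native source, and barrett_reduce and signed_to_unsigned_q are illustrative names:

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Signed Barrett reduction (sketch): returns a value congruent to a
 * modulo q in the symmetric interval (-q/2, q/2). The constant 20159
 * is round(2^26 / q), so t approximates round(a / q). */
int16_t barrett_reduce(int16_t a)
{
    const int32_t v = 20159;
    int16_t t = (int16_t)((v * (int32_t)a + (1 << 25)) >> 26);
    return (int16_t)(a - t * MLKEM_Q);
}

/* Conditional addition (sketch): maps a representative in (-q, q) to
 * the unsigned canonical range [0, q) by adding q exactly when the
 * input is negative, mirroring the ensures clause above. */
int16_t signed_to_unsigned_q(int16_t c)
{
    return (int16_t)(c + (c < 0) * MLKEM_Q);
}
```

Production code would typically compute the conditional addition in constant time, for example with an arithmetic-shift mask rather than a comparison; the sketch favors clarity over side-channel hygiene.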
Going fast, staying safe
The CBMC proofs described above establish memory safety and type safety for mlkem-native's C code. However, the most performance-critical parts of mlkem-native — the Keccak permutation and number-theoretic transform — are implemented in hand-optimized assembly for AArch64 and x86_64.
To gain assurance for the assembly implementations in mlkem-native while maintaining high performance, we use three components: SLOTHY, an assembly superoptimizer; HOL Light, a theorem prover; and s2n-bignum, a verification infrastructure for assembly built on HOL Light. Together, they enable a workflow where developers write clean, maintainable assembly, while deployed code achieves peak performance with formal guarantees of correctness.
Writing high-performance assembly by hand creates a fundamental tension: clean, auditable code that clearly expresses the computation is slow, while fast code is dense, microarchitecture specific, and difficult to maintain. SLOTHY resolves this tension by automating microarchitecture-specific optimizations: it converts an assembly program into a constraint satisfaction problem, finds optimal instruction schedules and register allocations using a constraint solver, and outputs optimized assembly. Developers write clean code emphasizing the logic of the computation, and SLOTHY generates the fast code.
We prove functional correctness for all AArch64 and x86_64 assembly routines using HOL Light and s2n-bignum. Where SLOTHY is used, the proofs are written to be agnostic to the specific instruction ordering and register allocation; we can therefore reoptimize the code for a specific microarchitecture without having to adjust the proofs. This “post-hoc” verification approach establishes the mathematical correctness of the computation represented by the assembly regardless of how it came about; in particular, SLOTHY is removed from the trusted computing base.
Keeping it honest
Formal verification is never absolute. Every proof links formal objects — specifications and models — to informal, real-world requirements and systems, and these links introduce gaps. Does the formal specification capture what we actually need? Does the formal model faithfully reflect the real system? Is the proof infrastructure itself sound?
Earning and maintaining customer trust requires being transparent about these limits. We therefore developed and published a document titled SOUNDNESS.md, where we map out what is proved in mlkem-native, what is assumed, and where the residual risks lie — from the fidelity of the hardware models used in HOL Light proofs, to the larger trusted computing base of CBMC, to the manual bridge between the two verification stacks. For each gap, we describe mitigations in place and outline future work.
Our goal is not to claim perfection but to earn trust through transparency. We encourage the community to read SOUNDNESS.md critically, challenge our assumptions, and help us close the remaining gaps.
Getting on the road
Mlkem-native is integrated into AWS-LC, Amazon's open-source cryptographic library, which underpins secure communication across AWS services. The integration uses an automated importer that pulls mlkem-native source code directly from the upstream repository, ensuring that AWS-LC stays synchronized with the latest verified implementation.
The integration is designed for minimal friction: mlkem-native's modular architecture allows AWS-LC to import the core ML-KEM logic while providing its own implementations of platform-specific components. For example, AWS-LC maps mlkem-native's cryptographic primitives to its existing FIPS-202 (SHA-3) implementation, uses AWS-LC's random-number generation and memory zeroization functions, and enables FIPS-mode features like pairwise consistency tests when required. Enabling this is a thin compatibility layer that bridges mlkem-native's API to AWS-LC's infrastructure without modifying the verified code.
Critically, the CBMC contracts that prove memory safety and type safety are preserved in the imported source code. While the preprocessor removes them from compiled binaries, they remain in the source as machine-checkable documentation of the code's guarantees — a form of "living proof" that travels with the implementation.
Moreover, because both mlkem-native and AWS-LC are open source and permissively licensed, their benefits extend beyond AWS. Anyone can integrate mlkem-native into their systems and gain the same combination of performance and assurance. The formal verification artifacts — CBMC contracts and HOL Light proofs — are part of the repository, all tools involved are open source, and scripts are provided for setup and proof checking, inviting an independent validation of our security claims.
Impact
The development of mlkem-native demonstrates that the three goals of cryptographic engineering — security, performance, and maintainability — are not in conflict when automated reasoning is applied systematically.
CBMC freed us from manually tracking bounds through complex arithmetic, catching errors that would evade testing and surface randomly in production. The annotations stay in the source code as machine-checkable documentation, making the code simultaneously more maintainable and more secure. HOL Light and s2n-bignum allowed us to deploy aggressive assembly optimizations with mathematical certainty of correctness. SLOTHY let us write clean, auditable code while achieving peak performance for specific microarchitectures. And because the proofs are written to be optimization agnostic, we can retarget the code without redoing the verification.
The result is an implementation that is simultaneously more secure, faster, and easier to maintain than what traditional development could achieve. We didn't compromise between customer security, customer experience, and our ability to innovate: automated reasoning delivered all three.
| Platform | Operation | AWS-LC-FIPS 3.1 | AWS-LC-FIPS 4.0 | Ratio |
| --- | --- | --- | --- | --- |
| c7i | Keygen | 30899 | 65146 | 2.1 |
| c7i | Encaps | 30623 | 61233 | 2.0 |
| c7i | Decaps | 25141 | 51545 | 2.0 |
| c7g | Keygen | 29617 | 71134 | 2.4 |
| c7g | Encaps | 28482 | 66874 | 2.3 |
| c7g | Decaps | 23919 | 64765 | 2.3 |
Performance impact of switching from the ML-KEM reference implementation to mlkem-native in Amazon’s cryptography library AWS-LC. ML-KEM-768 performance is measured on c7i and c7g EC2 instances; the numbers are operations per second (higher is better). The baseline, the AWS-LC-FIPS 3.1 release, contains the ML-KEM C reference implementation; the AWS-LC-FIPS 4.0 release is built with mlkem-native. The c7i instances use Intel(R) Xeon(R) Platinum 8488C processors, and the c7g instances use Graviton3 processors.
Acknowledgments
We thank our colleague John Harrison, senior principal applied scientist at the Automated Reasoning Group, for providing the bulk of the AArch64 assembly proofs in HOL Light and for maintaining the HOL Light interactive theorem prover and the s2n-bignum verification infrastructure. Mlkem-native is a collaborative effort involving not only AWS but many members of the open-source community. Foremost, we thank our co-maintainer Matthias Kannwischer from zeroRISC, who started mlkem-native with us and has since been instrumental in the success of the project.