Verifying and optimizing post-quantum cryptography at Amazon

How automated reasoning reconciles the demands of security, performance, and maintainability.


Today, secure online communication is enabled by public-key cryptography, primarily RSA and elliptic-curve cryptography (ECC), whose security depends on the assumption that certain computational problems are intractable. While these problems are believed to be intractable for conventional computers, they may be tractable for sufficiently large quantum computers. "Store now, decrypt later" attacks intercept encrypted information today and hold onto it until quantum computers can decrypt it; data must therefore be protected against such attacks long before large quantum computers become technically feasible.

Post-quantum cryptography (PQC) is cryptography running on classical computers but secure in the face of quantum computing. In 2024, following an eight-year standardization effort, the US National Institute of Standards and Technology (NIST) published standard FIPS-203, which specifies the Module-Lattice-Based Key Encapsulation Mechanism, or ML-KEM, as a mechanism for key agreement believed to be secure against attacks from quantum computers.

In this post, we describe how Amazon’s Automated Reasoning Group, AWS Cryptography, and the open-source community have collaborated to create an open-source, formally verified, and optimized implementation of ML-KEM, protecting customers against store-now-decrypt-later attacks with the highest assurance and minimal cost.

What is good cryptographic engineering?

In keeping with Amazon’s customer obsession, we prioritize three goals when working on cryptographic solutions:

  • The security of the customer’s data: Cryptography is notoriously hard to implement securely, and any flaw can endanger the customer’s privacy;
  • The customer experience: Cryptography is a computational tax that we minimize to ensure the lowest cost and best experience for our customers;
  • Our ability to maintain the solution going forward: The less time we need to spend on maintenance, the more we can innovate on behalf of our customers.

There are, however, tensions between these goals: Simple code is easiest to maintain and write securely but tends to be slow. Fast code tends to be more difficult to audit and prone to errors.

Automated reasoning allows us to resolve these tensions and provide our customers with cryptographic solutions that are secure, fast, and maintainable, all at once.

Yet another implementation of ML-KEM?

ML-KEM — formerly known as Kyber — is well studied from an implementation perspective: On the one hand, the Kyber reference code provides a clean C implementation that has been scrutinized for years. On the other hand, numerous research papers describe how to optimize ML-KEM for various metrics and platforms.

The challenge faced by AWS Cryptography and the Automated Reasoning Group in 2024 was to combine the simplicity of the reference implementation and the optimization potential revealed in the research works in a single production-ready implementation.

In 2024, AWS Cryptography and the Amazon Automated Reasoning Group took on the challenge of combining the simplicity of the thoroughly scrutinized ML-KEM reference implementation and the optimization potential revealed by research in a single production-ready implementation: mlkem-native.

Around the same time, AWS became a founding member of the Linux Foundation’s Post-Quantum Cryptography Alliance (PQCA), which created the Post-Quantum Cryptography Package (PQCP), “a collection of open-source projects aiming to build high-assurance software implementations of standards-track post-quantum cryptography algorithms”.

Therefore, rather than rolling our own code, members of our team joined the PQCP and soon after launched mlkem-native, a high-assurance, high-performance C implementation of ML-KEM aiming to combine the ML-KEM reference implementation with research on optimization and formal verification.

Coding, fast and slow

Mlkem-native’s modular design combines a frontend covering the high-level logic of ML-KEM with a backend responsible for all performance-critical subroutines. Each subroutine — including the Keccak permutation underlying SHA3 and the number-theoretic transform (NTT) underlying fast polynomial arithmetic — has multiple, highly efficient implementations written natively for specific hardware. In addition to the default C implementation, mlkem-native provides assembly/intrinsics backends for AArch64, x86_64, and RISC-V64.

Mlkem-native’s modular design combines a frontend covering the high-level logic of ML-KEM and a backend consisting of multiple, hardware-specific implementations of performance-critical subroutines.

Importantly for maintainability, the interface between frontend and backend is fixed: a developer adding optimizations for a new target architecture implements select backend functionality against the backend specification, while the frontend stays the same. The development of the backend specification turned out to be less obvious than it sounds, as we explain below.

Knowing your limits

Memory safety

A well-known challenge with the C programming language is the risk of buffer overflows: writing past the designated limits of a memory region can corrupt data structures and, when maliciously exploited, lead to unprivileged code execution. The umbrella term for such issues is memory safety. Memory-safe languages such as Rust can limit the impact of out-of-bounds accesses — by, for example, panicking instead of exhibiting undefined behavior — but they don’t prevent the mistake itself.

Type safety

Another well-known challenge, this time specific to implementing ML-KEM, is the risk of integer overflows, an aspect of type safety. Like RSA and ECC, ML-KEM relies on modular arithmetic, in which the result of each operation is divided by a fixed number (in ML-KEM's case, the prime 3,329, designated MLKEM_Q or just q), and only the remainder is carried forward. The modulo operator is represented by the percent symbol, %.

Logically, if two numbers x and y need adding or multiplying in ML-KEM, one needs to compute (x + y) % q and (x * y) % q; for example, (294 * 38) % q = 11,172 % q = 1,185. Such “eager” arithmetic modulo q, which constantly applies modular reduction to represent data in the “canonical” range {0, 1, 2, … , q-1}, is prohibitively slow.
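As a minimal illustration (not mlkem-native code), eager arithmetic can be written as follows; every operation pays for a modular reduction, which the compiler turns into a relatively expensive multiply-and-shift or division sequence:

```c
#include <stdint.h>

#define MLKEM_Q 3329 /* the ML-KEM prime q */

/* Eager modular addition: inputs in [0, q), reduce immediately. */
static int16_t add_mod_q(int16_t x, int16_t y)
{
  return (int16_t)((x + y) % MLKEM_Q);
}

/* Eager modular multiplication: widen to 32 bits, then reduce. */
static int16_t mul_mod_q(int16_t x, int16_t y)
{
  return (int16_t)(((int32_t)x * y) % MLKEM_Q);
}
```

For example, mul_mod_q(294, 38) computes (294 * 38) % 3,329 = 11,172 % 3,329 = 1,185, matching the worked example above.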

Efficient ML-KEM implementations instead use “lazy” arithmetic modulo q: data is operated on without modular reduction for as long as possible, and only once there is a worst-case risk of overflow does reduction happen. Further, this allows the use of imperfect reduction algorithms such as Montgomery reduction, which are fast but don’t always give fully reduced outputs.
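For illustration, here is a Montgomery-style reduction in the spirit of the ML-KEM reference code (a sketch; the exact routine in mlkem-native differs in detail). It returns a representative of a * 2^-16 mod q that lies in the range (-q, q) rather than the canonical range, which is precisely what makes it fast:

```c
#include <stdint.h>

#define MLKEM_Q 3329

/* Montgomery reduction: given a with |a| < q * 2^15, return t with
 * t = a * 2^-16 (mod q) and -q < t < q. The output is deliberately NOT
 * fully reduced into [0, q); an arithmetic right shift is assumed, as in
 * the reference code. */
static int16_t montgomery_reduce(int32_t a)
{
  const uint32_t qinv = 62209; /* q^-1 mod 2^16 */
  /* Low 16 bits of a * q^-1, reinterpreted as a signed value. */
  int16_t t = (int16_t)(uint16_t)((uint32_t)(uint16_t)a * qinv);
  /* a - t*q is divisible by 2^16; the shift performs the exact division. */
  return (int16_t)((a - (int32_t)t * MLKEM_Q) >> 16);
}
```

For instance, montgomery_reduce(12345) returns -978, which satisfies -978 * 2^16 = 12345 (mod 3,329) while staying within (-q, q).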

In the case of ML-KEM, data modulo q = 3,329 is typically stored in signed 16-bit integers. When dealing with lazy arithmetic across the numerous arithmetic routines in ML-KEM, it is therefore essential to track the worst-case bounds of the data and insert modular reductions where those bounds would exceed the limits of 16-bit integers. Small mistakes in this domain can evade testing — because average bounds tend to be much smaller than worst-case bounds — and then randomly surface in production.

Tracking buffer bounds and especially arithmetic bounds is time consuming and error prone: for example, weakening the output bounds of a low-level arithmetic function might lead to a rare arithmetic overflow in an entirely different function. Checking this by hand not only requires meticulous documentation and skilled auditors but also slows down development.

In mlkem-native, we use a tool called the C Bounded Model Checker (CBMC) to automatically verify memory safety and type safety at the C level: for every function, we add machine- and human-readable contracts to the source code to specify the bounds of buffers and arithmetic data, and we have CBMC automatically verify that, with respect to those bounds, no buffer overflow or arithmetic overflow can happen.

Let’s look at a simple example of modular reduction:
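The function below is a sketch along the lines of mlkem-native's mlk_poly_reduce (simplified; the contract macros are stubbed out here so the example is self-contained, whereas in mlkem-native they expand to CBMC specifications during proof runs, and the helper implementations are paraphrased from the reference code):

```c
#include <stdint.h>

#define MLKEM_Q 3329
#define MLKEM_N 256

/* Stand-ins for the CBMC contract macros: during proofs these expand to
 * machine-checkable specifications; in a normal build they vanish. */
#define __contract__(x)
#define __loop__(x)

typedef struct { int16_t coeffs[MLKEM_N]; } mlk_poly;

/* Barrett reduction: map a to a representative in roughly (-q/2, q/2]. */
static int16_t mlk_barrett_reduce(int16_t a)
{
  const int32_t magic = 20159; /* round(2^26 / q) */
  const int16_t t = (int16_t)(((int32_t)a * magic + (1 << 25)) >> 26);
  return (int16_t)(a - t * MLKEM_Q);
}

/* Conditionally add q to map (-q, q) into [0, q), without branching. */
static uint16_t mlk_scalar_signed_to_unsigned_q(int16_t c)
{
  return (uint16_t)(c + ((c >> 15) & MLKEM_Q));
}

/* Reduce every coefficient of a polynomial to the canonical range. */
void mlk_poly_reduce(mlk_poly *r)
__contract__(
  requires(memory_no_alias(r, sizeof(mlk_poly)))
  assigns(memory_slice(r, sizeof(mlk_poly)))
  ensures(array_bound(r->coeffs, 0, MLKEM_N, 0, MLKEM_Q)))
{
  for (unsigned i = 0; i < MLKEM_N; i++)
  __loop__(
    invariant(i <= MLKEM_N)
    invariant(array_bound(r->coeffs, 0, i, 0, MLKEM_Q)))
  {
    int16_t t = mlk_barrett_reduce(r->coeffs[i]);
    r->coeffs[i] = (int16_t)mlk_scalar_signed_to_unsigned_q(t);
  }
}
```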

Focusing on the relevant parts one at a time: first, note the __contract__( ... ) annotation. Slightly simplified, the memory_no_alias and memory_slice lines specify which memory the code can read and write; this relates to memory safety. The ensures(array_bound(...)) clause relates to type safety: it specifies that the function guarantees that upon return, the data lies within the interval [0, 1, …, q). In the proof, the __loop__(invariant(...)) clause specifies how the loop gradually establishes this bound: after the ith iteration, it holds for the first i coefficients. Finally, the implementation effectively composes mlk_barrett_reduce and mlk_scalar_signed_to_unsigned_q. CBMC does not look inside these but replaces them with their contracts:
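Slightly simplified, those contracts look as follows (a paraphrase: clause names such as return_value follow mlkem-native's CBMC helper macros, and the exact bounds in the sources may differ):

```c
/* Maps a to a representative in the symmetric range (-q/2, q/2]. */
int16_t mlk_barrett_reduce(int16_t a)
__contract__(
  ensures(return_value > -(MLKEM_Q + 1) / 2 && return_value <= MLKEM_Q / 2));

/* Maps a value in (-q, q) to the canonical range [0, q). */
uint16_t mlk_scalar_signed_to_unsigned_q(int16_t c)
__contract__(
  requires(c > -MLKEM_Q && c < MLKEM_Q)
  ensures(return_value < MLKEM_Q));
```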

You can see that mlk_barrett_reduce first establishes a symmetric output interval (-q/2, …, q/2), and then mlk_scalar_signed_to_unsigned_q maps it to [0, 1, …, q). In this instance, it is easy to confirm by eye that the specifications line up in the desired way, but for more complex examples, this is less obvious. Either way, CBMC checks it for us automatically.

Going fast, staying safe

The CBMC proofs described above establish memory safety and type safety for mlkem-native's C code. However, the most performance-critical parts of mlkem-native — the Keccak permutation and number theoretic transform — are implemented in hand-optimized assembly for AArch64 and x86_64.

To gain assurance for the assembly implementations in mlkem-native while maintaining high performance, we use three components: SLOTHY, an assembly superoptimizer; HOL Light, a theorem prover; and s2n-bignum, a verification infrastructure for assembly built on HOL Light. Together, they enable a workflow where developers write clean, maintainable assembly, while deployed code achieves peak performance with formal guarantees of correctness.

Writing high-performance assembly by hand creates a fundamental tension: clean, auditable code that clearly expresses the computation is slow, while fast code is dense, microarchitecture specific, and difficult to maintain. SLOTHY resolves this tension by automating microarchitecture-specific optimizations: it converts an assembly program into a constraint satisfaction problem, finds optimal instruction schedules and register allocations using a constraint solver, and outputs optimized assembly. Developers write clean code emphasizing the logic of the computation, and SLOTHY generates the fast code.

We prove functional correctness for all AArch64 and x86_64 assembly routines using HOL Light and s2n-bignum. Where SLOTHY is used, the proofs are written to be agnostic to the specific instruction ordering and register allocation; we can therefore reoptimize the code for a specific microarchitecture without having to adjust the proofs. This “post-hoc” verification approach establishes the mathematical correctness of the computation represented by the assembly regardless of how it came about; in particular, SLOTHY is removed from the trusted computing base.

Keeping it honest

Formal verification is never absolute. Every proof links formal objects — specifications and models — to informal, real-world requirements and systems, and these links introduce gaps. Does the formal specification capture what we actually need? Does the formal model faithfully reflect the real system? Is the proof infrastructure itself sound?

Earning and maintaining customer trust requires being transparent about these limits. We therefore developed and published a document titled SOUNDNESS.md, where we map out what is proved in mlkem-native, what is assumed, and where the residual risks lie — from the fidelity of the hardware models used in HOL Light proofs, to the larger trusted computing base of CBMC, to the manual bridge between the two verification stacks. For each gap, we describe mitigations in place and outline future work.

Our goal is not to claim perfection but to earn trust through transparency. We encourage the community to read SOUNDNESS.md critically, challenge our assumptions, and help us close the remaining gaps.

Getting on the road

Mlkem-native is integrated into AWS-LC, Amazon's open-source cryptographic library, which underpins secure communication across AWS services. The integration uses an automated importer that pulls mlkem-native source code directly from the upstream repository, ensuring that AWS-LC stays synchronized with the latest verified implementation.

The integration is designed for minimal friction: mlkem-native's modular architecture allows AWS-LC to import the core ML-KEM logic while providing its own implementations of platform-specific components. For example, AWS-LC maps mlkem-native's cryptographic primitives to its existing FIPS-202 (SHA-3) implementation, uses AWS-LC's random-number generation and memory zeroization functions, and enables FIPS-mode features like pairwise consistency tests when required. Enabling this is a thin compatibility layer that bridges mlkem-native's API to AWS-LC's infrastructure without modifying the verified code.

Critically, the CBMC contracts that prove memory safety and type safety are preserved in the imported source code. While the preprocessor removes them from compiled binaries, they remain in the source as machine-checkable documentation of the code's guarantees — a form of "living proof" that travels with the implementation.
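The mechanism can be sketched as follows (simplified; mlkem-native's actual macro definitions differ in detail): the contract macros expand to CBMC's specification primitives during proof runs and to nothing during a normal build, so they add zero runtime cost.

```c
#include <stdint.h>

/* Simplified sketch: during CBMC proof runs, contracts become real
 * specifications; in a normal build, the preprocessor removes them. */
#ifdef CBMC
#define __contract__(x) x
#define __loop__(x) x
#else
#define __contract__(x) /* nothing */
#define __loop__(x)     /* nothing */
#endif

/* Example: an annotated function compiles to plain C when CBMC is absent.
 * (The function and its contract are illustrative, not from mlkem-native.) */
int16_t clamp_nonneg(int16_t a)
__contract__(ensures(return_value >= 0))
{
  return a < 0 ? 0 : a;
}
```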

Moreover, because both mlkem-native and AWS-LC are open source and permissively licensed, their benefits extend beyond AWS. Anyone can integrate mlkem-native into their systems and gain the same combination of performance and assurance. The formal verification artifacts — CBMC contracts and HOL Light proofs — are part of the repository, all tools involved are open source, and scripts are provided for setup and proof checking, inviting an independent validation of our security claims.

Impact

The development of mlkem-native demonstrates that the three goals of cryptographic engineering — security, performance, and maintainability — are not in conflict when automated reasoning is applied systematically.

CBMC freed us from manually tracking bounds through complex arithmetic, catching errors that would evade testing and surface randomly in production. The annotations stay in the source code as machine-checkable documentation, making the code simultaneously more maintainable and more secure. HOL Light and s2n-bignum allowed us to deploy aggressive assembly optimizations with mathematical certainty of correctness. SLOTHY let us write clean, auditable code while achieving peak performance for specific microarchitectures. And because the proofs are written to be optimization agnostic, we can retarget the code without redoing the verification.

The result is an implementation that is simultaneously more secure, faster, and easier to maintain than what traditional development could achieve. We didn't compromise between customer security, customer experience, and our ability to innovate: automated reasoning delivered all three.

Platform | Operation | AWS-LC-FIPS 3.1 | AWS-LC-FIPS 4.0 | Ratio
c7i      | Keygen    | 30,899          | 65,146          | 2.1
c7i      | Encaps    | 30,623          | 61,233          | 2.0
c7i      | Decaps    | 25,141          | 51,545          | 2.0
c7g      | Keygen    | 29,617          | 71,134          | 2.4
c7g      | Encaps    | 28,482          | 66,874          | 2.3
c7g      | Decaps    | 23,919          | 64,765          | 2.3

Performance impact of switching from the ML-KEM reference implementation to mlkem-native in Amazon’s cryptography library AWS-LC. ML-KEM-768 performance is measured on c7i (Intel Xeon Platinum 8488C) and c7g (Graviton 3) EC2 instances. The numbers represent operations per second (higher is better). The baseline AWS-LC-FIPS 3.1 release contains the ML-KEM C reference implementation; the AWS-LC-FIPS 4.0 release is built with mlkem-native.

Acknowledgments

We thank our colleague John Harrison, senior principal applied scientist at the Automated Reasoning Group, for providing the bulk of the AArch64 assembly proofs in HOL Light and for maintaining the HOL Light interactive theorem prover and the s2n-bignum verification infrastructure. Mlkem-native is a collaborative effort involving not only AWS but many members of the open-source community. Foremost, we thank our co-maintainer Matthias Kannwischer from zeroRISC, who started mlkem-native with us and has since been instrumental in the success of the project.

A day in the life - Work on design and implementation of methods for Visual SLAM, navigation and spatial reasoning - Leverage simulation and real-world data collection to create large datasets for model development - Develop a hierarchical system that combines low-level control with high-level planning - Collaborate effectively with multi-disciplinary teams to co-design hardware and algorithms for dexterous manipulation
US, NY, New York
We are seeking an Applied Scientist to lead the development of evaluation frameworks and data collection protocols for robotic capabilities. In this role, you will focus on designing how we measure, stress-test, and improve robot behavior across a wide range of real-world tasks. Your work will play a critical role in shaping how policies are validated and how high-quality datasets are generated to accelerate system performance. You will operate at the intersection of robotics, machine learning, and human-in-the-loop systems, building the infrastructure and methodologies that connect teleoperation, evaluation, and learning. This includes developing evaluation policies, defining task structures, and contributing to operator-facing interfaces that enable scalable and reliable data collection. The ideal candidate is highly experimental, systems-oriented, and comfortable working across software, robotics, and data pipelines, with a strong focus on turning ambiguous capability goals into measurable and actionable evaluation systems. 
Key job responsibilities - Design and implement evaluation frameworks to measure robot capabilities across structured tasks, edge cases, and real-world scenarios - Develop task definitions, success criteria, and benchmarking methodologies that enable consistent and reproducible evaluation of policies - Create and refine data collection protocols that generate high-quality, task-relevant datasets aligned with model development needs - Build and iterate on teleoperation workflows and operator interfaces to support efficient, reliable, and scalable data collection - Analyze evaluation results and collected data to identify performance gaps, failure modes, and opportunities for targeted data collection - Collaborate with engineering teams to integrate evaluation tooling, logging systems, and data pipelines into the broader robotics stack - Stay current with advances in robotics, evaluation methodologies, and human-in-the-loop learning to continuously improve internal approaches - Lead technical projects from conception through production deployment - Mentor junior scientists and engineers
US, NY, New York
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through industry leading generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. The Global Optimization (GO) team within Sponsored Products and Brands at Amazon Ads is re-imagining the advertising stack from the ground up across 20+ marketplaces. We are seeking an experienced Senior Data Scientist to join our team. You will develop scalable analytical approaches to evaluate marketplace performance across the entire Ads stack to uncover regional and marketplace-specific insights, design and run experiments, and shape our development roadmap. We operate as a closely integrated team of Data Scientists, Applied Scientists, and Engineers to translate data-driven insights into measurable business impact. If you're energized by solving complex challenges at international scale and pushing the boundaries of what's possible with GenAI, join us in shaping the future of global advertising at Amazon. Key job responsibilities As a Data Scientist on this team, you will: - Write code to obtain, manipulate, and analyze data to derive business insights. - Apply statistical and ML knowledge to specific business problems and data. 
- Analyze historical data to identify trends and support optimal decision making. - Formalize assumptions about how our systems are expected to work and develop methods to systematically identify high ROI improvements. About the team SPB Global Optimization (GO) team was created to accelerate growth in non-US markets. We are driving business growth across all marketplaces by creating delightful experiences for shoppers and advertisers alike. We are working backwards from customers to re-imagine Amazon's advertising stack from the ground up, leveraging GenAI to deliver solutions that scale across 20+ marketplaces from day one.
US, WA, Seattle
Unleash Your Potential as an AI Trailblazer At Amazon, we're on a mission to revolutionize the way people discover and access information. Our Applied Science team is at the forefront of this endeavor, pushing the boundaries of recommender systems and information retrieval. We're seeking brilliant minds to join us as interns and contribute to the development of cutting-edge AI solutions that will shape the future of personalized experiences. As an Applied Science Intern focused on Recommender Systems and Information Retrieval in Machine Learning, you'll have the opportunity to work alongside renowned scientists and engineers, tackling complex challenges in areas such as deep learning, natural language processing, and large-scale distributed systems. Your contributions will directly impact the products and services used by millions of Amazon customers worldwide. Imagine a role where you immerse yourself in groundbreaking research, exploring novel machine learning models for product recommendations, personalized search, and information retrieval tasks. You'll leverage natural language processing and information retrieval techniques to unlock insights from vast repositories of unstructured data, fueling the next generation of AI applications. Throughout your journey, you'll have access to unparalleled resources, including state-of-the-art computing infrastructure, cutting-edge research papers, and mentorship from industry luminaries. This immersive experience will not only sharpen your technical skills but also cultivate your ability to think critically, communicate effectively, and thrive in a fast-paced, innovative environment where bold ideas are celebrated. Join us at the forefront of applied science, where your contributions will shape the future of AI and propel humanity forward. Seize this extraordinary opportunity to learn, grow, and leave an indelible mark on the world of technology. 
Must be eligible and available for a full-time (40h / week) 12 week internship between May 2026 and September 2026 Amazon has positions available for Machine Learning Applied Science Internships in, but not limited to Arlington, VA; Bellevue, WA; Boston, MA; New York, NY; Palo Alto, CA; San Diego, CA; Santa Clara, CA; Seattle, WA. Key job responsibilities We are particularly interested in candidates with expertise in: Knowledge Graphs and Extraction, Programming/Scripting Languages, Time Series, Machine Learning, Natural Language Processing, Deep Learning,Neural Networks/GNNs, Large Language Models, Data Structures and Algorithms, Graph Modeling, Collaborative Filtering, Learning to Rank, Recommender Systems In this role, you'll collaborate with brilliant minds to develop innovative frameworks and tools that streamline the lifecycle of machine learning assets, from data to deployed models in areas at the intersection of Knowledge Management within Machine Learning. You will conduct groundbreaking research into emerging best practices and innovations in the field of ML operations, knowledge engineering, and information management, proposing novel approaches that could further enhance Amazon's machine learning capabilities. The ideal candidate should possess the ability to work collaboratively with diverse groups and cross-functional teams to solve complex business problems. A successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail and the ability to thrive in a fast-paced, ever-changing environment. 
A day in the life - Design, implement, and experimentally evaluate new recommendation and search algorithms using large-scale datasets - Develop scalable data processing pipelines to ingest, clean, and featurize diverse data sources for model training - Conduct research into the latest advancements in recommender systems, information retrieval, and related machine learning domains - Collaborate with cross-functional teams to integrate your innovative solutions into production systems, impacting millions of Amazon customers worldwide - Communicate your findings through captivating presentations, technical documentation, and potential publications, sharing your knowledge with the global AI community