Formal verification makes RSA faster — and faster to deploy

Optimizations for Amazon's Graviton2 chip boost efficiency, and formal verification shortens development time.

Most secure transactions online are protected by public-key encryption schemes like RSA, whose security depends on the difficulty of factoring large numbers. Public-key encryption improves security because it enables the secure exchange of symmetric session keys without a prior shared secret. But because it depends on operations like modular exponentiation of large integers, it introduces significant computational overhead.
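To see where that overhead comes from, here is a minimal sketch (not AWS-LC's implementation) of the square-and-multiply algorithm at the heart of RSA: a 2,048-bit exponentiation performs roughly 2,048 squarings plus multiplications, each on 2,048-bit integers.

```python
def mod_exp(base: int, exp: int, mod: int) -> int:
    """Right-to-left square-and-multiply modular exponentiation."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # multiply step for each set bit
            result = (result * base) % mod
        base = (base * base) % mod       # squaring step for every bit
        exp >>= 1
    return result

# Small demo values; real RSA exponents and moduli are thousands of bits.
assert mod_exp(2, 10, 1000) == 24        # 1024 mod 1000
```

Each iteration multiplies full-width big integers, which is why the cost of big-number multiplication dominates RSA signing time.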

Researchers and engineers have introduced all kinds of optimizations to make public-key encryption more efficient, but the resulting complexity makes it difficult to verify that the encryption algorithms are behaving properly. And a bug in an encryption algorithm can be disastrous.

This post explains how Amazon’s Automated Reasoning group improved the throughput of RSA signatures on Amazon’s Graviton2 chip by 33% to 94%, depending on the key size, while also proving the functional correctness of our optimizations using formal verification.

An AWS Graviton chip.

Graviton2 is a server-class CPU developed by Amazon Annapurna Labs, based on Arm Neoverse N1 cores. To improve the throughput of RSA signatures on Graviton2, we combined various techniques for fast modular arithmetic with assembly-level optimizations specific to Graviton2. To show that the optimized code is functionally correct, we formally verified it using the HOL Light interactive theorem prover, which was developed by a member of our team (John Harrison).

Our code is written in a constant-time style (for example, no secret-dependent branches or memory access patterns) to avoid side-channel attacks, which can learn secret information from operational statistics like function execution time. The optimized functions and their proofs are included in Amazon Web Services’ s2n-bignum library of formally verified big-number operations. The functions have also been adopted by AWS-LC, the cryptographic library maintained by AWS, and by its bindings, the Amazon Corretto Crypto Provider (ACCP) and AWS Libcrypto for Rust (AWS-LC-RS).

Key size (bits) | Baseline throughput (ops/sec) | Improved throughput (ops/sec) | Speedup (%)
2,048 | 299 | 541 | 81.0%
3,072 | 95 | 127 | 33.5%
4,096 | 42 | 81 | 94.2%

Improvements in the throughput of RSA signatures in AWS-LC on Graviton2.

Step 1. Making RSA fast on Graviton2

Optimizing the execution of RSA algorithms on Graviton2 requires the careful placement and use of multiplication instructions. On 64-bit Arm CPUs, the multiplication of two 64-bit numbers, with a product of up to 128 bits (conventionally designated 64×64→128), is accomplished by two instructions: MUL, producing the lower 64 bits, and UMULH, producing the upper 64 bits. On Graviton2, MUL has a latency of four cycles and stalls the multiplier pipeline for two cycles after issue, while UMULH has a latency of five cycles and stalls the multiplier pipeline for three cycles after issue. Since Neoverse N1 has a single multiplier pipeline but three addition pipelines, multiplication throughput is around one-tenth the throughput of 64-bit addition.

To improve throughput, we (1) applied a different multiplication algorithm, trading multiplication for addition instructions, and (2) used single-instruction/multiple-data (SIMD) instructions to offload a portion of multiplication work to the vector units of the CPU.

Algorithmic optimization

For fast and secure modular arithmetic, Montgomery modular multiplication is a widely used technique. Montgomery multiplication represents numbers in a special form called Montgomery form (a number a is stored as aR mod N, for a fixed constant R that is typically a power of two), and when a sequence of modular operations needs to be executed, as is the case with the RSA algorithm, keeping intermediate products in Montgomery form makes computation more efficient.

We implement Montgomery multiplication as a big-integer multiplication followed by a separate Montgomery reduction, which is one of the two standard implementation strategies (the alternative interleaves multiplication and reduction).
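The structure of that two-step approach can be sketched in Python (a model of the arithmetic only; the production code is hand-written Arm assembly, and the parameters below are illustrative):

```python
def montgomery_setup(n: int, k: int):
    """Precompute R = 2^k and n' = -n^(-1) mod R for an odd modulus n."""
    r = 1 << k
    n_prime = (-pow(n, -1, r)) % r
    return r, n_prime

def montgomery_reduce(t: int, n: int, r: int, n_prime: int, k: int) -> int:
    """REDC: compute t * R^(-1) mod n, assuming 0 <= t < r * n."""
    m = ((t & (r - 1)) * n_prime) & (r - 1)  # m = (t mod R) * n' mod R
    u = (t + m * n) >> k                     # t + m*n is divisible by R
    return u - n if u >= n else u

def montgomery_mul(a_bar: int, b_bar: int, n: int, r: int, n_prime: int, k: int) -> int:
    """Step 1: full big-integer multiply. Step 2: separate reduction."""
    return montgomery_reduce(a_bar * b_bar, n, r, n_prime, k)

# Demo with toy parameters: compute 5 * 9 mod 97 via Montgomery form.
n, k = 97, 8                        # R = 2^8 = 256 > n
r, n_prime = montgomery_setup(n, k)
a_bar, b_bar = (5 * r) % n, (9 * r) % n      # convert to Montgomery form
c_bar = montgomery_mul(a_bar, b_bar, n, r, n_prime, k)
c = montgomery_reduce(c_bar, n, r, n_prime, k)  # convert back
assert c == (5 * 9) % n
```

Keeping the multiplication separate from the reduction is what lets the implementation swap in a faster multiplication algorithm, as described next.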


On Graviton2, the benefit of this approach is that we can use the well-known Karatsuba algorithm to trade costly multiplications for addition operations. The Karatsuba algorithm decomposes a multiplication into three smaller multiplications, together with some register shifts. It can be performed recursively, and for large numbers, it’s more efficient than the standard multiplication algorithm.

We used Karatsuba’s algorithm for power-of-two bit sizes, such as 2,048 bits and 4,096 bits. For other sizes (e.g., 3,072 bits), we still use quadratic multiplication. Karatsuba multiplication can be further optimized when the two operands are equal, so we also wrote functions specialized for squaring.
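The recursive decomposition can be illustrated with a short Python sketch (a model of the algorithm, not the assembly implementation): one full-width product is replaced by three half-width products plus additions and shifts.

```python
def karatsuba(x: int, y: int, bits: int) -> int:
    """Multiply two `bits`-bit integers with Karatsuba recursion."""
    if bits <= 64:                       # base case: one hardware multiply
        return x * y
    half = bits // 2
    mask = (1 << half) - 1
    x_lo, x_hi = x & mask, x >> half
    y_lo, y_hi = y & mask, y >> half
    z0 = karatsuba(x_lo, y_lo, half)                   # low product
    z2 = karatsuba(x_hi, y_hi, bits - half)            # high product
    # One extra multiply of the sums yields the middle term:
    z1 = karatsuba(x_lo + x_hi, y_lo + y_hi, half + 1) - z0 - z2
    return z0 + (z1 << half) + (z2 << (2 * half))

x = (1 << 2047) + 12345
y = (1 << 2046) + 67890
assert karatsuba(x, y, 2048) == x * y
```

On Graviton2 the trade is favorable precisely because the additions land on the three addition pipelines while the single multiplier pipeline handles fewer products.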

With these optimizations we achieved a 31–49% speedup in 2,048- and 4,096-bit RSA signatures compared with our original code.

Microarchitectural optimization

Many Arm CPUs implement the Neon single-instruction/multiple-data (SIMD) architecture extension. It adds a file of 128-bit registers, which are viewed as vectors of various sizes (8/16/32/64 bit), and SIMD instructions that can operate on some or all of those vectors in parallel. Furthermore, SIMD instructions use different pipelines than scalar instructions, so both types of instructions can be executed in parallel.

Vectorization strategy. Vectorization is a process that replaces sequential executions of the same operation with a single operation over multiple values; it usually increases efficiency. Using SIMD instructions, we vectorized scalar 64-bit multiplications.

For big-integer multiplication, vectorized 64-bit multiply-low code nicely overlapped with scalar 64-bit multiply-high instructions (UMULH). For squaring, vectorizing two 64×64→128-bit squaring operations worked well. For the multiplications occurring in Montgomery reduction, vectorizing 64×64→128-bit multiplications and 64×64→64 multiply-lows worked. To choose which scalar multiplications to vectorize, we wrote a script that enumerated different vectorizations of the code and timed their execution. For short code fragments, exhaustive enumeration was possible, but for larger code fragments, we had to rely on experience. The overall solution was chosen only after extensive experiments with other alternatives, such as those described by Seo et al. at ICISC’14.


Although the scalar and SIMD units are able to operate in parallel, it is sometimes necessary to move inputs and intermediate results between integer and SIMD registers, and this brings significant complications. The FMOV instruction copies data from a 64-bit scalar register to a SIMD register, but it uses the same pipeline as the scalar multiplier, so its use would reduce scalar-multiplier throughput.

The alternative of loading into a vector register first and then using MOV to copy it to a scalar register has lower latency, but it occupies the SIMD pipeline and hence lowers the throughput of SIMD arithmetic operations. Somewhat counterintuitively, the best solution was to issue two separate memory loads, one into the integer register and one into the SIMD register, with care for their relative placement. We did still use MOV instructions to copy certain SIMD results already residing in SIMD registers into integer registers, because that was faster than a round trip through store-load instructions.

Fast constant-time table lookup code. Another independent improvement was the reimplementation of a vectorized constant-time lookup table for a fast modular-exponentiation algorithm. Combining this with our earlier optimization further raises our speedup to 80–94% when compared to the throughput of 2,048-/4,096-bit RSA signatures from our initial code, as well as a 33% speedup for 3,072-bit signatures.
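The idea behind a constant-time table lookup can be modeled in a few lines of Python (the real code uses Neon SIMD instructions; this sketch only illustrates the masking pattern, and Python itself offers no timing guarantees): every entry is read regardless of the secret index, and an arithmetic mask selects the wanted one, so the memory access pattern is independent of the secret.

```python
MASK64 = (1 << 64) - 1

def ct_lookup(table: list[int], secret_index: int) -> int:
    """Branchless lookup of table[secret_index] for 64-bit entries."""
    result = 0
    for i, entry in enumerate(table):
        diff = i ^ secret_index
        # mask is all-ones exactly when diff == 0, computed without a
        # secret-dependent branch: (0 - 1) >> 64 is -1, else >= 0 -> 0.
        mask = ((diff - 1) >> 64) & MASK64
        result |= entry & mask
    return result

table = [11, 22, 33, 44]
assert ct_lookup(table, 2) == 33
```

A naive `table[secret_index]` would touch a secret-dependent address, which cache-timing attacks can observe; scanning the whole table avoids that leak at a fixed, predictable cost.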

Instruction scheduling. Even though Graviton2 is an out-of-order CPU, carefully scheduling instructions is important for performance, due to the finite capacity of components like reorder buffers and issue queues. The implementations discussed here were obtained by manual instruction scheduling, which led to good results but was time consuming.

We also investigated automating the process using the SLOTHY superoptimizer, which is based on constraint solving and a (simplified) microarchitecture model. With additional tweaks to Montgomery reduction to precalculate some numbers used in Karatsuba, SLOTHY optimization enabled a 95–120% improvement on 2,048-/4,096-bit throughputs and 46% on 3,072-bit! However, this method is not yet incorporated into AWS-LC since verifying the automated scheduling proved to be challenging. Studying the potential for automatically proving correctness of scheduling optimizations is a work in progress.

Step 2. Formally verifying the code

To deploy the optimized code in production we need to ensure that it works correctly. Random testing is a cheap approach for quickly checking simple and known cases, but to deliver a higher level of assurance, we rely on formal verification. In this section we explain how we apply formal verification to prove functional correctness of cryptographic primitives.

Introduction to s2n-bignum

AWS’s s2n-bignum is both (1) a framework for formally verifying assembly code in x86-64 and Arm and (2) a collection of fast assembly functions for cryptography, verified using the framework itself.


Specification in s2n-bignum. Every assembly function in s2n-bignum — including the new assembly functions used in RSA — has a specification stating its functional correctness. A specification states that for any program state satisfying some precondition, the output state of the program must satisfy some postcondition. For example, bignum_mul_4_8(uint64_t *z, uint64_t *x, uint64_t *y) is intended to multiply two 256-bit (four-word) numbers producing a 512-bit (eight-word) result. Its (abbreviated) precondition over an input state s is

  aligned_bytes_loaded s (word pc) bignum_mul_4_8_mc
∧ read PC s = word pc
∧ C_ARGUMENTS [z, x, y] s
∧ bignum_from_memory (x,4) s = a
∧ bignum_from_memory (y,4) s = b

This means that the machine code of bignum_mul_4_8 is loaded at the address currently contained in the program counter PC (aligned_bytes_loaded), symbolic values are assigned to the function arguments according to C’s application binary interface (C_ARGUMENTS ...), and big integers logically represented by the symbols a and b are stored in the memory location pointed to by x and y for four words (bignum_from_memory ...).

The (abbreviated) postcondition over an output state s is

bignum_from_memory (z,8) s = a * b

This means that the multiplied result a * b is stored in the eight-word buffer starting at location z.

One more component is a relation between the input and output states that must be satisfied:

(MAYCHANGE_REGS_AND_FLAGS_PERMITTED_BY_ABI;
MAYCHANGE [memory :> bytes(z,8 * 8)]) (s_in,s_out)

This means that executing the code may change registers/flags permitted by the application binary interface (ABI) and the eight-word buffer starting at z, but all other state components must remain unchanged.
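To make the specification concrete, here is a small Python model (our own illustration, not part of s2n-bignum) of how `bignum_from_memory` interprets memory: a big integer is a little-endian array of 64-bit words, and the `bignum_mul_4_8` postcondition says the eight words at `z` encode the product of the four-word values at `x` and `y`.

```python
def bignum_from_words(words: list[int]) -> int:
    """Value denoted by a little-endian array of 64-bit limbs."""
    return sum(w << (64 * i) for i, w in enumerate(words))

def words_from_bignum(v: int, n: int) -> list[int]:
    """Split v into n little-endian 64-bit limbs."""
    return [(v >> (64 * i)) & ((1 << 64) - 1) for i in range(n)]

# Model of the pre/postcondition pair: a and b are the four-word inputs,
# and the eight-word buffer at z must hold exactly a * b.
x = words_from_bignum(0x1234 << 200, 4)
y = words_from_bignum(0x5678 << 190, 4)
a, b = bignum_from_words(x), bignum_from_words(y)
z = words_from_bignum(a * b, 8)
assert bignum_from_words(z) == a * b
```

The formal proof establishes this equality for all possible inputs, not just sampled ones, which is what distinguishes verification from testing.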

Verifying assembly using HOL Light. To prove that the implementation is correct with respect to the specification, we use the HOL Light interactive theorem prover. In contrast to “black-box” automated theorem provers, tools like HOL Light emphasize a balance between automating routine proof steps and allowing explicit, and programmable, user guidance. When a proof exists on paper or inside someone’s head, a proficient user can effectively rewrite the proof in an interactive theorem prover. S2n-bignum uses a combination of two strategies to verify a program:


Symbolic execution. Given a representation of the input program state using symbolic variables in place of specific values, symbolic execution infers a symbolic output state at the end of some code snippet, in effect doing a more rigorous and generalized form of program execution. While this still leaves the postcondition to be proved, it strips away artifacts of program execution and leaves a purely mathematical problem.

Intermediate annotations in the style of Floyd-Hoare logic. Each intermediate assertion serves as a postcondition for the preceding code and a precondition for the subsequent code. The assertion need contain only the details that are necessary to prove its corresponding postcondition. This abstraction helps make symbolic simulation more tractable, in terms of both automated-reasoning capacity and the ease with which humans can understand the result.

We assume that the Arm hardware behaves in conformance with the model of s2n-bignum, but the model was developed with care, and it was validated by extensively cross-checking its interpretations against hardware.

Future formal-verification improvements. The formal verification for s2n-bignum does not yet cover nonfunctional properties of the implementation, including whether it may leak information through side channels such as the running time of the code. Rather, we handle this through a disciplined general style of implementation: never using instructions having variable timing, such as division, and no conditional branching/memory access patterns that depend on secret data. Also, we sanity-check some of these properties using simple static checks, and we execute the code on inputs with widely differing bit densities to analyze the corresponding run times and investigate any unexpected correlations.

These disciplines and sanity checks are standard practice with us, and we apply them to all the new implementations described here. In ongoing work, we are exploring the possibility of formally verifying the absence of information leakage.

IN, KA, Bengaluru
RBS (Retail Business Services) Tech team works towards enhancing the customer experience (CX) and their trust in product data by providing technologies to find and fix Amazon CX defects at scale. Our platforms help in improving the CX in all phases of customer journey, including selection, discoverability & fulfilment, buying experience and post-buying experience (product quality and customer returns). The team also develops GenAI platforms for automation of Amazon Stores Operations. As a Sciences team in RBS Tech, we focus on foundational ML research and develop scalable state-of-the-art ML solutions to solve the problems covering customer experience (CX) and Selling partner experience (SPX). We work to solve problems related to multi-modal understanding (text and images), task automation through multi-modal LLM Agents, supervised and unsupervised techniques, multi-task learning, multi-label classification, aspect and topic extraction for Customer Anecdote Mining, image and text similarity and retrieval using NLP and Computer Vision for product groupings and identifying duplicate listings in product search results. Key job responsibilities As an Applied Scientist, you will be responsible to design and deploy scalable GenAI, NLP and Computer Vision solutions that will impact the content visible to millions of customer and solve key customer experience issues. You will develop novel LLM, deep learning and statistical techniques for task automation, text processing, image processing, pattern recognition, and anomaly detection problems. You will define the research and experiments strategy with an iterative execution approach to develop AI/ML models and progressively improve the results over time. You will partner with business and engineering teams to identify and solve large and significantly complex problems that require scientific innovation. You will independently file for patents and/or publish research work where opportunities arise. 
The RBS org deals with problems that are directly related to the selling partners and end customers and the ML team drives resolution to organization level problems. Therefore, the Applied Scientist role will impact the large product strategy, identifies new business opportunities and provides strategic direction which is very exciting.
IN, KA, Bengaluru
Selection Monitoring team is responsible for making the biggest catalog on the planet even bigger. In order to drive expansion of the Amazon catalog, we develop advanced ML/AI technologies to process billions of products and algorithmically find products not already sold on Amazon. We work with structured, semi-structured and Visually Rich Documents using deep learning, NLP and image processing. The role demands a high-performing and flexible candidate who can take responsibility for success of the system and drive solutions from research, prototype, design, coding and deployment. We are looking for Applied Scientists to tackle challenging problems in the areas of Information Extraction, Efficient crawling at internet scale, developing ML models for website comprehension and agents to take multi-step decisions. You should have depth and breadth of knowledge in text mining, information extraction from Visually Rich Documents, semi structured data (HTML) and advanced machine learning. You should also have programming and design skills to manipulate Semi-Structured and unstructured data and systems that work at internet scale. You will encounter many challenges, including: - Scale (build models to handle billions of pages), - Accuracy (requirements for precision and recall) - Speed (generate predictions for millions of new or changed pages with low latency) - Diversity (models need to work across different languages, market places and data sources) You will help us to - Build a scalable system which can algorithmically extract information from world wide web. - Intelligently cluster web pages, segment and classify regions, extract relevant information and structure the data available on semi-structured web. - Build systems that will use existing Knowledge Base to perform open information extraction at scale from visually rich documents. 
Key job responsibilities - Use AI, NLP and advances in LLMs/SLMs and agentic systems to create scalable solutions for business problems. - Efficiently Crawl web, Automate extraction of relevant information from large amounts of Visually Rich Documents and optimize key processes. - Design, develop, evaluate and deploy, innovative and highly scalable ML models, esp. leveraging latest advances in RL-based fine tuning methods like DPO, GRPO etc. - Work closely with software engineering teams to drive real-time model implementations. - Establish scalable, efficient, automated processes for large scale model development, model validation and model maintenance. - Lead projects and mentor other scientists, engineers in the use of ML techniques. - Publish innovation in research forums.
US, WA, Seattle
Unlock the Future with Amazon Science! Amazon is seeking boundary-pushing graduate student scientists who can turn revolutionary theory into awe-inspiring reality for internships in 2026. Join our team of visionary scientists and embark on a journey to harnessing the power of cutting-edge techniques in deep learning and revolutionize the fields of artificial intelligence, data science, speech recognition, text understanding, robotics and more. At Amazon, we don't just talk about innovation – we live and breathe it. You'll conducting research into the theory and application of deep learning. You will work on some of the most difficult problems in the industry with some of the best product managers, scientists, and software engineers in the industry. You will propose and deploy solutions that will likely draw from a range of scientific areas. Throughout your journey, you'll have access to unparalleled resources, including state-of-the-art computing infrastructure, cutting-edge research papers, and mentorship from industry luminaries. This immersive experience will not only sharpen your technical skills but also cultivate your ability to think critically, communicate effectively, and thrive in a fast-paced, innovative environment where bold ideas are celebrated. Join us at the forefront of applied science, where your contributions will shape the future of AI and propel humanity forward. Seize this extraordinary opportunity to learn, grow, and leave an indelible mark on the world of technology. Amazon has positions available for Applied Science Internships in, but not limited to Arlington, VA; Bellevue, WA; Boston, MA; New York, NY; Palo Alto, CA; San Diego, CA; Santa Clara, CA; Seattle, WA. 
Key job responsibilities We are particularly interested in candidates with expertise in: Machine Learning, Deep Learning, Robotics, LLMs, NLP/NLU, Gen AI, Transformers, Fine-Tuning, Recommendation Systems, Programming/Scripting Languages, Reinforcement Learning, Causal Inference and more. In this role, you will work alongside global experts to develop and implement novel, scalable algorithms and modeling techniques that advance the state-of-the-art in areas at the intersection of Reinforcement Learning and Optimization within Machine Learning. You will tackle challenging, groundbreaking research problems on production-scale data, with a focus on developing novel RL algorithms and applying them to complex, real-world challenges. The ideal candidate should possess the ability to work collaboratively with diverse groups and cross-functional teams to solve complex business problems. A successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail and the ability to thrive in a fast-paced, ever-changing environment. A day in the life - Develop scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation. - Design, development and evaluation of highly innovative ML models for solving complex business problems. - Research and apply the latest ML techniques and best practices from both academia and industry. - Think about customers and how to improve the customer delivery experience. - Use and analytical techniques to create scalable solutions for business problems.