Promptimus: Improving already good LLM prompts with zero manual engineering

By focusing on specific failure points and suggesting targeted solutions, a new automated prompt-engineering framework improves prompt performance without compromising existing functionality.


Large language models (LLMs) have become integral to enterprise applications across industries. Under the hood, customers’ inputs to the models are usually augmented with prompts that encode intricate business logic, regulatory requirements, and domain expertise: a healthcare system must use language compliant with the Health Insurance Portability and Accountability Act, for instance, and a financial trading system must follow risk tolerance rules.

These prompts are typically crafted by domain experts over weeks or months. Yet business demands continue to push for further performance gains. The challenge, therefore, is not engineering prompts from scratch but rather elevating already strong performance by discovering nuanced, task-specific refinements — without compromising domain requirements.

In this post, we present Promptimus, a method for automatically optimizing well-developed prompts that has several advantages over its predecessors:

  • It's model agnostic: It takes a prompt already optimized for a source model, rapidly reoptimizes it for a target model, and compares the optimized prompts across models.
  • It's driven by performance criteria: It takes the existing prompt template, task-specific data samples, and user-defined performance metrics and generates targeted improvement strategies, iterating repeatedly to achieve domain-specific optimization objectives.
  • It focuses on exploitation: It uses a metric-analyzer AI agent to identify failure points and a debugging helper agent to identify root causes, and it surgically refines prompts relative to failures (rather than along random dimensions) for targeted performance improvement.
  • It’s fully automated: It analyzes user-defined metrics and uses a code sanitization AI agent to generate debugging checkpoints automatically. Metric functions can be imported as Python code, and performance criteria can be added or modified at any time.
  • It has an edit mode: For large, carefully structured prompts with complex business logic, the edit mode makes surgical, targeted modifications instead of rewriting the entire prompt — preserving the parts that already work while fixing exactly what’s broken.

Promptimus supports a wide range of textual and multimodal LLM tasks, including classification, extraction, generation, summarization, code generation, and tool use. In the following sections, we’ll present our methodology, the system architecture, and experimental results on multiple enterprise tasks.

By focusing on specific failure points and suggesting targeted solutions, Promptimus — a new automated prompt-engineering framework — improves prompt performance without compromising existing functionality.

Why good prompts are hard to improve

Attempts to automate prompt optimization are as old as prompt engineering itself, but approaches that work well when generating prompts from scratch struggle to improve well-engineered prompts. Random exploration strategies using generic directions like "be more creative" or "add examples" are ineffective, because the remaining improvements lie in very specific strategic directions. Sparse feedback in the form of scalar scores provides no guidance on why instances fail or how to improve.

On top of growing complexity from business domain demands, rapid model evolution further compounds the challenge of prompt optimization. As providers like Anthropic, OpenAI, Google, Meta, and Alibaba release new models, enterprises face recurring prompt migration challenges. Prompts optimized for one model often underperform on another due to different instruction-following characteristics. Manual reoptimization is costly and time consuming, and regression risks delay adoption of better models.

Methodology and system design

Promptimus addresses these challenges with a methodology built around a four-step iteration loop, with the following inputs:

  • the LLM you aim to use for inference
  • the initial prompt template
  • a small JSONL dataset (typically 20–50 samples) whose fields supply the variables in the prompt template, split into a development set (for prompt tuning) and a held-out test set (for validation); samples are not required to include ground truth
  • a user-defined performance-evaluation metric function (you can bring your own Python code; a minimal example follows)
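To make these inputs concrete, here is a minimal sketch of one dataset record and a bring-your-own metric function. The field names (question, expected) and the function signature are illustrative assumptions, not a documented Promptimus interface.

```python
import json

# One JSONL record per line; its keys fill the {question} placeholder in the
# prompt template. The "expected" ground truth field is optional.
sample_record = {"question": "What is 17 * 24?", "expected": "408"}

def metric(model_output: str, sample: dict) -> float:
    """User-defined metric: parse the model's JSON output and compare the
    answer to the reference. Returns a score in [0, 1]."""
    try:
        answer = json.loads(model_output)["answer"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return 0.0  # malformed output scores zero
    return float(str(answer).strip() == str(sample["expected"]).strip())
```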
Promptimus system design flow chart.

The four-step iteration loop

Step 1 — evaluation: During initialization, the original prompt is executed on the target LLM using the development set (dev set) to establish baseline evaluation scores. Additionally, the metric-analyzer agent performs analysis of the user-defined metric function, generating checkpoint functions that decompose the evaluation into intermediate validation steps. These checkpoints enable fine-grained failure diagnosis throughout the optimization process. For example, when the checkpoints reveal that 98% of outputs have the correct JSON format, and 95% have valid schemas, but only 88% have valid values, the cause of underperformance is localized to value validation.
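As an illustration, checkpoints derived from a metric like the one sketched above might decompose it into format, schema, and value validation. The decomposition below is our hypothetical reconstruction of what the metric analyzer emits; per-checkpoint pass rates (98%, 95%, 88% in the example) then localize the failure to one stage.

```python
import json

def checkpoint_json_format(output: str, sample: dict) -> bool:
    """Checkpoint 1: does the output parse as JSON at all?"""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

def checkpoint_schema(output: str, sample: dict) -> bool:
    """Checkpoint 2: does the parsed object expose the required key?"""
    if not checkpoint_json_format(output, sample):
        return False
    parsed = json.loads(output)
    return isinstance(parsed, dict) and "answer" in parsed

def checkpoint_value(output: str, sample: dict) -> bool:
    """Checkpoint 3: is the extracted answer actually correct?"""
    if not checkpoint_schema(output, sample):
        return False
    return str(json.loads(output)["answer"]).strip() == str(sample["expected"]).strip()
```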

After the initial evaluation, Promptimus branches into either standard mode, where it conducts full prompt rewrites, or edit mode, where it modifies prompts with structured find-and-replace edits.

Step 2 — feedback generation (standard mode): The LLM-driven feedback generator uses the metric checkpoints precomputed by the metric analyzer to diagnose failure patterns in the current prompt's results. It identifies the bottleneck checkpoint (the one with the lowest pass rate) and collects representative instances — including both failing and passing examples, to provide contrast — then analyzes root causes and common failure modes. Finally, it provides actionable suggestions for fixing the prompt (such as "model outputs descriptive text instead of enum codes; suggest adding explicit constraint").
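A minimal sketch of that bottleneck selection, assuming each checkpoint function takes an (output, sample) pair as above; the LLM call that turns these artifacts into prose feedback is omitted:

```python
def find_bottleneck(results, checkpoints, k=5):
    """results: list of (model_output, sample) pairs from the dev set.
    checkpoints: dict mapping checkpoint name -> boolean function.
    Returns the weakest checkpoint plus contrasting examples for feedback."""
    pass_rates = {
        name: sum(fn(out, s) for out, s in results) / len(results)
        for name, fn in checkpoints.items()
    }
    name = min(pass_rates, key=pass_rates.get)  # lowest pass rate = bottleneck
    fn = checkpoints[name]
    failing = [r for r in results if not fn(*r)][:k]
    passing = [r for r in results if fn(*r)][:k]
    return name, pass_rates[name], failing, passing
```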

Step 2 — analysis, strategy, and edit generation (edit mode): After performing the same failure analysis as in standard mode, the feedback generator proposes targeted find-and-replace edits, pinning each change to the exact location responsible for a specific failure.
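What such an edit might look like as a structured record; this schema is our illustration, not a documented format:

```python
# A hypothetical edit record tying a fix to the failure that motivated it.
edit = {
    "rationale": "Outputs use descriptive text instead of enum codes",
    "find": "Return the defect category.",
    "replace": ("Return the defect category as exactly one of the enum codes "
                "CRACK, LEAK, or CORROSION, never free text."),
    "near_line": 42,  # location hint used when exact matching fails
}
```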

Step 3 — strategy and full rewrite (standard mode): Based on the feedback from the previous step, along with the metrics and data samples, the metaoptimizer analyzes task characteristics and generates task-specific exploration strategies, while maintaining all domain-specific requirements encoded in the original prompt. Then, for each strategy, the instruction optimizer proposes an improved prompt candidate that addresses the identified weaknesses and specific error patterns. This one-to-one coupling between strategies and candidates ensures diverse exploration of the optimization landscape.

Step 3 — programmatic edit application (edit mode): For each edit proposed in step 2, Promptimus deterministically locates the edit's target text in the prompt using three match levels: exact match, whitespace-normalized fuzzy match, and similarity match near the referenced line. This process has a 97.3% success rate with zero LLM calls.
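A sketch of that three-level matching cascade, assuming edit records with the find/replace/near_line fields illustrated earlier (the 97.3% success rate is Promptimus's measured figure, not a property of this sketch):

```python
import difflib
import re

def apply_edit(prompt: str, edit: dict) -> str | None:
    """Apply one find-and-replace edit; return None if no level matches."""
    find, replace = edit["find"], edit["replace"]

    # Level 1: exact substring match.
    if find in prompt:
        return prompt.replace(find, replace, 1)

    # Level 2: whitespace-normalized fuzzy match.
    pattern = r"\s+".join(map(re.escape, find.split()))
    m = re.search(pattern, prompt)
    if m:
        return prompt[:m.start()] + replace + prompt[m.end():]

    # Level 3: similarity match among lines near the referenced location.
    lines = prompt.splitlines()
    center = edit.get("near_line", 0)
    window = lines[max(0, center - 5): center + 5]
    best = difflib.get_close_matches(find, window, n=1, cutoff=0.8)
    if best:
        idx = lines.index(best[0])
        lines[idx] = replace
        return "\n".join(lines)
    return None  # surfaced as a failed edit; no LLM fallback
```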

Step 4 — candidate evaluation (both modes): Each candidate is executed on the dev set, and the best candidate is selected by the user-defined metric function. The best-performing candidate becomes the starting point for the next iteration. This exploration-focused process runs for a user-specified number of iterations, with each iteration building on what was learned and achieved in the previous one.
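Putting the four steps together, the outer loop is a hill climb over prompt candidates. The helpers passed in (run_llm, analyze_failures, generate_strategies, propose_candidate) stand in for the LLM-driven agents described above and are assumptions of this sketch:

```python
def optimize(prompt, dev_set, metric, run_llm,
             analyze_failures, generate_strategies, propose_candidate,
             n_iters=8):
    """Four-step loop: evaluate, diagnose, propose candidates, keep the best."""
    def score(p):
        outputs = [(run_llm(p, sample), sample) for sample in dev_set]
        return sum(metric(out, s) for out, s in outputs) / len(outputs)

    best_prompt, best_score = prompt, score(prompt)        # step 1: baseline
    for _ in range(n_iters):
        feedback = analyze_failures(best_prompt, dev_set)  # step 2: diagnose
        candidates = [propose_candidate(best_prompt, strategy)  # step 3
                      for strategy in generate_strategies(feedback)]
        for candidate in candidates:                       # step 4: evaluate
            candidate_score = score(candidate)
            if candidate_score > best_score:
                best_prompt, best_score = candidate, candidate_score
    return best_prompt
```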

We recommend standard mode for short prompts that need significant expansion — for example, a two-line math prompt that needs to grow into detailed reasoning protocols. Edit mode is a better choice for longer and already well-crafted prompts containing structured content like API schemas, compliance rules, or domain taxonomies, where full rewrites risk silently dropping or reorganizing carefully crafted sections. For a prompt with 50,000–100,000 tokens, a typical iteration produces three to five edits totaling 500–1,000 tokens, versus regeneration of the entire prompt.

More generally, Promptimus adds content only when the optimization loop surfaces unaddressed failure modes, so prompt length plateaus within the first few iterations. This means that the relative serving-time impact is small for already long production prompts and larger for short starter templates. If the optimized prompt is served as a cached system prompt, the additional cost is one call during the cache's time to live, which becomes negligible at scale.

Empirical experiments and analysis

We evaluated Promptimus against six leading automatic prompt optimization methods across 20 public benchmarks spanning reasoning, math, question answering, text-to-SQL, coding, function calling, instruction following, and multimodal tasks. All methods used the same optimizer model and evaluation budget, with Claude Sonnet 4.6 as the target model; results are averaged over five random seeds. Each benchmark used 20 dev samples for optimization and 100 held-out test examples for evaluation.

As reported in the table below, Promptimus achieves the best result on 16 of 20 benchmarks and ties on one, outperforming all six baselines on average (0.792 vs. 0.765 for the best-of-six baseline). The largest gains appear on tasks where the metric has a decomposable structure. Notably, Promptimus with edit mode outperforms all baselines on all four multimodal benchmarks, suggesting that vision-language prompts benefit from preserving existing visual-analysis structure rather than rewriting it.

| Benchmark | Metric | No optimization | Best of six baselines | Promptimus | Mode |
| --- | --- | --- | --- | --- | --- |
| BBH-CausalJudge | Acc [0,1] | 0.538 | 0.726 (GEPA) | 0.718 | Standard |
| BBH-DisambigQA | Acc [0,1] | 0.601 | 0.868 (GPO) | 0.908 | Standard |
| BBH-GeoShapes | Acc [0,1] | 0.747 | 0.770 (OPRO) | 0.936 | Standard |
| BBH-RuinNames | Acc [0,1] | 0.918 | 0.926 (GEPA) | 0.928 | Standard |
| BBH-Snarks | Acc [0,1] | 0.324 | 0.920 (OPRO) | 0.908 | Edit |
| GSM8K | Acc [0,1] | 0.658 | 0.964 (MIPROv2) | 0.958 | Standard |
| DAPO-AIME | Acc [0,1] | 0.703 | 0.730 (ProTeGi) | 0.790 | Standard |
| HotPotQA | F1 [0,1] | 0.160 | 0.832 (MIPROv2) | 0.839 | Standard |
| Spider | ExAcc [0,1] | 0.680 | 0.846 (GEPA) | 0.850 | Edit |
| BIRD | ExAcc [0,1] | 0.626 | 0.684 (ProTeGi) | 0.684 | Standard |
| BigCodeBench-hard | Pass@1 [0,1] | 0.339 | 0.336 (ProTeGi) | 0.345 | Standard |
| Codeforces | Pass@1 [0,1] | 0.589 | 0.808 (TextGrad) | 0.818 | Edit |
| BFCL | AST [0,1] | 0.882 | 0.968 (MIPROv2) | 0.980 | Standard |
| NesT-FuL | PMacc [0,1] | 0.375 | 0.429 (TextGrad) | 0.469 | Standard |
| IFBench | Acc [0,1] | 0.498 | 0.509 (GEPA) | 0.530 | Standard |
| IFEval | Strict [0,1] | 0.876 | 0.886 (GPO) | 0.892 | Standard |
| MathVista | Acc [0,1] | 0.433 | 0.606 (GPO) | 0.644 | Edit |
| ChartQA | Relaxed Acc [0,1] | 0.279 | 0.828 (ProTeGi) | 0.834 | Edit |
| AI2D | Acc [0,1] | 0.834 | 0.824 (MIPROv2) | 0.868 | Edit |
| DeFactify | Acc [0,1] | 0.835 | 0.922 (MIPROv2) | 0.938 | Edit |
| Average | | 0.595 | 0.765 | 0.792 | |

The figure below shows convergence through iterations on two representative benchmarks. Promptimus edit mode reaches 90% of its final development score in a median of about 300 metric calls, faster than all baselines. Both modes typically plateau within eight iterations, with the bulk of improvement concentrated in the first three to five iterations.

Importantly, dev set gains transfer to the held-out test set. Sometimes baselines match or even exceed Promptimus on dev but fall behind on test, indicating overfitting. We attribute this to edit mode's surgical modifications, which preserve generalizable prompt structure, and metric probing, which produces failure signals that transfer across examples, as opposed to memorization of dev-set patterns.

Convergence on two representative benchmarks (Claude Sonnet 4.6, five seeds). Lines show mean best dev-set score (left y-axis) vs. cumulative metric calls, with ±1 standard error of the mean (SEM) as shading; ★ markers show mean held-out test score (right y-axis) at the average step at which each method converged. Promptimus (gold) converges faster, reaches higher dev scores, and achieves the best test performance.

We also evaluated Promptimus across multiple LLMs using a public benchmark and Amazon enterprise use cases, spanning classification, text-to-SQL, math reasoning, coding, multimodal understanding, and complex API generation on seven target models. Promptimus improved baseline prompts on all nine tasks, with gains ranging from 3.18% to 90.27%. Dev sets ranged from 30 to 210 examples, with the majority of tasks using fewer than 100, demonstrating the system's sample efficiency. The results also highlight model-agnostic generalizability: the same optimization framework produced meaningful gains across both proprietary and open-source target models without task-specific engineering.

| Task | Target LLM | Performance metric | Dev set size | No optimization | Optimized |
| --- | --- | --- | --- | --- | --- |
| Complex API call generation | GPT-OSS-120B | API Acc (user-defined) [0,1] | 43 | 0.45 | 0.86 |
| Classification_A | Nova Pro | F1 score and FPR score [0,1] | 210 | 0.64 | 0.78 |
| Multimodal classification_B | Haiku-4.5 | Accuracy [0,1] | 160 | 0.51 | 0.76 |
| Classification_C | Nova Lite | Accuracy [0,1] | 85 | 0.56 | 0.58 |
| Text2sql_A | Nova-Micro | Execution accuracy [0,1] | 50 | 0.72 | 0.83 |
| Math reasoning_A | Qwen3-235B (non-reasoning) | Accuracy (user-defined) [0,1] | 30 | 0.47 | 0.50 |
| Math reasoning_B | Claude-4.5-Opus (non-reasoning) | Accuracy (user-defined) [0,1] | 30 | 0.60 | 0.73 |
| Coding_A | GPT-OSS-120B | Pass@1 [0,1] | 100 | 0.26 | 0.33 |
| Coding_B | GPT-OSS-120B | Pass@1 [0,1] | 31 | 0.56 | 0.64 |

The following examples show how Promptimus improved already well-crafted prompts to further drive application performance across a variety of use cases.

Example 1: CodeForces (coding benchmark designed to evaluate LLM reasoning)

In this use case, an LLM generates a Python function from a user-provided problem description. We used 50 dev samples (drawn from the original dev set) and 148 test samples with a user-defined scoring approach. Promptimus (edit mode) converged in five iterations.

Original vs. optimized prompt (deletions prefixed with -, additions with +)

-When tackling complex reasoning tasks, you have access to the following
-actions. Use them as needed to progress through your thought process.
-[ASSESS]
-[ADVANCE]
-[VERIFY]
-[SIMPLIFY]
-[SYNTHESIZE]
-[PIVOT]
-[OUTPUT]
-You should strictly follow the format below:
-[ACTION NAME]
-# Your action step 1
-# Your action step 2
-...
-Next action: [NEXT ACTION NAME]
+You are an expert competitive programmer. Solve the given programming
+problem in Python using the strict 2-phase reasoning structure defined below.
+ ## ABSOLUTE RULE – ONE [OUTPUT] BLOCK ONLY – ZERO EXCEPTIONS
+ The first [OUTPUT] block encountered is the ONLY one evaluated. A second [OUTPUT] block causes
+ immediate evaluation failure and a score of 0.
+ ## CRITICAL CONSTRAINTS
+ Standard Library Only – Use ONLY Python standard library modules. No exceptions.
+ Forbidden: sortedcontainers, numpy, scipy, pandas. Allowed: bisect, heapq, collections, math,
+ itertools, functools, sys.
+ If you need a sorted structure: implement using bisect + a plain list.
+ Sorting Pitfall Warning:
+ Never use sort(reverse=True) when the secondary sort direction differs from the primary.
+ Descending by key A, ascending by key B: items.sort(key=lambda x: (-x[0], x[1]))
 + I/O Consistency Rule:
+ Use exactly ONE I/O method throughout – no mixing.
+ Strategy A: input = sys.stdin.readline at top, then use input() everywhere.
+ Strategy B: use sys.stdin.readline() directly everywhere.
+ Variable Initialization Rule:
+ Declare all variables that are conditionally assigned BEFORE their conditional block.
+ ## STRICT 2-PHASE STRUCTURE
+ ### PHASE 1 – [ASSESS] (ONE block only)
+ 5 mandatory gates (G1–G5). Each gate requires a one-line YES/NO + justification.
 + G1 – Brute force feasible? Is O(n^2) within time constraints?
+ G2 – All variables initialized before conditional use?
+ G3 – I/O strategy chosen and consistent? Declare exactly one strategy.
+ G4 – Demo output reproducible by hand? Perform explicit dry run on demo input.
+ G5 – Any mutable structure modified during iteration? Confirm index recomputation.
+ End with: Chosen approach: [algorithm name], O([complexity]) – Tier [1/2/3]
+ Tier 1 = Brute-force correct, Tier 2 = Optimized correct, Tier 3 = Optimal.
+ Fallback Rule: If you cannot confidently implement Tier 2+, commit to Tier 1. A slow, correct
+ solution scores higher than a fast, broken one.
+ ### PHASE 2 – [OUTPUT] (ONE block only, immediately after ASSESS)
+ First line inside [OUTPUT] must declare I/O strategy as a comment.
+ Produce the complete Python solution. No other action types permitted.
+ ## CRITICAL OUTPUT RULES
+ 1. Exactly ONE [OUTPUT] block. Fix mistakes inline – never open a second.
+ 2. Inside [OUTPUT], the ONLY content is the fenced Python code block.
+ 3. Reasoning word budget: entire [ASSESS] block must not exceed 250 words.
+ 4. No trailing empty lines in output.
+ 5. Never end your response with only reasoning – even brute-force is acceptable over no solution.
+ 6. Never output -1 or “no solution” if the problem guarantees a solution always exists.
+ [. . . mandatory code scaffold template with I/O strategy declaration, imports, solve() structure, sorting/mutation reminders, output
+ formatting rules . . . ]
Title: {problem_title}
Time Limit: {time_limit}
Memory Limit: {memory_limit}
Problem Description: {problem_description}
Output Specification: {output_specification}
Demo Input: {demo_input}
Demo Output: {demo_output}
Note: {demo_note}
-Write Python code to solve the problem. Present the code in ```python ... ``` at the end.
+Solve the problem using the 2-phase structure: [ASSESS] block (5 mandatory gates G1–G5, ≤250 words),
+then [OUTPUT] block (fenced Python solution)

Qualitative example from the CodeForces test set. Predicted code from the original prompt fails due to the use of array('H') (typed C arrays), which incurs significant iteration overhead, causing it to exceed the time limit at large iteration counts. The code generated from the optimized prompt passed all test cases.

Example 2: Multimodal AI agent

This AI agent is used at Amazon to detect construction defects. The original and optimized prompts are shown below. We used the vision-language model qwen3-vl-235b-a22b on Amazon Bedrock to examine images taken by inspectors and identify construction defect categories and risk levels. The optimization converged in three iterations with 16 dev samples. The recommendations generated by the metric analyzer and instruction optimizer in Promptimus (including providing a role, a task objective, defect categories with examples, a category disambiguation section, analysis instructions with a decision tree, output format requirements, and critical output requirements) improved image classification accuracy on the dev set from 0.438 to 0.812. When we applied the optimized prompt to the test set (17 samples), accuracy improved from 0.471 to 0.529.

Qualitative example from the multimodal-AI-agent dataset.

Example 3: Defactify (multimodal fact verification)

This is a comprehensive framework for evaluating an LLM's ability to perform multimodal fact verification, detect misinformation, and identify AI-generated content. The Promptimus metric analyzer found that the model defaults to "Real" for photorealistic AI-generated images, so the optimizer introduced an adversarial dual-hypothesis framework with asymmetric weighting that biases the model toward "AI-generated." For example, with the original prompt, the model dismisses a clock with garbled numerals as an "artistic design choice" and is fooled by photorealistic textures. After optimization, by contrast, the adversarial dual-hypothesis protocol forces systematic signal enumeration, catching the garbled clock numerals that the baseline dismissed.

Qualitative example from the Defactify dataset.

Conclusion and future work

Compared to other metric-driven prompt optimization approaches, Promptimus excels at exploitation, making targeted, failure-focused refinements rather than exploring random directions. It is fully generalizable, adapting to user-defined metric functions and task domains without manual engineering. The dense feedback loop automatically analyzes metric-function code, identifies debugging checkpoints, and generates adaptive, task-aware exploration strategies that target the specific failure modes of each prompt-and-task combination.

In particular, our approach is sample efficient, requiring only a small number of dev examples (typically 20–50) to drive significant improvements, which makes it well suited to enterprise scenarios where labeled data is scarce or expensive to obtain. Furthermore, its model-agnostic design enables it to rapidly adapt prompts to target models for seamless enterprise-level model migration. We are making this innovation available through Amazon Bedrock to enable model migration for enterprise generative-AI applications with zero manual engineering and minimal labeled data.
