How Amazon uses agentic AI for vulnerability detection at global scale

Amazon’s RuleForge system uses agentic AI to generate production-ready detection rules 336% faster than traditional methods.


In 2025, the National Vulnerability Database published more than 48,000 new Common Vulnerabilities and Exposures (CVE) records, reflecting the impact of automated and AI-powered tools on vulnerability discovery. For security teams, however, knowing about new vulnerabilities isn’t enough; they must translate each disclosure into robust detection logic fast enough to protect large, complex systems.

At AWS, we built RuleForge, an agentic-AI system that generates detection rules directly from examples of vulnerability-exploiting code, achieving a 336% productivity advantage over manual rule creation while maintaining the precision required for production security systems and enhanced customer security.

RuleForge architecture showing CVE repository, rule generation, validation, and feedback integration components.

Closing the gap between disclosure and defense

At Amazon, detection rules are written in JSON and applied to data such as requests to MadPot, a global “honeypot” system that uses digital decoys to capture the behavior of malicious hackers, and likely exploit attempts flagged by our internal detection system, Sonaris. We expect the number of high-severity vulnerabilities published to the NVD to continue to grow, which means that AI-powered automation is essential for security at scale.
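
The post doesn’t publish RuleForge’s rule schema, but a toy example helps make the idea concrete. The JSON-style rule and matcher below are purely illustrative; every field name (`rule_id`, `match`, `severity`) is an assumption, not RuleForge’s actual format:

```python
import re

# Hypothetical JSON detection rule in the spirit described above.
# All field names are invented for illustration.
RULE = {
    "rule_id": "CVE-2024-XXXX-path-traversal",
    "severity": "high",
    "match": {
        "field": "request.uri",
        "regex": r"\.\./\.\./",  # repeated parent-directory traversal
    },
}

def rule_matches(rule: dict, request: dict) -> bool:
    """Return True if the rule's regex matches the addressed request field."""
    value = request
    for key in rule["match"]["field"].split("."):
        if not isinstance(value, dict):
            return False
        value = value.get(key)
    return isinstance(value, str) and re.search(rule["match"]["regex"], value) is not None
```

A rule like this would be evaluated against every logged request, flagging traffic such as `{"request": {"uri": "/img/../../etc/passwd"}}` while ignoring benign URIs.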

By automating rule generation, we’re closing that gap while expanding our coverage. Our teams can now turn high-severity CVEs into validated detection rules at a pace and scale that would be impossible with traditional methods, providing more comprehensive protection for customers.

The manual detection-rule workflow

Before RuleForge, creating a detection rule for a new CVE was a multistep, analyst-driven process:

  1. Download and analyze. A security analyst located publicly available proof-of-concept exploit code — code that demonstrates how to trigger a vulnerability — and studied it to understand the attack mechanism, inputs, and expected behavior.
  2. Write detection logic. The analyst authored a rule to catch malicious traffic targeting the vulnerability, then wrote queries to measure the rule's accuracy against traffic logs.
  3. Validate and iterate. The analyst ran those queries, reviewed the results, tuned the rule to reduce false positives, and repeated until the rule performed well enough for production.
  4. Peer review and deploy. Finally, the analyst submitted the rule for code review by another security engineer before deployment.
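
The “write queries to measure the rule’s accuracy” step in this workflow amounts to scoring a candidate rule against labeled traffic. A minimal sketch, assuming a hypothetical log format of (request, label) pairs:

```python
# Minimal sketch of the validate-and-iterate step: score a candidate
# rule against labeled traffic logs. The log format is invented for
# illustration.
def score_rule(matches_rule, logs):
    """Count confusion-matrix cells for a rule over labeled logs.

    matches_rule: callable(request) -> bool
    logs: iterable of (request, is_malicious) pairs
    """
    tp = fp = fn = tn = 0
    for request, is_malicious in logs:
        flagged = matches_rule(request)
        if flagged and is_malicious:
            tp += 1
        elif flagged:
            fp += 1          # false positive: benign traffic flagged
        elif is_malicious:
            fn += 1          # false negative: exploit attempt missed
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn,
            "precision": precision, "recall": recall}
```

An analyst would tune the rule until precision and recall on real traffic were acceptable for production, exactly the loop RuleForge later automates.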

This workflow produced high-quality rules, but the time investment meant the team had to carefully prioritize which vulnerabilities to cover first.

Reframing rule creation as an agentic-AI pipeline

RuleForge reimagines this workflow as an agentic-AI system — a set of specialized AI agents that collaborate to generate, evaluate, and refine detection rules, with humans remaining in the loop for final approval. Rather than attempting to solve the end-to-end problem with a single model, RuleForge decomposes the task into stages that mirror how human experts work:

  1. Automated ingestion and prioritization. RuleForge downloads publicly available exploit proof-of-concept code demonstrating how to target a specific vulnerability. It scores each exploit using content analysis and threat intelligence sources. This ensures that rule generation focuses on the threats that matter most.
  2. Parallel rule generation. For each prioritized CVE, a generation agent running on AWS Fargate with Amazon Bedrock proposes multiple candidate detection rules in parallel. Each candidate can be refined across several iterations based on feedback from later stages, enabling the system to explore different detection strategies before selecting the most promising ones. Instead of relying on one expert working rule by rule, RuleForge treats detection engineering as a pipeline where AI proposes options and humans decide what ships.
  3. AI-powered evaluation. A separate evaluation agent reviews each candidate. This is one of RuleForge's key innovations: rather than having the generation model judge its own work, RuleForge uses a dedicated "judge" model to score each rule on two dimensions that human experts use to assess detection rules:
    1. Sensitivity: What is the probability that this rule will fail to flag malicious requests described in the CVE?
    2. Specificity: What is the probability that this rule targets a feature that correlates with the vulnerability rather than the vulnerability itself?
  4. Multistage validation. Rules that pass the judge move through a pipeline of increasingly rigorous tests. Synthetic testing generates both malicious and benign test cases to verify basic detection accuracy. Rules are then validated against traffic logs, such as those from MadPot, to confirm they perform as expected. Rules that fail at any stage get sent back to the generation agent with specific feedback explaining why, creating a closed loop of improvement.
  5. Human review and deployment. The best-performing rule enters code review, just as before. A security engineer reviews it, and any feedback goes back to the generation agent for revision. Human judgment remains the final gate before production deployment.
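
The five stages above can be sketched as a single control loop. The callables below are stand-ins for the real agents (which run on AWS Fargate with Amazon Bedrock), and the 0.5 thresholds, candidate count, and iteration cap are invented for illustration:

```python
# Schematic of the generate -> judge -> validate -> refine loop.
# Thresholds and counts are illustrative assumptions.
NUM_CANDIDATES = 5
MAX_ITERATIONS = 5

def run_pipeline(generate, judge, validate, cve):
    """generate(cve, feedback) -> rule; judge(rule) -> (p_miss, p_correlate);
    validate(rule) -> (passed, feedback). Returns the best surviving rule."""
    survivors = []
    for _ in range(NUM_CANDIDATES):          # parallel in the real system
        rule = generate(cve, None)
        for _ in range(MAX_ITERATIONS):
            # Judge scores are probabilities of *failure* modes, so lower is better.
            p_miss, p_correlate = judge(rule)
            if p_miss > 0.5 or p_correlate > 0.5:
                rule = generate(cve, "judge rejected: failure probabilities too high")
                continue
            passed, feedback = validate(rule)
            if passed:
                survivors.append((p_miss + p_correlate, rule))
                break
            rule = generate(cve, feedback)   # closed loop: failures carry feedback
    if not survivors:
        return None                          # nothing reached human review
    return min(survivors, key=lambda s: s[0])[1]
```

The returned candidate is what a human security engineer would review before deployment; the real system adds synthetic test generation and traffic-log replay inside `validate`.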
A depiction of RuleForge's five-by-five generation strategy, showing five parallel rule candidates, their confidence scores, and their iterative refinement. The system generates multiple candidates simultaneously and selects the best performer based on validation results.

Why a separate judge model matters

When we asked the rule-generation model to report its confidence in its own candidate rules, it rated almost everything it produced highly. This aligns with research showing that LLMs are poorly calibrated on security topics.

The solution was separating generation from evaluation. Using a dedicated judge model reduced false positives by 67% while maintaining the same number of true positive detections.

Two main design choices made the judge effective:

  • Negative phrasing improves accuracy. Asking "what is the probability that the rule fails to flag malicious requests?" produces better calibration than asking "what is the probability that the rule correctly flags all malicious requests?" Given that LLMs tend toward affirmation, framing the evaluation as a search for problems yields more honest assessments.
  • Domain-specific prompts outperform generic ones. Simply asking the model to rate its overall confidence in a rule produced poor calibration. The questions that worked encoded what security engineers actually look for: whether the rule targets the vulnerability mechanism itself versus a correlated surface feature and whether the rule covers the full range of exploit variations.
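
The exact prompts RuleForge uses are not published, so the templates below are assumptions, but they illustrate the contrast between affirmative phrasing and the negatively phrased, domain-specific questions described above:

```python
# Illustrative judge-prompt templates. The wording is hypothetical;
# only the phrasing strategy comes from the text above.

# Affirmative phrasing (poorly calibrated, per the discussion above):
AFFIRMATIVE_PROMPT = (
    "Given the detection rule below, what is the probability that it "
    "correctly flags all malicious requests for {cve}?\n\n{rule}"
)

# Negative phrasing: frame the evaluation as a search for failure modes.
SENSITIVITY_PROMPT = (
    "Given the detection rule below, what is the probability that it "
    "FAILS to flag malicious requests described in {cve}?\n\n{rule}"
)

SPECIFICITY_PROMPT = (
    "What is the probability that the rule below targets a feature that "
    "merely correlates with {cve}, rather than the vulnerability's "
    "exploitation mechanism itself?\n\n{rule}"
)

def build_judge_prompts(cve: str, rule: str) -> dict:
    """Fill the two negatively phrased, domain-specific templates."""
    return {
        "sensitivity": SENSITIVITY_PROMPT.format(cve=cve, rule=rule),
        "specificity": SPECIFICITY_PROMPT.format(cve=cve, rule=rule),
    }
```

Each prompt maps directly onto one of the two scoring dimensions the judge reports, so a high probability on either question sends the rule back for regeneration.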

The system also generates reasoning chains explaining its scores. We evaluated those reasoning chains against human assessments and found that the AI judge's reasoning matched expert human reasoning for six out of nine rules. For example, when a human evaluator noted, "That SQL injection regex is too loose," the judge had independently determined that "the regex pattern will catch any query parameter with a single quote, which is broader than just the specific vulnerability."

Results and what’s next

We deployed the confidence scoring system in August 2025, accelerating how quickly our analysts can deploy new detection rules. Over the final four months of the year, RuleForge enabled our team to produce and validate rules 336% faster than it could manually, while maintaining the high accuracy required for production security systems. By shifting analyst focus from authoring to review, we’ve multiplied overall throughput without compromising quality. We’re closing the gap between vulnerability disclosure and defense more effectively than ever before and ensuring that the managed protections that help safeguard customer workloads on AWS are updated faster and cover more high-severity CVEs.

RuleForge demonstrates that agentic AI can augment human security expertise at production scale while meeting precision requirements. The key innovations are architectural: separating rule generation from rule evaluation, using multiple specialized agents rather than a single model, and keeping humans in the loop for final approval. As the rate of vulnerability disclosures continues to accelerate, these design principles will help us keep defenses current.

For a deeper look at the technical details behind RuleForge, including the evaluation methodology and experimental results, see our paper on arXiv.
