AWS and Gray Lab at Johns Hopkins Whiting School of Engineering announce groundbreaking database for AI/ML antibody design

The Antibody Developability Benchmark is powered by one of the most diverse antibody datasets, enabling transparent performance evaluation for AI-guided antibody design.


In 1986, the US Food and Drug Administration issued its first approval of a therapeutic antibody for human use. Despite steady advances in methodology, genetic sequencing, and biomedical science, nearly 40 years later the process of discovering and optimizing therapeutic antibodies remains prohibitively expensive in both money and time. Recent experience with pandemic-scale infectious-disease outbreaks lends even greater urgency to the need to identify and develop these antibodies more quickly and efficiently.

Artificial-intelligence- and machine-learning-guided approaches to antibody design, in the form of biological foundation models (BioFM), represent a significant opportunity to address these challenges. Models built using protein language models (pLMs) and structure-based deep-learning frameworks have significant potential to predict antibody developability properties — the characteristics that determine whether a molecule is manufacturable, stable, and safe as a therapeutic. The development of those tools could drastically shorten discovery timelines while also reducing experimental costs.

That potential, however, has been hindered by the lack of a public dataset that would allow researchers to benchmark those tools, a crucial step in the development of trustworthy in-silico tools for drug discovery. While there are existing public antibody datasets, they are too frequently limited by a focus on a single antibody format or target. Others are composed of naturally occurring or clinically advanced antibodies, a bias that severely limits their utility for training or evaluating predictive models.

“Trust in the predictions made by these models must be grounded in evaluations against experimental data that is sufficiently large and diverse,” explained Luca Giancardo, an applied scientist with Amazon Web Services (AWS) who works on the Amazon Bio Discovery team. “That data must be representative of the real sequence space encountered during antibody engineering and balanced in terms of developability outcomes.”

Jeffrey Gray is a professor in the Chemical and Biomolecular Engineering Department at the Johns Hopkins Whiting School of Engineering, where he leads the Gray Lab, which focuses on the computational prediction and design of protein structures. He is also the original developer of RosettaDock, a tool for the prediction of the structure of protein complexes from their constituent proteins.

Gray noted that while AI has made tremendous progress in the prediction and design of antibody properties, his own lab’s benchmarks have shown that current models do not yet reliably predict critical developability features, such as solubility and specificity, needed for the efficient design of therapeutics. He cited the lack of diverse data collected under standardized conditions as a primary limitation for training models. That, coupled with the absence of a comprehensive, heterogeneous, large-scale database, has been a significant drag on the development of AI tools for antibody design.

Antibody developability benchmark

To that end, AWS, in collaboration with the Gray Lab and Johns Hopkins Engineering, is announcing the launch of the Antibody Developability Benchmark, powered by the largest and most diverse antibody dataset in the public literature. This is the first large-scale benchmark of antibody biophysical and biochemical properties designed to support the development and rigorous evaluation of in-silico antibody property predictors.

  • 50 seed antibodies

  • 4 structural formats

  • 42 antigen targets

The Antibody Developability Benchmark is 20 times as diverse — in terms of antibody formats, targets, and developability profiles — as benchmarks currently available in the scientific literature. While other datasets may contain more individual antibody designs, they typically explore a single target or antibody framework with limited property coverage. The Antibody Developability Benchmark is unique in its combination of scale and heterogeneity, encompassing 50 seed antibodies, four structural formats, and 42 antigens. It also includes both favorable and unfavorable developability outcomes.

Gray lauded the opportunity to work with AWS experts, noting that the collaboration has enabled the creation of a dataset larger and more diverse than any of the publicly available datasets. He called the project an important next step toward fulfilling the promise of AI to improve human health.

Existing public antibody datasets typically focus on a single target or format with limited property coverage (left). The Antibody Developability Benchmark is 20 times more diverse — spanning 50 seed antibodies, 4 structural formats, 42 antigens, and both favorable and unfavorable outcomes (right).

The Antibody Developability Benchmark includes the first heterogeneous antibody-property dataset explicitly designed to capture favorable and unfavorable developability profiles across multiple antigens and mutation strategies. Crucially, all data were validated via wet-lab experiments, providing the ground-truth validation that existing public benchmarks lack.

“This dataset will allow researchers to confidently be able to answer ‘Which model is better suited for our purposes?’,” noted Giancardo, whose Bio Discovery team led the development of the dataset. “Today there are many computational models coming out that are mostly evaluated on either proprietary data or public datasets, which are not representative of antibody heterogeneity. That means deciding what is better or worse is very, very hard — if not impossible.”

The unmatched diversity and deliberate heterogeneity of the Antibody Developability Benchmark will help make those determinations possible.

Michael Chungyoun, a PhD researcher at JHU who worked on the project, observed that the benchmark covers a wide space of antibodies, particularly in terms of their properties. He noted that allowing researchers to check against a very diverse benchmark can save time and labor by helping them compare models and choose the best approach.

The antibody dataset

The dataset consists of 50 clinically and scientifically relevant seed antibodies spanning four structural formats — IgG, VHH, NearGermline-IgG, and scFv — targeting 42 distinct antigens. It measures expression, purity, thermostability, aggregation, polyreactivity, and hydrophobicity — six traits that are essential in the development of viable therapeutic antibodies.

The 50 seed antibodies in the Antibody Developability Benchmark span four structural formats: IgG (35), VHH (7), NearGermline-IgG (5), and scFv (3).
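As a sketch of how one entry in such a dataset might be organized, the record below pairs a variant's sequence and format with measured values for the six traits. The field names and values are illustrative assumptions, not the benchmark's actual schema:

```python
from dataclasses import dataclass

# Hypothetical record layout for one antibody variant.
# Field names are illustrative; the benchmark's real schema may differ.
@dataclass
class AntibodyRecord:
    seed_id: str            # one of the 50 seed antibodies
    fmt: str                # "IgG", "VHH", "NearGermline-IgG", or "scFv"
    antigen: str            # one of the 42 antigen targets
    sequence: str           # amino-acid sequence of the variant
    # The six measured developability traits (wet-lab values):
    expression: float
    purity: float
    thermostability: float
    aggregation: float
    polyreactivity: float
    hydrophobicity: float

rec = AntibodyRecord("seed-001", "VHH", "antigen-X", "QVQLVESGG",
                     1.2, 0.98, 68.5, 0.03, 0.1, 0.4)
assert rec.fmt in {"IgG", "VHH", "NearGermline-IgG", "scFv"}
```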

“The composition is a deliberate design choice,” Giancardo noted. “We strove to find a balance between heterogeneity of antibody classes, therapeutic targets, and mutation types, with the aim of creating benchmarks that would be generalizable across the structural diversity of the modern therapeutic-antibody landscape.”

Researchers at the Gray Lab, assisted by a sponsored research grant from AWS, helped select the seed antibodies for inclusion in the dataset. They were intentional about the seeds they chose, Chungyoun noted, opting in some cases for existing clinical-stage antibodies or FDA-approved antibodies. The team also selected antibodies more akin to those that circulate in the human body but aren't approved therapeutics. Those are called germline antibodies.

Chungyoun explained that germline antibodies are those found in the human body, and they have important biophysical characteristics. While some of those characteristics are shared with therapeutic antibodies, there are also differences between the two. The extent of those differences, and how to bridge that gap, is a vital and unanswered question.

Traditional antibody-based drug discovery begins with antibodies that come from animals or humans. Chungyoun explained that germline antibodies occasionally need to be modified to look more like therapeutics. That process is one researchers are still exploring.

Mutation strategy

The dataset also includes engineered variants of each seed antibody, generated by applying systematic mutation strategies to each seed.

“Initially, the hardest thing was essentially coming up with example sequences that would cover the broad spectrum of properties and the ways of mutating these sequences,” Giancardo explained. “It's challenging because you have to do it a priori — you don't know what will come out until you do it.”

Working with Johns Hopkins Engineering, Giancardo and his team systematically engineered variants employing a variety of approaches, including protein-language-model-guided (pLM-guided) versus non-pLM-guided mutation selection and amino acid substitutions versus insertions/deletions.
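The two edit types can be illustrated with toy string operations on an amino-acid sequence — a substitution changes one residue in place, while insertions and deletions change the sequence length. This is a simplified sketch, not the teams' actual variant-generation pipeline:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def substitute(seq: str, pos: int, rng: random.Random) -> str:
    """Replace the residue at `pos` with a different amino acid."""
    choices = AMINO_ACIDS.replace(seq[pos], "")
    return seq[:pos] + rng.choice(choices) + seq[pos + 1:]

def insert(seq: str, pos: int, rng: random.Random) -> str:
    """Insert a random residue before position `pos`."""
    return seq[:pos] + rng.choice(AMINO_ACIDS) + seq[pos:]

def delete(seq: str, pos: int) -> str:
    """Delete the residue at position `pos`."""
    return seq[:pos] + seq[pos + 1:]

rng = random.Random(0)
seed = "QVQLVESGGGLVQ"
assert len(insert(seed, 3, rng)) == len(seed) + 1   # indels change length
assert len(delete(seed, 3)) == len(seed) - 1
mut = substitute(seed, 3, rng)
assert len(mut) == len(seed) and mut != seed        # substitutions don't
```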

“Protein language models are essentially the equivalent of large language models [LLMs] for the protein world,” Giancardo said. “There are multiple ways of looking at proteins. A common way is expressing them as a string of amino acids, which are essentially letters.” When some of the letters in the amino acid chains are masked, the models can be trained to fill in the gaps — the same "self-supervised" approach used to train LLMs. The models can also be trained to predict what changes inserting a different letter or letters — i.e., mutation — will yield. That approach resulted in a wide variety of mutations — up to 99 engineered variants per seed.
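The masked-prediction idea can be sketched with a deliberately crude stand-in for a pLM: mask a position in the amino-acid string and fill it with the residue most frequent at that position across a small set of related sequences. A real protein language model such as ESM would supply learned probabilities instead of raw frequencies; everything here is a toy assumption:

```python
from collections import Counter

def toy_plm_fill(masked_seq: str, corpus: list[str]) -> str:
    """Fill '?' positions with the residue most common at that position
    in `corpus` -- a crude stand-in for a pLM's learned distribution."""
    filled = []
    for i, ch in enumerate(masked_seq):
        if ch == "?":
            counts = Counter(s[i] for s in corpus if len(s) > i)
            filled.append(counts.most_common(1)[0][0])
        else:
            filled.append(ch)
    return "".join(filled)

corpus = ["QVQLVE", "QVQLQE", "QVKLVE"]
# Position 2 is Q in two of three sequences, so the mask fills with Q:
assert toy_plm_fill("QV?LVE", corpus) == "QVQLVE"
```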

The breadth and depth of those mutations contribute to another distinguishing feature of the Antibody Developability Benchmark: its deliberate heterogeneity. The inclusion of both favorable, or developable, and unfavorable, or poorly developable, examples sets it apart from existing datasets.

“This range is essential for training and evaluating machine learning models, which require balanced label distributions and exposure to the failure modes they are intended to predict and avoid,” Giancardo explained. He also clarified that those failures still fall within a range of viability.

“These are not examples that are obviously wrong but rather bad examples that have a fighting chance," he added. "These all still meet some baseline quality assessment, meaning researchers could reasonably send them to a wet-lab partner to test.”

Zero-shot learning

Gray and his team at Hopkins Engineering also collaborated with their AWS counterparts by selecting and running existing open-source antibody design and prediction models on their own. They then shared their findings with the Bio Discovery team, who compared the results those models generated against the benchmarking dataset without exposing those models to the information in that dataset.

“This is essentially zero-shot inference,” Giancardo said. That siloed approach allowed both sides to have greater confidence in the results the Antibody Developability Benchmark generated. “The fact that we operated separately gave us confidence that we were not introducing errors. There is no data leakage of any sort, even from an external perspective.”
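Zero-shot evaluation of this kind can be sketched as measuring ranking agreement between a model's predictions and the benchmark's wet-lab measurements, with no fitting to the benchmark itself. Spearman rank correlation is one common choice of metric; the numbers below are made up for illustration:

```python
def spearman(xs: list[float], ys: list[float]) -> float:
    """Spearman rank correlation between two equal-length lists (no ties)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

predicted = [0.8, 0.1, 0.5, 0.9, 0.3]   # hypothetical model predictions
measured  = [0.7, 0.2, 0.4, 0.95, 0.1]  # hypothetical wet-lab values
assert abs(spearman(predicted, measured) - 0.9) < 1e-9  # strong agreement
```

Because the model is never trained or tuned on the benchmark, a high correlation here is evidence of genuine generalization rather than memorization.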

The teams compared their data and used those results to further fine-tune the Antibody Developability Benchmark. That iterative process means researchers who use the benchmark can have greater confidence in the viability of their models before the necessary, and costly, step of working with a wet-lab partner. It can also shorten the overall experimental timeline.

“When you are confident enough to do a screen, then you can turn to the wet lab, get new metrics, and further train on those results, which will be much, much, much more meaningful,” Giancardo explained.

The future

Researchers at both AWS and Hopkins Engineering emphasized the importance of sharing model benchmarks based on the Antibody Developability Benchmark Dataset with the larger scientific community. The benchmark results are now available as part of Amazon Bio Discovery; additional benchmarks will be added over time and released in a paper later this year.

The sharp uptick in proposed protein AI models has researchers excited, but the expense and time commitment of wet labs has meant researchers have thus far been unable to compare those models head to head, Chungyoun observed. He noted that the launch of this dataset means those researchers now have an opportunity to learn which model properties improve performance. That can serve to illuminate the connection between what models learn and how those models can be improved to better predict those properties.

The dataset won’t remain static either: more models and properties will be added in the future.

"The database has the potential to surface models and tools that may have previously gone unrecognized — research published in lesser-known venues or work that simply didn't receive the attention it deserved," said Nina Cheng, a senior science manager in the AWS Life Sciences organization. "This database can play a key role in bringing that kind of overlooked work to light."

Acknowledgements

Amazon Bio Discovery Science and product team: Luca Giancardo, Yue Zhao, Melih Yilmaz, Kemal Sonmez, Lan Guo, Gordon Trang, Edward Lee, Chuanyui Teh, Fangda Xu, Nina Cheng, Jiwon Kim.

Research areas

Related content

US, NY, New York
We are seeking an Applied Scientist to lead the development of evaluation frameworks and data collection protocols for robotic capabilities. In this role, you will focus on designing how we measure, stress-test, and improve robot behavior across a wide range of real-world tasks. Your work will play a critical role in shaping how policies are validated and how high-quality datasets are generated to accelerate system performance. You will operate at the intersection of robotics, machine learning, and human-in-the-loop systems, building the infrastructure and methodologies that connect teleoperation, evaluation, and learning. This includes developing evaluation policies, defining task structures, and contributing to operator-facing interfaces that enable scalable and reliable data collection. The ideal candidate is highly experimental, systems-oriented, and comfortable working across software, robotics, and data pipelines, with a strong focus on turning ambiguous capability goals into measurable and actionable evaluation systems. 
Key job responsibilities - Design and implement evaluation frameworks to measure robot capabilities across structured tasks, edge cases, and real-world scenarios - Develop task definitions, success criteria, and benchmarking methodologies that enable consistent and reproducible evaluation of policies - Create and refine data collection protocols that generate high-quality, task-relevant datasets aligned with model development needs - Build and iterate on teleoperation workflows and operator interfaces to support efficient, reliable, and scalable data collection - Analyze evaluation results and collected data to identify performance gaps, failure modes, and opportunities for targeted data collection - Collaborate with engineering teams to integrate evaluation tooling, logging systems, and data pipelines into the broader robotics stack - Stay current with advances in robotics, evaluation methodologies, and human-in-the-loop learning to continuously improve internal approaches - Lead technical projects from conception through production deployment - Mentor junior scientists and engineers
US, WA, Seattle
Come be a part of a rapidly expanding $35 billion-dollar global business. At Amazon Business, a fast-growing startup passionate about building solutions, we set out every day to innovate and disrupt the status quo. We stand at the intersection of tech & retail in the B2B space developing innovative purchasing and procurement solutions to help businesses and organizations thrive. At Amazon Business, we strive to be the most recognized and preferred strategic partner for smart business buying. Bring your insight, imagination and a healthy disregard for the impossible. Join us in building and celebrating the value of Amazon Business to buyers and sellers of all sizes and industries. Unlock your career potential. Amazon Business Data Insights and Analytics team is looking for a Data Scientist to lead the research and thought leadership to drive our data and insights strategy for Amazon Business. This role is central in shaping the definition and execution of the long-term strategy for Amazon Business. You will be responsible for researching, experimenting and analyzing predictive and optimization models, designing and implementing advanced detection systems that analyze customer behavior at registration and throughout their journey. You will work on ambiguous and complex business and research science problems with large opportunities. You'll leverage diverse data signals including customer profiles, purchase patterns, and network associations to identify potential abuse and fraudulent activities. You are an analytical individual who is comfortable working with cross-functional teams and systems, working with state-of-the-art machine learning techniques and AWS services to build robust models that can effectively distinguish between legitimate business activities and suspicious behavior patterns You must be a self-starter and be able to learn on the go. Excellent written and verbal communication skills are required as you will work very closely with diverse teams. 
Key job responsibilities - Interact with business and software teams to understand their business requirements and operational processes - Frame business problems into scalable solutions - Adapt existing and invent new techniques for solutions - Gather data required for analysis and model building - Create and track accuracy and performance metrics - Prototype models by using high-level modeling languages such as R or in software languages such as Python. - Familiarity with transforming prototypes to production is preferred. - Create, enhance, and maintain technical documentation
US, TX, Austin
Amazon Leo is an initiative to launch a constellation of Low Earth Orbit satellites that will provide low-latency, high-speed broadband connectivity to unserved and underserved communities around the world. As a Systems Engineer, this role is primarily responsible for the design, development and integration of communication payload and customer terminal systems. The Role: Be part of the team defining the overall communication system and architecture of Amazon Leo’s broadband wireless network. This is a unique opportunity to innovate and define groundbreaking wireless technology at global scale. The team develops and designs the communication system for Leo and analyzes its overall system level performance such as for overall throughput, latency, system availability, packet loss etc. This role in particular will be responsible for leading the effort in designing and developing advanced technology and solutions for communication system. This role will also be responsible developing advanced physical layer + protocol stacks systems as proof of concept and reference implementation to improve the performance and reliability of the LEO network. In particular this role will be responsible for using concepts from digital signal processing, information theory, wireless communications to develop novel solutions for achieving ultra-high performance LEO network. This role will also be part of a team and develop simulation tools with particular emphasis on modeling the physical layer aspects such as advanced receiver modeling and abstraction, interference cancellation techniques, FEC abstraction models etc. This role will also play a critical role in the integration and verification of various HW and SW sub-systems as a part of system integration and link bring-up and verification. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. 
permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
US, MA, N.reading
Amazon Industrial Robotics Group is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon Industrial Robotics Group, we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of dexterous manipulation system that: - Enables unprecedented generalization across diverse tasks - Enables contact-rich manipulation in different environments - Seamlessly integrates low-level skills and high-level behaviors - Leverage mechanical intelligence, multi-modal sensor feedback and advanced control techniques. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
A day in the life - Work on design and implementation of methods for Visual SLAM, navigation and spatial reasoning - Leverage simulation and real-world data collection to create large datasets for model development - Develop a hierarchical system that combines low-level control with high-level planning - Collaborate effectively with multi-disciplinary teams to co-design hardware and algorithms for dexterous manipulation
US, WA, Bellevue
We are seeking a passionate, talented, and inventive individual to join the Applied AI team and help build industry-leading technologies that customers will love. This team offers a unique opportunity to make a significant impact on the customer experience and contribute to the design, architecture, and implementation of a cutting-edge product. The mission of the Applied AI team is to enable organizations within Worldwide Amazon.com Stores to accelerate the adoption of AI technologies across various parts of our business. We are looking for a Senior Applied Scientist to join our Applied AI team to work on LLM-based solutions. On our team you will push the boundaries of ML and Generative AI techniques to scale the inputs for hundreds of billions of dollars of annual revenue for our eCommerce business. If you have a passion for AI technologies, a drive to innovate and a desire to make a meaningful impact, we invite you to become a valued member of our team. You will be responsible for developing and maintaining the systems and tools that enable us to accelerate knowledge operations and work in the intersection of Science and Engineering. You will push the boundaries of ML and Generative AI techniques to scale the inputs for hundreds of billions of dollars of annual revenue for our eCommerce business. If you have a passion for AI technologies, a drive to innovate and a desire to make a meaningful impact, we invite you to become a valued member of our team. We are seeking an experienced Scientist who combines superb technical, research, analytical and leadership capabilities with a demonstrated ability to get the right things done quickly and effectively. This person must be comfortable working with a team of top-notch developers and collaborating with our research teams. We’re looking for someone who innovates, and loves solving hard problems. 
You will be expected to have an established background in building highly scalable systems and system design, excellent project management skills, great communication skills, and a motivation to achieve results in a fast-paced environment. You should be somebody who enjoys working on complex problems, is customer-centric, and feels strongly about building good software as well as making that software achieve its operational goals.
IN, KA, Bengaluru
Do you want to lead the development of advanced machine learning systems that protect millions of customers and power a trusted global eCommerce experience? Are you passionate about modeling terabytes of data, solving highly ambiguous fraud and risk challenges, and driving step-change improvements through scientific innovation? If so, the Amazon Buyer Risk Prevention (BRP) Machine Learning team may be the right place for you. We are seeking a Senior Applied Scientist to define and drive the scientific direction of large-scale risk management systems that safeguard millions of transactions every day. In this role, you will lead the design and deployment of advanced machine learning solutions, influence cross-team technical strategy, and leverage emerging technologies—including Generative AI and LLMs—to build next-generation risk prevention platforms. Key job responsibilities Lead the end-to-end scientific strategy for large-scale fraud and risk modeling initiatives Define problem statements, success metrics, and long-term modeling roadmaps in partnership with business and engineering leaders Design, develop, and deploy highly scalable machine learning systems in real-time production environments Drive innovation using advanced ML, deep learning, and GenAI/LLM technologies to automate and transform risk evaluation Influence system architecture and partner with engineering teams to ensure robust, scalable implementations Establish best practices for experimentation, model validation, monitoring, and lifecycle management Mentor and raise the technical bar for junior scientists through reviews, technical guidance, and thought leadership Communicate complex scientific insights clearly to senior leadership and cross-functional stakeholders Identify emerging scientific trends and translate them into impactful production solutions
US, CA, Palo Alto
The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through state-of-the-art generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond! 
Key job responsibilities This role will be pivotal in redesigning how ads contribute to a personalized, relevant, and inspirational shopping experience, with the customer value proposition at the forefront. Key responsibilities include, but are not limited to: - Contribute to the design and development of GenAI, deep learning, multi-objective optimization and/or reinforcement learning empowered solutions to transform ad retrieval, auctions, whole-page relevance, and/or bespoke shopping experiences. - Collaborate cross-functionally with other scientists, engineers, and product managers to bring scalable, production-ready science solutions to life. - Stay abreast of industry trends in GenAI, LLMs, and related disciplines, bringing fresh and innovative concepts, ideas, and prototypes to the organization. - Contribute to the enhancement of team’s scientific and technical rigor by identifying and implementing best-in-class algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling. - Mentor and grow junior scientists and engineers, cultivating a high-performing, collaborative, and intellectually curious team. A day in the life As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to the development in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences. This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. 
By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science. About the team The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond!
US, MA, Boston
The Artificial General Intelligence (AGI) team is seeking a dedicated, skilled, and innovative Applied Scientist with a robust background in machine learning, statistics, quality assurance, auditing methodologies, and automated evaluation systems to ensure the highest standards of data quality and to build industry-leading technology with Large Language Models (LLMs) and multimodal systems.

Key job responsibilities
As part of the AGI team, an Applied Scientist will collaborate closely with the core scientist team developing Amazon Nova models. They will lead the development of comprehensive quality strategies and auditing frameworks that safeguard the integrity of data collection workflows. This includes designing auditing strategies with detailed SOPs, quality metrics, and sampling methodologies that help Nova improve performance on benchmarks. The Applied Scientist will perform expert-level manual audits, conduct meta-audits to evaluate auditor performance, and provide targeted coaching to raise overall quality capabilities. A critical aspect of this role involves developing and maintaining LLM-as-a-Judge systems, including designing judge architectures, creating evaluation rubrics, and building machine learning models for automated quality assessment. The Applied Scientist will also configure data collection workflows, communicate quality feedback to stakeholders, and have a direct impact on enhancing customer experiences through the high-quality training and evaluation data that powers state-of-the-art LLM products and services.

A day in the life
An Applied Scientist on the AGI team will support quality solution design, conduct root-cause analysis of data quality issues, research new auditing methodologies, and find innovative ways to optimize data quality while setting an example for the team on quality-assurance best practices and standards. Beyond theoretical analysis and quality framework development, an Applied Scientist will also work closely with talented engineers, domain experts, and vendor teams to put quality strategies and automated judging systems into practice.