In their paper, "Optimal Auction Design with Deferred Inspection and Reward", Saeed Alaei, Alexandre Belloni, Ali Makhdoumi, and Azarakhsh Malekian develop a mechanism that incentivizes buyers in an auction to bid higher by awarding a bonus to bids that are closer to the true value of the item.

Monitoring and rewarding honest bids to increase revenue in auctions

Amazon Scholar Alexandre Belloni discusses the implications of auction design on digital goods.

Alexandre Belloni has been intrigued by operations research and optimization problems since his days as an electrical engineering undergrad at the Pontifical Catholic University of Rio de Janeiro, back in his home country of Brazil. Further schooling only cemented that. His master’s in mathematical economics at the Institute for Pure and Applied Mathematics, also in Rio de Janeiro, “happened to have a strong optimization track,” he says. “Once I got there, the economics influence started to kick in. And, given my background, I was always looking for the intersection of operations research and economics.”

For his PhD, Belloni worked on optimization and econometrics at the MIT Operations Research Center. His interest in economics continued to influence his academic path and most of his current research is focused on mechanism design problems, which he describes as “a broad class of ways to allocate resources.” “For example, auctions are a classic way that you can allocate an item and it is especially useful in cases where it’s difficult to price the value of the item.”

Belloni says mechanism design is an incredible field to work in. “Not only are there many interesting perspectives to consider — such as information, computation, approximations, robustness, and dynamics — but we also see several industry problems that require coordinating decentralized systems.”

Since 2007, Belloni has also taught at Duke University’s Fuqua School of Business, where he is currently the John D. Forsyth Professor of Decision Sciences. In 2018, he was recruited to become an Amazon Scholar, joining the company in that capacity in January 2019. “I always thought that the best research is the one that is motivated by empirical, real problems. Amazon gives you a great opportunity to see the real problems,” he says.


Since then, he has been studying problems related to mechanism design and machine learning at Fulfillment by Amazon (FBA), the subdivision of Amazon’s Supply Chain Optimization Technologies (SCOT) organization for third-party sellers who use Amazon’s storage and fulfillment capabilities.

One of the challenges Belloni and his FBA colleagues are currently addressing has to do with capacity management. Third-party sellers own and control their own inventories, and Amazon, with limited information, determines how to both balance the demand for space and ensure fulfillment center capacity is used efficiently and is available for products that customers love. “There has been tons of amazing work and we continue to obsess on finding better ways to manage capacity,” Belloni said.

Coordinating and optimizing allocations is also at the core of recent work by Belloni and colleagues. In the paper “Optimal Auction Design with Deferred Inspection and Reward”, the authors develop a mechanism that incentivizes buyers in an auction to bid higher by rewarding those whose bids are closer to the true value of the item with a bonus. This strategy can only be used in certain settings, where it is possible to monitor how the buyer is monetizing the good.

In this interview, Belloni discusses how he and his co-authors — Saeed Alaei, Ali Makhdoumi and Azarakhsh Malekian — came up with this new auction design that is especially suitable for digital goods and how it may impact revenues.

  1. Q. 

    What is the mechanism that you and your colleagues developed to optimize auction design? What are the implications for digital goods?

    A. 

    The key thing about this paper is that, in certain settings, after the winner of an auction is revealed, we can actually learn what the true value of the good is for the agent [buyer]. Indeed, there are many settings where the values are (nearly) observed with some delay. In those cases, if the agent told the truth — that is, the bid is close to the true value — we can give them a bonus back from their initial deposit.
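    As an illustration of this incentive, here is a toy payoff rule in Python. The functional form (the `bonus_rate` and the linear bonus shape) is a hypothetical stand-in, not the paper's derived mechanism, but the sketch shows how a bonus that shrinks with the gap between bid and observed value pushes the agent toward truthful bidding:

```python
def payoff(bid, true_value, bonus_rate=0.75):
    """Toy deferred-inspection payoff (hypothetical form): the winner pays
    their bid up front; after the true value is observed, part of the deposit
    comes back as a bonus that shrinks with the |true_value - bid| gap."""
    bonus = max(0.0, bonus_rate * (bid - abs(true_value - bid)))
    return true_value - bid + bonus

# With bonus_rate > 0.5 in this toy form, bidding the true value beats
# both underbidding and overbidding.
v = 10.0
assert payoff(10.0, v) > payoff(6.0, v)   # underbidding forfeits most of the bonus
assert payoff(10.0, v) > payoff(12.0, v)  # overbidding is penalized too
```

Note that even in this toy version the net payment — the bid minus the bonus — never exceeds the bid itself, so the agent retains control of the maximum they could pay.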


    It turns out that we were able to fully characterize the optimal mechanism for a single agent. By using rewards after the inspection to help us screen the agent, we found that the optimal allocation is not a thresholding strategy but an increasing, continuous function of the reported value. Indeed, it is possible to have different payments (via the rewards) for the same allocation, which contrasts with the case without inspection, where no such mechanism would be incentive compatible.

    The results are quite relevant in settings where it is possible to monitor the value (or performance) of the good for the bidder. Digital goods are certainly one application that motivated our setting. For example, consider a platform that would like to sell some preferred advertisement position for a digital good to be displayed. Because consumption of the digital good occurs within the platform, its value is observed whether or not the bidder wins the specific auction.

    Thus, the paper provides insights on how to monetize this additional monitoring while still allowing agents to fully control the maximum they would pay to acquire the preferred advertisement position. This is attractive because agents are always concerned with liability and, in practice, could be reluctant to accept a contract in which they do not know how much they could end up paying. So, we take this concern into consideration: we monitor them, but we cannot charge more than the amount they bid. The agents are in full control of how much they will spend. Ultimately, we are rewarding a digital good that has high value in order to screen further via monitoring.

  2. Q. 

    How were you able to extrapolate your results from a single buyer to multiple buyers?

    A. 

    A priori, it was unclear how the results would generalize to the multiple-agents case, given the generality of the first result. The first step was to consider the so-called reduced-form representation, where we model the expected allocation and payments of a bidder conditional on their own type (by averaging out over the types of the other bidders). But to ensure the reduced form is implementable as an auction, it is well known that additional Border constraints need to be considered, which can get tricky.

    Using duality theory, we then find a sufficient condition under which the Border constraint in the reduced form of the problem can be dealt with nicely. The sufficient technical condition on the hazard rate of the distribution of the maximum value is not needed in the single-agent case. Indeed, the result for a single agent holds quite generally. Surprisingly, the same structural properties in the single-agent case are still preserved in the multiple-agents case.


    Importantly, we provide an implementation of our optimal auction for multiple agents — Border constraints guarantee that an implementation exists but do not tell us how to construct it. In particular, we show that the implementation of the optimal auction involves allocating to the agent with the maximum bid and then rewarding this agent if they report truthfully. One aspect of this setup with inspection is that we can further distinguish bidders, because we have more freedom to manipulate the amounts of allocation and payments. In typical auctions, without inspection, there is no value in doing that, and agents either get the good or not. In our case, we can essentially give you the good with only a 50% chance if you bid low, for example.

    Indeed, we increase the chance of allocating the good as bids increase, and when we reach a 100% chance we can further increase the reward for reporting correctly. So, if you think about a second-price auction, for example, the agent pays the second-highest price, and that's it. Here, the monitoring allows us to further screen bidders after they bid, which lets us refine the final payments through the bonus. Thus, bidders have an additional incentive to pay more (even in the single-agent case) just to make sure that they will have a higher chance of getting the good.
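    A minimal sketch of such an allocation rule, with made-up numbers (the `cap` at which allocation saturates and the reward slope are hypothetical, chosen only to illustrate the shape described above):

```python
def allocation_and_reward(bid, cap=8.0, max_reward=2.0):
    """Toy allocation rule (hypothetical shape): the probability of winning
    grows continuously with the bid until it reaches 100% at `cap`; beyond
    that point, higher bids are screened through a larger truthful-report
    reward instead of a larger allocation."""
    if bid < cap:
        prob = bid / cap                             # increasing, continuous allocation
        reward = 0.0
    else:
        prob = 1.0                                   # allocation saturates at 100%
        reward = min(max_reward, 0.5 * (bid - cap))  # reward keeps increasing
    return prob, reward

# Low bids win only probabilistically; high bids win surely and earn a bonus.
assert allocation_and_reward(4.0) == (0.5, 0.0)
assert allocation_and_reward(10.0) == (1.0, 1.0)
```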

  3. Q. 

    What impact does your optimization have on revenue? And how does that differ from auctions in classic settings?

    A. 

    This auction will, by design, generate higher revenues than the standard auction (without monitoring). Intuitively, because of the bonus, if the agent tries to take advantage of you by bidding too low, they do not get any bonus back. Now, if the agent tells you the truth, they are going to get a decent bonus. So, this creates an incentive that pushes them toward bidding the true value.


    In the paper, we present a nice characterization of why the revenue is going to be bigger. The typical idea in an auction is that you need to pay information rent to the agents. What happens is that this monitoring reduces the information rent by design. More precisely, the information rent gets reduced by a factor related to the best alternative bid the agent could place. That comes out very clearly in the math.

    We cannot say that we are going to do 20% or 30% more, because that is very specific to the company. However, note that this will be particularly impactful with a small number of agents — think of thin markets with a single bidder, for example, who could typically walk away with a lot of surplus. In specific settings (depending on distributions, number of agents, etc.), we provide examples in the paper where the gains are significant. Nonetheless, we can clearly say that we always reduce the information rent.
