Invalidating robotic ad clicks in real time

Slice-level detection of robots (SLIDR) uses deep-learning and optimization techniques to ensure that advertisers aren’t charged for robotic or fraudulent ad clicks.

Robotic-ad-click detection is the task of determining whether an ad click on an e-commerce website was initiated by a human or a software agent. Its goal is to ensure that advertisers’ campaigns are not billed for robotic activity and that human clicks are not invalidated. It must act in real time, to cause minimal disruption to the advertiser experience, and it must be scalable, comprehensive, precise, and able to respond rapidly to changing traffic patterns.

At this year’s Conference on Innovative Applications of Artificial Intelligence (IAAI) — part of AAAI, the annual meeting of the Association for the Advancement of Artificial Intelligence — we presented SLIDR, or SLIce-Level Detection of Robots, a real-time deep-neural-network model trained with weak supervision to identify invalid clicks on online ads. SLIDR has been deployed on Amazon since 2021, safeguarding advertiser campaigns against robotic clicks.

In the paper, we formulate a convex optimization problem that enables SLIDR to achieve optimal performance on individual traffic slices, with a budget of overall false positives. We also describe our system design, which enables continuous offline retraining and large-scale real-time inference, and we share some of the important lessons we’ve learned from deploying SLIDR, including the use of guardrails to prevent updates of anomalous models and disaster recovery mechanisms to mitigate or correct decisions made by a faulty model.

Challenges

Detecting robotic activity in online advertising faces several challenges: (1) precise ground-truth labels with high coverage are hard to come by; (2) bot behavior patterns continuously evolve; (3) bot behavior patterns vary significantly across traffic slices (e.g., desktop vs. mobile); and (4) false positives reduce ad revenue.

Labels

Since accurate ground truth is unavailable at scale, we generate data labels by identifying two high-hurdle activities that are very unlikely to be performed by a bot: (1) ad clicks that lead to purchases and (2) ad clicks from customer accounts with high RFM scores. RFM scores represent the recency (R), frequency (F), and monetary (M) value of customers’ purchasing patterns on Amazon. Clicks of either sort are labeled as human; all remaining clicks are marked as non-human.
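The weak-supervision rule above can be sketched as a simple labeling function. The field names and the RFM cutoff below are illustrative assumptions, not the production schema:

```python
HIGH_RFM_THRESHOLD = 0.9  # hypothetical cutoff for a "high" RFM score

def weak_label(click):
    """Label a click 'human' if it led to a purchase or came from a
    high-RFM customer account; mark all remaining clicks 'non-human'."""
    if click.get("led_to_purchase"):
        return "human"
    if click.get("rfm_score", 0.0) >= HIGH_RFM_THRESHOLD:
        return "human"
    return "non-human"
```

Because both signals are high-hurdle actions for a bot, the "human" labels are precise even though the "non-human" labels are noisy.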

Metrics

Due to the lack of reliable ground-truth labels, typical metrics such as accuracy cannot be used to evaluate model performance. So we turn to a trio of more-specific metrics.

Invalidation rate (IVR) is defined as the fraction of total clicks marked as robotic by the algorithm. IVR is indicative of the recall of our model, since a model with a higher IVR is more likely to invalidate robotic clicks.

On its own, however, IVR can be misleading, since a poorly performing model will invalidate human as well as robotic clicks. Hence we measure IVR in conjunction with the false-positive rate (FPR). We treat purchasing clicks as a proxy for the distribution of human clicks and define FPR as the fraction of purchasing clicks invalidated by the algorithm. Here, we make two assumptions: (1) all purchasing clicks are human, and (2) purchasing clicks are a representative sample of all human clicks.

We also define a more precise variant of recall by checking the model’s coverage over a heuristic that identifies clicks with a high likelihood of being robotic. The heuristic labels as robotic all clicks in user sessions with more than k ad clicks in an hour. We call this metric robotic coverage.
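The three metrics can be computed directly from click records. The dictionary fields and the value of k below are illustrative; the paper does not disclose the production cutoff:

```python
def compute_metrics(clicks, k=10):
    """Compute (IVR, FPR, robotic coverage) from a list of click dicts
    with hypothetical fields 'invalidated', 'purchasing', and
    'session_clicks_per_hour'."""
    invalidated = [c for c in clicks if c["invalidated"]]
    purchasing = [c for c in clicks if c["purchasing"]]
    robotic = [c for c in clicks if c["session_clicks_per_hour"] > k]

    ivr = len(invalidated) / len(clicks)
    fpr = sum(c["invalidated"] for c in purchasing) / len(purchasing)
    coverage = sum(c["invalidated"] for c in robotic) / len(robotic)
    return ivr, fpr, coverage
```

A good model drives robotic coverage toward 1 while keeping FPR near 0; IVR tracks how much total traffic is being invalidated.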

A neural model for detecting bots

We consider various input features for our model that will enable it to disambiguate robotic and human behavior:

  1. User-level frequency and velocity counters compute volumes and rates of clicks from users over various time periods. These enable identification of emergent robotic attacks that involve sudden bursts of clicks.
  2. User entity counters keep track of statistics such as number of distinct sessions or users from an IP. These features help to identify IP addresses that may be gateways with many users behind them.
  3. Time of click tracks hour of day and day of week, which are mapped to a unit circle. Although human activity follows diurnal and weekly activity patterns, robotic activity often does not.
  4. Logged-in status differentiates between logged-in customers and non-logged-in sessions, since we expect far more robotic traffic in the latter.
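The unit-circle mapping in feature 3 is a standard cyclical encoding; a minimal sketch, with the sine/cosine pair as the two model inputs:

```python
import math

def cyclical_encode(value, period):
    """Map hour of day (period 24) or day of week (period 7) to a point
    on the unit circle, so adjacent times stay adjacent (hour 23 ends up
    next to hour 0 instead of 23 units away)."""
    angle = 2.0 * math.pi * value / period
    return (math.sin(angle), math.cos(angle))

hour_features = cyclical_encode(23, 24)  # hour of day
dow_features = cyclical_encode(6, 7)     # day of week
```

This avoids the artificial discontinuity at midnight that a raw 0–23 integer feature would introduce.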

The neural network is a binary classifier consisting of three fully connected layers with ReLU activations and L2 regularization in the intermediate layers.

Neural-network architecture.
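A minimal numpy sketch of the classifier's forward pass: two ReLU hidden layers feeding a sigmoid output. The layer widths are illustrative guesses, and the L2 regularization mentioned above is a training-time weight penalty, so it does not appear in inference:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, params):
    """Three fully connected layers; returns P(click is robotic)."""
    w1, b1, w2, b2, w3, b3 = params
    h1 = relu(x @ w1 + b1)
    h2 = relu(h1 @ w2 + b2)
    return sigmoid(h2 @ w3 + b3)

rng = np.random.default_rng(0)
dims = [16, 64, 32, 1]  # input/hidden widths are assumptions
params = []
for i in range(3):
    params += [0.1 * rng.normal(size=(dims[i], dims[i + 1])),
               np.zeros(dims[i + 1])]
scores = forward(rng.normal(size=(5, 16)), params)  # batch of 5 clicks
```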

While training our model, we use sample weights that weigh clicks equivalently across hour of day, day of the week, logged-in status, and the label value. We have found sample weights to be crucial in improving the model’s performance and stability, especially with respect to sparse data slices such as night hours.
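One common way to realize such weighting, sketched here as an assumption rather than the exact production scheme, is inverse-frequency weighting over strata, where each stratum key would combine hour of day, day of week, logged-in status, and label:

```python
from collections import Counter

def sample_weights(strata):
    """Weight each example inversely to its stratum's frequency, so every
    stratum contributes equally to the loss; weights average to 1."""
    counts = Counter(strata)
    n, k = len(strata), len(counts)
    return [n / (k * counts[s]) for s in strata]
```

Under this scheme, clicks from sparse strata (e.g., night hours) receive larger weights than clicks from abundant ones.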

Baseline comparison.

We compare our model against baselines such as logistic regression and a heuristic rule that computes velocity scores of clicks. Both baselines lack the capacity to model complex patterns and hence underperform the neural network.

Calibration

Calibration involves choosing a threshold for the model’s output probability above which all clicks are marked as invalid. The model should invalidate clearly robotic clicks but at the same time avoid the revenue loss of invalidating human clicks. To that end, one option is to pick the “knee” of the IVR-FPR curve, beyond which the false-positive rate increases sharply relative to the increase in IVR.

IVR-FPR curve of full traffic.
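One simple way to locate such a knee on a discretized IVR-FPR curve, shown here as an illustrative heuristic rather than the paper's prescribed method, is to pick the point farthest from the chord joining the curve's endpoints:

```python
import numpy as np

def knee_index(fpr, ivr):
    """Return the index of the (FPR, IVR) point with maximum
    perpendicular distance from the chord between the endpoints."""
    fpr = np.asarray(fpr, dtype=float)
    ivr = np.asarray(ivr, dtype=float)
    p0 = np.array([fpr[0], ivr[0]])
    p1 = np.array([fpr[-1], ivr[-1]])
    chord = (p1 - p0) / np.linalg.norm(p1 - p0)
    pts = np.stack([fpr, ivr], axis=1) - p0
    dist = np.abs(pts[:, 0] * chord[1] - pts[:, 1] * chord[0])
    return int(np.argmax(dist))
```

The decision threshold is then whichever probability cutoff produced that point on the curve.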

But calibrating the model across all traffic slices together leads to different behaviors for different slices. For example, a decision threshold obtained via overall calibration, when applied to the desktop slice, could be undercalibrated: a lower probability threshold could invalidate more bots. Similarly, when the global decision threshold is applied to the mobile slice, it could be overcalibrated: a higher probability threshold might be able to recover some revenue loss without compromising on the bot coverage.

To ensure fairness across all traffic slices, we formulate calibration as a convex optimization problem. We perform joint optimization across all slices by fixing an overall FPR budget (an upper limit to the FPR of all slices combined) and solve to maximize the combined IVR on all slices together. The optimization must meet two conditions: (1) each slice has a minimum robotic coverage, which establishes a lower bound for its FPR, and (2) the combined FPR of all slices should not exceed the FPR budget.

IVR-FPR curve of traffic slices.

Since the IVR-FPR curve of each slice can be approximated as a quadratic function of the FPR, solving the joint optimization problem yields an appropriate per-slice FPR (and hence decision threshold) for each slice. We have found slice-level calibration to be crucial in lowering overall FPR and increasing robotic coverage.
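The joint calibration can be sketched as follows, under the assumption that each slice's curve is fit as a concave quadratic ivr_i(f) = b_i·f - a_i·f² with traffic share w_i and a minimum FPR f_min implied by its robotic-coverage floor; we maximize the combined IVR subject to the overall FPR budget by bisecting on the Lagrange multiplier. All coefficients below are illustrative, not production values:

```python
def calibrate(slices, budget, iters=60):
    """slices: list of (a, b, w, f_min) tuples per traffic slice.
    Returns the optimal per-slice FPR allocation."""
    def alloc(lam):
        # Stationarity of the Lagrangian: w*(b - 2*a*f) = lam*w,
        # clipped to each slice's coverage-implied floor.
        return [max((b - lam) / (2 * a), f_min)
                for a, b, w, f_min in slices]

    lo, hi = 0.0, max(b for _, b, _, _ in slices)
    for _ in range(iters):
        lam = (lo + hi) / 2
        spent = sum(w * f for (_, _, w, _), f in zip(slices, alloc(lam)))
        if spent > budget:
            lo = lam  # over budget: raise the shadow price of FPR
        else:
            hi = lam
    return alloc(hi)
```

With two equal-traffic slices (a=1, b=1 and a=1, b=0.5, both with f_min=0) and a budget of 0.2, the bisection allocates more of the FPR budget to the slice whose curve yields more IVR per unit of FPR.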

Deployment

To quickly adapt to changing bot patterns, we built an offline system that retrains and recalibrates the model on a daily basis. For incoming traffic requests, the real-time component computes the feature values using a combination of Redis and read-only DB caches and runs the neural-network inference on a horizontally scalable fleet of GPU instances. To meet the real-time constraint, the entire inference service, which runs on AWS, has a p99.9 latency below five milliseconds.

The SLIDR system design.

To address data and model anomalies during retraining and recalibration, we put certain guardrails on the input training data and the model performance. For example, when purchase labels are missing for a few hours, the model can learn to invalidate a large amount of traffic. Guardrails such as minimum human density in every hour of a week prevent such behavior.
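The human-density guardrail can be sketched as a simple pre-promotion check; the threshold and input shape below are hypothetical:

```python
def passes_guardrail(hourly_human_fraction, min_density=0.05):
    """Reject a training set unless every hour of the week has at least
    a minimum fraction of human-labeled clicks; guards against windows
    where purchase labels went missing."""
    return all(frac >= min_density for frac in hourly_human_fraction)
```

A retrained model is promoted only if all such guardrails pass; otherwise the previous model stays in production.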

We have also developed disaster recovery mechanisms that help prevent high-impact events: quick rollbacks to a previously stable model when a sharp metric deviation is observed, and a replay tool that can replay traffic through a previously stable model or recompute real-time features and publish delayed decisions.

In the future, we plan to add more features to the model, such as learned representations for users, IPs, UserAgents, and search queries. We presented our initial work in that direction in our NeurIPS 2022 paper, “Self supervised pre-training for large scale tabular data”. We also plan to experiment with advanced neural architectures such as deep and cross networks, which can effectively capture feature interactions in tabular data.

Acknowledgements: Muneeb Ahmed
