Invalidating robotic ad clicks in real time

Slice-level detection of robots (SLIDR) uses deep-learning and optimization techniques to ensure that advertisers aren’t charged for robotic or fraudulent ad clicks.

Robotic-ad-click detection is the task of determining whether an ad click on an e-commerce website was initiated by a human or a software agent. Its goal is to ensure that advertisers’ campaigns are not billed for robotic activity and that human clicks are not invalidated. It must act in real time, so that it causes minimal disruption to the advertiser experience, and it must be scalable, comprehensive, precise, and able to respond rapidly to changing traffic patterns.

At this year’s Conference on Innovative Applications of Artificial Intelligence (IAAI) — part of AAAI, the annual meeting of the Association for the Advancement of Artificial Intelligence — we presented SLIDR, or SLIce-Level Detection of Robots, a real-time deep-neural-network model trained with weak supervision to identify invalid clicks on online ads. SLIDR has been deployed on Amazon since 2021, safeguarding advertiser campaigns against robotic clicks.


In the paper, we formulate a convex optimization problem that enables SLIDR to achieve optimal performance on individual traffic slices, with a budget of overall false positives. We also describe our system design, which enables continuous offline retraining and large-scale real-time inference, and we share some of the important lessons we’ve learned from deploying SLIDR, including the use of guardrails to prevent updates of anomalous models and disaster recovery mechanisms to mitigate or correct decisions made by a faulty model.

Challenges

Detecting robotic activity in online advertising poses several challenges: (1) precise ground-truth labels with high coverage are hard to come by; (2) bot behavior patterns are continuously evolving; (3) bot behavior patterns vary significantly across different traffic slices (e.g., desktop vs. mobile); and (4) false positives reduce ad revenue.

Labels

Since accurate ground truth is unavailable at scale, we generate data labels by identifying two high-hurdle activities that are very unlikely to be performed by a bot: (1) ad clicks that lead to purchases and (2) ad clicks from customer accounts with high RFM scores. RFM scores represent the recency (R), frequency (F), and monetary (M) value of customers’ purchasing patterns on Amazon. Clicks of either sort are labeled as human; all remaining clicks are marked as non-human.
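
As a rough sketch of this weak-labeling rule, the snippet below assumes a click log with a purchase flag and a per-account RFM score; the column names and the RFM threshold are illustrative, not the production schema.

```python
import pandas as pd

def weak_label(clicks: pd.DataFrame, rfm_threshold: float) -> pd.Series:
    """Weakly label clicks: 1 = human, 0 = non-human.

    A click is labeled human if it led to a purchase or came from an
    account whose RFM score clears a (hypothetical) threshold; every
    other click is labeled non-human.
    """
    is_human = clicks["led_to_purchase"] | (clicks["rfm_score"] >= rfm_threshold)
    return is_human.astype(int)
```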

Metrics

Because reliable ground-truth labels are unavailable, typical metrics such as accuracy cannot be used to evaluate model performance. So we turn to a trio of more-specific metrics.


Invalidation rate (IVR) is defined as the fraction of total clicks marked as robotic by the algorithm. IVR is indicative of the recall of our model, since a model with a higher IVR is more likely to invalidate robotic clicks.

On its own, however, IVR can be misleading, since a poorly performing model will invalidate human and robotic clicks alike. Hence we measure IVR in conjunction with the false-positive rate (FPR). We consider purchasing clicks a proxy for the distribution of human clicks and define FPR as the fraction of purchasing clicks invalidated by the algorithm. Here, we make two assumptions: (1) all purchasing clicks are human, and (2) purchasing clicks are a representative sample of all human clicks.

We also define a more precise variant of recall by checking the model’s coverage of a heuristic that identifies clicks with a high likelihood of being robotic. The heuristic labels as robotic all clicks in user sessions with more than k ad clicks in an hour. We call this metric robotic coverage.
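
All three metrics are simple to compute once the model’s decisions are available. The sketch below assumes boolean arrays over a set of clicks (invalidated by the model, led to a purchase, flagged by the more-than-k-clicks-per-hour heuristic); the array names are illustrative.

```python
import numpy as np

def evaluation_metrics(invalidated: np.ndarray,
                       purchased: np.ndarray,
                       heuristic_robotic: np.ndarray) -> dict:
    """Compute IVR, FPR, and robotic coverage for one set of decisions."""
    ivr = invalidated.mean()                                  # fraction of all clicks invalidated
    fpr = invalidated[purchased].mean()                       # fraction of purchasing clicks invalidated
    robotic_coverage = invalidated[heuristic_robotic].mean()  # coverage of heuristic-robotic clicks
    return {"IVR": ivr, "FPR": fpr, "robotic_coverage": robotic_coverage}
```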

A neural model for detecting bots

We consider various input features for our model that will enable it to disambiguate robotic and human behavior:

  1. User-level frequency and velocity counters compute volumes and rates of clicks from users over various time periods. These enable identification of emergent robotic attacks that involve sudden bursts of clicks.
  2. User entity counters keep track of statistics such as number of distinct sessions or users from an IP. These features help to identify IP addresses that may be gateways with many users behind them.
  3. Time of click tracks hour of day and day of week, which are mapped to a unit circle (see the encoding sketch after this list). Although human activity follows diurnal and weekly activity patterns, robotic activity often does not.
  4. Logged-in status differentiates between logged-in customers and non-logged-in sessions, as we expect much more robotic traffic in the latter.
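
As an example of the unit-circle mapping for the time-of-click features, the sketch below encodes hour of day and day of week as sine/cosine pairs; the exact encoding used in production is not specified here, so treat this as an assumption.

```python
import numpy as np

def encode_time_of_click(hour_of_day: np.ndarray, day_of_week: np.ndarray) -> np.ndarray:
    """Map hour of day and day of week onto a unit circle so that, e.g.,
    hour 23 and hour 0 end up close together in feature space."""
    hour_angle = 2 * np.pi * hour_of_day / 24.0
    day_angle = 2 * np.pi * day_of_week / 7.0
    return np.stack([np.sin(hour_angle), np.cos(hour_angle),
                     np.sin(day_angle), np.cos(day_angle)], axis=1)
```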

The neural network is a binary classifier consisting of three fully connected layers with ReLU activations and L2 regularization in the intermediate layers.
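
A minimal Keras sketch of such a classifier follows; the layer widths, optimizer, and regularization strength are illustrative choices, not the deployed configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_classifier(num_features: int) -> tf.keras.Model:
    """Binary classifier: two intermediate ReLU layers with L2 regularization,
    followed by a sigmoid output layer."""
    inputs = tf.keras.Input(shape=(num_features,))
    x = layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(inputs)
    x = layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```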

[Figure: Neural-network architecture.]

While training our model, we use sample weights that weigh clicks equivalently across hour of day, day of the week, logged-in status, and the label value. We have found sample weights to be crucial in improving the model’s performance and stability, especially with respect to sparse data slices such as night hours.
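
One way to realize such sample weights is inverse-frequency weighting over the (hour of day, day of week, logged-in status, label) buckets, as sketched below; the column names and the normalization are assumptions, and the resulting weights would be passed to the trainer as per-example weights.

```python
import pandas as pd

def balanced_sample_weights(df: pd.DataFrame) -> pd.Series:
    """Give every (hour-of-day, day-of-week, logged-in, label) bucket equal
    total weight, so that sparse slices such as night hours are not drowned out."""
    bucket_size = (df.groupby(["hour_of_day", "day_of_week", "logged_in", "label"])["label"]
                     .transform("size"))
    weights = 1.0 / bucket_size
    return weights * len(df) / weights.sum()  # normalize so the mean weight is 1
```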

[Figure: Baseline comparison.]

We compare our model against baselines such as logistic regression and a heuristic rule that computes velocity scores of clicks. Both baselines lack the capacity to model complex patterns and hence cannot match the neural network’s performance.

Calibration

Calibration involves choosing a threshold for the model’s output probability above which all clicks are marked as invalid. The model should invalidate clearly robotic clicks but at the same time avoid high revenue loss from invalidating human clicks. To that end, one option is to pick the “knee” of the IVR-FPR curve, beyond which the false-positive rate increases sharply relative to the gain in IVR.
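
As an illustration, one common way to locate such a knee is to take the point on the curve farthest from the chord joining its endpoints; the paper does not prescribe a particular knee-detection method, so the sketch below is only one possibility. It assumes the three arrays are aligned and swept in order of increasing threshold.

```python
import numpy as np

def knee_threshold(thresholds: np.ndarray, ivr: np.ndarray, fpr: np.ndarray) -> float:
    """Return the decision threshold at the 'knee' of the IVR-FPR curve,
    defined here as the point with maximum distance from the straight line
    joining the curve's endpoints."""
    # Normalize both axes so the distance is scale-free.
    x = (fpr - fpr.min()) / (fpr.max() - fpr.min())
    y = (ivr - ivr.min()) / (ivr.max() - ivr.min())
    # Perpendicular distance of each point from the endpoint-to-endpoint chord.
    chord = np.array([x[-1] - x[0], y[-1] - y[0]])
    chord = chord / np.linalg.norm(chord)
    dist = np.abs((x - x[0]) * chord[1] - (y - y[0]) * chord[0])
    return float(thresholds[np.argmax(dist)])
```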

[Figure: IVR-FPR curve of full traffic.]

But calibrating the model across all traffic slices together leads to different behaviors for different slices. For example, a decision threshold obtained via overall calibration, when applied to the desktop slice, could be undercalibrated: a lower probability threshold could invalidate more bots. Similarly, when the global decision threshold is applied to the mobile slice, it could be overcalibrated: a higher probability threshold might be able to recover some revenue loss without compromising on the bot coverage.

To ensure fairness across all traffic slices, we formulate calibration as a convex optimization problem. We perform joint optimization across all slices, fixing an overall FPR budget (an upper limit on the combined FPR of all slices) and solving to maximize the combined IVR of all slices together. The optimization must meet two conditions: (1) each slice maintains a minimum robotic coverage, which establishes a lower bound for its FPR, and (2) the combined FPR of all slices must not exceed the FPR budget.

[Figure: IVR-FPR curve of traffic slices.]

Since the IVR-FPR curve of each slice can be approximated as a quadratic function of the FPR, solving the joint optimization problem yields an appropriate operating point (and hence decision threshold) for each slice. We have found slice-level calibration to be crucial in lowering overall FPR and increasing robotic coverage.
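
Under that quadratic approximation, the joint calibration can be posed, for instance, as the small cvxpy program below; how the per-slice coefficients are fitted, how slices are weighted, and the exact constraint forms are assumptions made for illustration.

```python
import cvxpy as cp

def calibrate_slices(a, b, c, click_share, purchase_share, fpr_floor, fpr_budget):
    """Choose a per-slice FPR operating point that maximizes combined IVR.

    Each slice's IVR is approximated offline by a concave quadratic
        IVR_i(f_i) ~ a[i] + b[i] * f_i + c[i] * f_i**2,  with c[i] <= 0.
    click_share / purchase_share are each slice's fractions of all clicks and
    of all purchasing clicks; fpr_floor encodes the minimum robotic coverage.
    All inputs are equal-length NumPy arrays.
    """
    f = cp.Variable(len(a))
    ivr = cp.multiply(b, f) + cp.multiply(c, cp.square(f)) + a  # concave in f
    objective = cp.Maximize(cp.sum(cp.multiply(click_share, ivr)))
    constraints = [
        f >= fpr_floor,                                        # per-slice minimum robotic coverage
        cp.sum(cp.multiply(purchase_share, f)) <= fpr_budget,  # overall FPR budget
        f <= 1.0,
    ]
    cp.Problem(objective, constraints).solve()
    return f.value
```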

Deployment

To quickly adapt to changing bot patterns, we built an offline system that retrains and recalibrates the model on a daily basis. For incoming traffic requests, the real-time component computes the feature values using a combination of Redis and read-only DB caches and runs the neural-network inference on a horizontally scalable fleet of GPU instances. To meet the real-time constraint, the entire inference service, which runs on AWS, has a p99.9 latency below five milliseconds.

[Figure: The SLIDR system design.]

To address data and model anomalies during retraining and recalibration, we put certain guardrails on the input training data and the model performance. For example, when purchase labels are missing for a few hours, the model can learn to invalidate a large amount of traffic. Guardrails such as minimum human density in every hour of a week prevent such behavior.
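
A guardrail of that kind can be as simple as the check sketched below, which refuses a retraining run when any hour-of-week bucket falls under a minimum fraction of human-labeled clicks; the column names and the threshold are illustrative.

```python
import pandas as pd

def passes_label_guardrail(training_data: pd.DataFrame,
                           min_human_density: float = 0.01) -> bool:
    """Return False if any (day-of-week, hour-of-day) bucket has too few
    human-labeled clicks, e.g. because purchase labels were missing."""
    hourly_density = training_data.groupby(["day_of_week", "hour_of_day"])["label"].mean()
    return bool(len(hourly_density) == 7 * 24 and (hourly_density >= min_human_density).all())
```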


We have also developed disaster recovery mechanisms that help prevent high-impact events: quick rollbacks to a previously stable model when a sharp metric deviation is observed and a replay tool that can replay traffic through a previously stable model or recompute real-time features and publish delayed decisions.

In the future, we plan to add more features to the model, such as learned representations for users, IPs, UserAgents, and search queries. We presented our initial work in that direction in our NeurIPS 2022 paper, “Self supervised pre-training for large scale tabular data”. We also plan to experiment with advanced neural architectures such as deep and cross networks, which can effectively capture feature interactions in tabular data.

Acknowledgements: Muneeb Ahmed
