Jeff Wilke, who was then Amazon's consumer worldwide CEO, delivering a keynote presentation at re:MARS 2019

The history of Amazon's recommendation algorithm

Collaborative filtering and beyond.

In 2017, when the journal IEEE Internet Computing was celebrating its 20th anniversary, its editorial board decided to identify the single paper from its publication history that had best withstood the “test of time”. The honor went to a 2003 paper called “Amazon.com Recommendations: Item-to-Item Collaborative Filtering”, by then Amazon researchers Greg Linden, Brent Smith, and Jeremy York.

Collaborative filtering is the most common way to do product recommendation online. It’s “collaborative” because it predicts a given customer’s preferences on the basis of other customers’.

“There was already a lot of interest and work in it,” says Smith, now the leader of Amazon’s Weblab, which does A/B testing (structured testing of variant offerings) at scale to enable data-driven business decisions. “The world was focused on user-based collaborative filtering. A user comes to the website: What other users are like them? We sort of turned it on its head and found a different way of doing it that had a lot better scaling and quality characteristics for online recommendations.”


The better way was to base product recommendations not on similarities between customers but on correlations between products. With user-based collaborative filtering, a visitor to Amazon.com would be matched with other customers who had similar purchase histories, and those purchase histories would suggest recommendations for the visitor.

With item-to-item collaborative filtering, on the other hand, the recommendation algorithm would review the visitor’s recent purchase history and, for each purchase, pull up a list of related items. Items that showed up repeatedly across all the lists were candidates for recommendation to the visitor. But those candidates were given greater or lesser weight depending on how related they were to the visitor's prior purchases.


That notion of relatedness is still derived from customers’ purchase histories: item B is related to item A if customers who buy A are unusually likely to buy B as well. But Amazon’s Personalization team found, empirically, that analyzing purchase histories at the item level yielded better recommendations than analyzing them at the customer level.
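
A minimal sketch of that scheme in Python may help make it concrete. The names and data structures here are illustrative, not Amazon's production code: a precomputed related-items table stands in for the offline index, and each related item accumulates weight across the visitor's recent purchases.

```python
from collections import defaultdict

def recommend(recent_purchases, related_items, top_n=10):
    """Item-to-item collaborative filtering, sketched.

    recent_purchases: list of item IDs the visitor recently bought.
    related_items: dict mapping an item ID to a list of
        (related_item_id, relatedness_score) pairs, precomputed offline.
    """
    scores = defaultdict(float)
    for item in recent_purchases:
        for candidate, relatedness in related_items.get(item, []):
            if candidate in recent_purchases:
                continue  # don't recommend what the visitor already bought
            # Items that show up on several lists accumulate weight,
            # scaled by how related they are to each prior purchase.
            scores[candidate] += relatedness
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```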

Family ties

Beyond improving recommendations, item-to-item collaborative filtering also offered significant computational advantages. Finding the group of customers whose purchase histories most closely resemble a given visitor’s would require comparing purchase histories across Amazon’s entire customer database. That would be prohibitively time consuming during a single site visit.


The alternatives are either to randomly sample other customers in real time and settle for the best matches found or to build a huge offline similarity index by comparing every customer to every other. Because Amazon customers’ purchase histories can change dramatically in the course of a single day, that index would have to be updated regularly. Even offline indexing presents a huge computational burden.

On average, however, a given product sold in the Amazon Store is purchased by only a tiny subset of the site’s customers. That means that inspecting the recent-purchase histories of everyone who bought a given item requires far fewer lookups than identifying the customers who most resemble a given site visitor. Smith and his colleagues found that even with early-2000s technology, it was computationally feasible to produce an updated list of related items for every product on the Amazon site on a daily basis.


The crucial question: how to measure relatedness. Simply counting how often purchasers of item A also bought item B wouldn’t do; that would make a few bestsellers like Harry Potter books and trash bags the top recommendations for every customer on every purchase.

Instead, the Amazon researchers used a relatedness metric based on differential probabilities: item B is related to item A if purchasers of A are more likely to buy B than the average Amazon customer is. The greater the difference in probability, the greater the items’ relatedness.
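
In code, that metric can be sketched roughly as follows. This is one illustrative reading of "differential probability", not Amazon's exact production formula:

```python
def relatedness(buyers_of, total_customers, item_a, item_b):
    """Score how related item_b is to item_a using differential probabilities.

    buyers_of: dict mapping an item ID to the set of customers who bought it.
    total_customers: total number of customers in the data set.
    """
    a_buyers = buyers_of[item_a]
    b_buyers = buyers_of[item_b]
    if not a_buyers:
        return 0.0
    p_b_given_a = len(a_buyers & b_buyers) / len(a_buyers)  # P(buy B | bought A)
    p_b = len(b_buyers) / total_customers                   # P(buy B) overall
    # Positive when A's buyers are more likely than the average customer to buy B.
    return p_b_given_a - p_b
```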

When Linden, Smith, and York published their paper in IEEE Internet Computing, their item-based recommendation algorithm had already been in use for six years. But it took several more years to identify and correct a fundamental flaw in the relatedness measure.

Getting the math right

The problem: the algorithm was systematically underestimating the baseline likelihood that someone who bought A would also buy B. Since a customer who buys a lot of products is more likely to buy A than a customer who buys few products, A buyers are, on average, heavier buyers than the typical Amazon customer. But because they’re heavy buyers, they’re also unusually likely to buy B.

Smith and his colleagues realized that it wasn’t enough to assess the increased likelihood of buying product B given the purchase of product A; they had to assess the increased likelihood of buying product B with any given purchase. That is, they discounted heavy buyers’ increased likelihood of buying B according to the heaviness of their buying.
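
The article doesn't spell out the corrected formula, but one hedged way to sketch the idea is to estimate, for each buyer of A, the chance they would have bought B anyway given how many purchases they made, and then compare the observed co-purchases with that expectation. The helper below is illustrative only:

```python
def adjusted_relatedness(buyers_of, purchase_counts, total_purchases, item_a, item_b):
    """Sketch of the heavy-buyer correction (illustrative, not Amazon's exact math).

    buyers_of: dict item ID -> set of customers who bought it.
    purchase_counts: dict customer ID -> that customer's total number of purchases.
    total_purchases: total number of purchases across all customers.
    """
    a_buyers = buyers_of[item_a]
    b_buyers = buyers_of[item_b]
    # Rough chance that any single purchase is B (buyer count as a proxy).
    p_b = len(b_buyers) / total_purchases

    observed = len(a_buyers & b_buyers)
    # Expected co-purchases if B were bought at random: a customer with k purchases
    # has probability 1 - (1 - p_b)**k of buying B at least once, so heavy buyers
    # contribute a larger baseline and are effectively discounted.
    expected = sum(1 - (1 - p_b) ** purchase_counts[c] for c in a_buyers)
    return observed - expected
```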

“That was a large improvement to recommendations quality, when we got the math right,” Smith says.


That was more than a decade ago. Since then, Amazon researchers have been investigating a wide variety of ways to make customer recommendations more useful: moving beyond collaborative filtering to factor in personal preferences such as brands or fashion styles; learning to time recommendations (you may want to order more diapers!); and learning to target recommendations to different users of the same account, among many other things.

In June 2019, during a keynote address at Amazon’s first re:MARS conference, Jeff Wilke, then the CEO of Amazon’s consumer division, highlighted one particular advance, in the algorithm for recommending movies to Amazon’s Prime Video customers. Amazon researchers’ innovations led to a twofold improvement in that algorithm’s performance, which Wilke described as a “once-in-a-decade leap”.

Entering the matrix

Recommendation is often modeled as a matrix completion problem. Imagine a huge grid, whose rows represent Prime Video customers and whose columns represent the movies in the Prime Video catalogue. If a customer has seen a particular movie, the corresponding cell in the grid contains a one; if not, it’s blank. The goal of matrix completion is to fill in the grid with the probabilities that any given customer will watch any given movie.

In 2014, Vijai Mohan’s team in the Personalization group — Avishkar Misra, Jane You, Rejith Joseph, Scott Le Grand, and Eric Nalisnick — was asked to design a new recommendation algorithm for Prime Video. At the time, the standard technique for generating personalized recommendations was matrix factorization, which identifies relatively small matrices that, multiplied together, will approximate a much larger matrix.
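
As a rough illustration of matrix factorization (a toy NumPy sketch with made-up dimensions, not the Prime Video system), two small factor matrices are learned so that their product approximates the large customer-by-movie grid:

```python
import numpy as np

def factorize(viewed, rank=20, steps=200, lr=0.01, reg=0.1):
    """Approximate a customers-x-movies 0/1 matrix as U @ V.T via gradient descent.

    viewed: 2-D array with 1 where a customer watched a movie, 0 elsewhere.
    """
    n_customers, n_movies = viewed.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_customers, rank))
    V = 0.1 * rng.standard_normal((n_movies, rank))
    for _ in range(steps):
        error = viewed - U @ V.T                 # reconstruction error
        U_step = lr * (error @ V - reg * U)      # gradient step for customer factors
        V_step = lr * (error.T @ U - reg * V)    # gradient step for movie factors
        U += U_step
        V += V_step
    return U @ V.T  # filled-in scores for every customer-movie cell
```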


Inspired by work done by Ruslan Salakhutdinov — then an assistant professor of computer science at the University of Toronto — Mohan’s team instead decided to apply deep neural networks to the problem of matrix completion.

The typical deep neural network contains thousands or even millions of simple processing nodes, arranged into layers. Data is fed into the nodes of the bottom layer, which process it and pass their results to the next layer, and so on; the output of the top layer represents the result of some computation.

Training the network consists of feeding it lots of sample inputs and outputs. During training, the network’s settings are constantly adjusted, until they minimize the average discrepancy between the top layer’s output and the target outputs in the training examples.

Reconstruction

Matrix completion methods commonly use a type of neural network called an autoencoder. The autoencoder is trained simply to output the same data it takes as input. But between the input and output layers is a bottleneck, a layer with relatively few nodes — in this case, only 100, versus tens of thousands of input and output nodes.


As a consequence, the network can’t just copy inputs directly to outputs; it must learn a general procedure for compressing and then re-expanding every example in the training set. The re-expansion will be imperfect: in the movie recommendation setting, the network will guess that customers have seen movies they haven’t. But when, for a given customer-movie pair, it guesses wrong with high confidence, that’s a good sign that the customer would be interested in that movie.
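
Sketched in code, such an autoencoder might look like the following. This is a minimal PyTorch sketch, not the system Mohan's team actually built in 2014; the layer sizes and training loop are illustrative:

```python
import torch
import torch.nn as nn

class WatchHistoryAutoencoder(nn.Module):
    """Compress a customer's 0/1 watch vector through a narrow bottleneck and
    reconstruct it; confident 'wrong' reconstructions suggest movies the
    customer might enjoy. Sizes are illustrative."""

    def __init__(self, n_movies, bottleneck=100):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_movies, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, n_movies), nn.Sigmoid())

    def forward(self, watch_vector):
        return self.decoder(self.encoder(watch_vector))

def train(model, watch_matrix, epochs=10, lr=1e-3):
    """Full-batch training sketch; watch_matrix is a float tensor of 0s and 1s."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(watch_matrix), watch_matrix)
        loss.backward()
        opt.step()
```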

To benchmark the autoencoder’s performance, the researchers compared it to two baseline systems. One was the latest version of Smith and his colleagues’ collaborative-filtering algorithm. The other was a simple listing of the most popular movie rentals of the previous two weeks. “In the recommendations world, there’s a cardinal rule,” Mohan says. “If I know nothing about you, then the best things to recommend to you are the most popular things in the world.”

To their mild surprise, the item-to-item collaborative-filtering algorithm outperformed the autoencoder. But to their much greater surprise, so did the simple bestseller list. The autoencoder’s performance was “so bad that we had to go and double-check and re-run the experiments multiple times,” Mohan says. “I was giving a hard time to the scientists. I was saying, ‘You probably made a mistake.’”

Once they were sure the results were valid, however, they were quick to see why. In a vacuum, matrix completion may give the best overview of a particular customer’s tastes. But at any given time, most movie watchers will probably opt for recent releases over neglected classics in their preferred genres.

Neural network classifiers with time considerations
Amazon researchers found that using neural networks to generate movie recommendations worked much better when they sorted the input data chronologically and used it to predict future movie preferences over a short (one- to two-week) period.

So Mohan’s team re-framed the problem. They still used an autoencoder, but they trained it on movie-viewing data that had been sorted chronologically. During training, the autoencoder saw data on movies that customers had watched before some cutoff time. But it was evaluated on how well it predicted the movies they had watched in the two-week period after the cutoff time.

Because Prime Video’s Web interface displays six movie recommendations on the page associated with each title in its catalogue, the researchers evaluated their system on whether at least one of its top six recommendations for a given customer was in fact a movie that that customer watched in the two-week period after the cutoff date. By that measure, not only did the autoencoder outperform the bestseller list, but it also outperformed item-to-item collaborative filtering, two to one. As Wilke put it at re:MARS, “We had a winner.”
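
A sketch of that evaluation, with illustrative names: using a model trained only on pre-cutoff viewing, score every movie a customer hasn't yet watched, take the top six, and count a hit if any of them appears in the customer's viewing during the two weeks after the cutoff.

```python
def hit_at_6(model_scores, watched_before, watched_after):
    """Fraction of customers for whom at least one top-6 recommendation
    was watched in the two weeks after the cutoff.

    model_scores: dict customer -> dict movie -> predicted score,
        from a model trained only on pre-cutoff viewing data.
    watched_before / watched_after: dict customer -> set of movies watched
        before / in the two weeks after the cutoff.
    """
    hits = 0
    for customer, scores in model_scores.items():
        candidates = {m: s for m, s in scores.items()
                      if m not in watched_before.get(customer, set())}
        top6 = sorted(candidates, key=candidates.get, reverse=True)[:6]
        if any(m in watched_after.get(customer, set()) for m in top6):
            hits += 1
    return hits / len(model_scores)
```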

Whether any of the work that Amazon researchers are doing now will win test-of-time awards two decades hence remains to be seen. But Smith, Mohan, and their colleagues will continue to pursue new approaches to designing recommendation algorithms, in the hope of making Amazon.com that much more useful for customers.
