
Fiddler's Model Performance Monitoring service is an all-in-one platform that allows customers to monitor, observe, explain, and analyze their AI systems.
Credit: Fiddler

Fiddler.ai CEO Krishna Gade on the emerging category of explainable AI

The founder and CEO of this Alexa Fund portfolio company answers three questions about ‘responsible AI’.

Editor’s Note: This interview is the latest installment in a series Amazon Science is publishing on the science behind products and services from companies in which Amazon has invested. The Alexa Fund first invested in Fiddler.ai in August 2020, and then in June of this year participated in the company’s $32 million funding round.

Gartner Group, the world’s leading research and advisory company, recently published its top strategic technology trends for 2022. Among them is what Gartner terms “AI Engineering”, or the discipline of operationalizing updates to artificial intelligence models by “using integrated data and model and development pipelines to deliver consistent business value from AI,” and by combining “automated update pipelines with strong AI governance.”

Gartner analysts further stated that by 2025 “the 10% of enterprises that establish AI engineering best practices will generate at least three times more value from their AI efforts than the 90% of enterprises that do not.”

Krishna Gade, co-founder and CEO of Fiddler.ai.
Credit: Fiddler.ai

That report and the surging interest in explainable AI, or XAI, are validation for Krishna Gade and his co-founders of Fiddler.ai, who started the company in 2018 with the belief that businesses needed a new kind of explainable AI service to address issues of fairness, accountability, transparency, and trust.

The idea behind the company’s formation emerged from Gade’s previous role as an engineering manager at Facebook, where he led a team that built tools to help the company’s developers find bugs and make its News Feed more transparent.

“When I joined Facebook [in 2016], the problem we were addressing was one of having hundreds of models coming together to make decisions about how likely an individual was to engage with a piece of content, comment on a post, or share it. But it was very difficult to answer questions like ‘Why am I seeing this story?’ or ‘Why is this story going viral?’”

That experience, Gade says, is what led him to form Fiddler.ai with his co-founders, Amit Paka and Manoj Cheenath.

“I realized this wasn’t a problem that just Facebook had to solve, but that it was a very general machine learning workflow problem,” Gade adds. “Until that point, we had lots of tools focused on helping data scientists and machine learning engineers to build and deploy models, but people weren’t focused on what happened after the models went into production. How do you monitor them? How do you explain them? How do you know that you can continue to trust them? Our vision was to create a Tableau-like tool for machine learning that could unify the management of these ML models, instrument them, monitor them, and explain how they’re behaving to various stakeholders.”

Amazon Science connected with Gade recently and asked him three questions about AI’s “black box” problem, some of the biggest challenges and opportunities in the emerging field of explainable AI, and his company’s machine learning model operations and monitoring solutions.

Q. A quick search of XAI on arXiv produces a large body of research focusing on AI’s “black box” problem. How is Fiddler addressing this challenge, and how do you differentiate your approach from others?

With AI, you’re training a system; you’re feeding it large volumes of data, historical data, both good and bad. For example, let's say you're trying to use AI to classify fraud, or to figure out the credit risk of your customers, or which customers are likely to churn in the future.

Fiddler.ai CEO Krishna Gade talks explainable AI

In this process you’re feeding the system this data and you're building a system that encodes patterns in the data into some sort of a structure. That structure is called the model architecture. It could be a neural network, a decision tree or a random forest; there are so many different model architectures that are available.

You then use this structure to attempt to predict the future. The problem with this approach is that these structures are artifacts that become more and more complex over time. Twenty years ago, when financial services companies were assessing credit risk, they were building mostly linear models, where you could see the weights of the equation and actually read and interpret them.

Today’s machine learning and deep learning models, by contrast, are not human interpretable (sometimes simply because of their complexity), in the sense that you cannot understand how the structure comes together to arrive at a prediction. This is where explainability becomes important, because now you’ve got a black-box system that could be highly accurate but is not human-readable. Without human understanding of how the model works, there is no way to fully trust the results, which should make stakeholders uneasy. This is where explainability adds business value to companies, helping them bridge this human-machine trust gap.

Without human understanding of how the model works, there is no way to fully trust the results, which should make stakeholders uneasy.
Krishna Gade
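To make the contrast concrete, here is a minimal sketch in Python (with made-up feature names and synthetic data, not Fiddler’s implementation) of what Gade describes: a linear credit-risk model whose weights can be read directly, next to a black-box model that exposes no such weights.

```python
# A minimal sketch (hypothetical data and feature names) contrasting an
# interpretable linear model with a black-box model on the same task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["loan_amount", "income", "fico_score", "employment_years"]
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Linear model: each weight can be read and interpreted directly.
linear = LogisticRegression().fit(X, y)
for name, weight in zip(features, linear.coef_[0]):
    print(f"{name:>18}: {weight:+.3f}")

# Black-box model: often more accurate, but there is no single set of
# weights to read; you need explainability tooling to see why it predicts.
black_box = GradientBoostingClassifier().fit(X, y)
print("black-box accuracy:", black_box.score(X, y))
```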

We’ve devised our explainable AI user experience to cater to different model types, so that explanations account for the various factors that go into making predictions. Perhaps you have a credit underwriting model that is predicting the risk of a particular loan. These types of models typically ingest attributes like the amount of the loan request, the income of the person requesting the loan, their FICO score, tenure of employment, and many other inputs.

These attributes go into the model as inputs, and the model outputs a probability of how risky it would be to approve this loan. The model could be of any type: a traditional machine learning model or a deep learning model. We visualize explanations in the context of the inputs so a data scientist can understand which predictive features have the most impact on the results.

We provide ways for you to understand that this particular loan’s risk probability is, for example, 30 percent, and here are the reasons why: these inputs are contributing positively by this magnitude, and these inputs are contributing negatively by this magnitude. It is like detective work, figuring out the root cause, and the practitioner can interactively fiddle with the value and weighting of inputs — hence the name Fiddler.

So you can ask questions like ‘Okay, the loan risk probability right now is 30% because the customer is asking for a $10,000 loan. What if the customer asked for an $8,000 loan? Would the loan risk go down? What if the customer was making $10,000 more in income? Or what if the customer’s FICO score was 10 points higher?’ You can ask these counterfactual questions by fiddling with the inputs, and you’ll get real-time explanations in an interactive manner, so you understand not only why the model is making its predictions, but also what would happen if the person requesting the loan had a different profile. You can actually provide the human in the loop with decision support.
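The kind of per-prediction attribution and counterfactual “what-if” analysis Gade describes can be sketched in a few lines. The example below is hypothetical: it trains a toy credit-risk model on synthetic data and uses the open-source shap and scikit-learn packages rather than Fiddler’s own tooling, but it illustrates contributions with signs and magnitudes, and re-scoring with fiddled inputs.

```python
# Hypothetical credit-risk model: per-feature attributions plus "what-if"
# re-scoring. Data, feature names, and model are made up for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "loan_amount": rng.uniform(1_000, 50_000, 2_000),
    "income": rng.uniform(20_000, 200_000, 2_000),
    "fico_score": rng.uniform(300, 850, 2_000),
    "employment_years": rng.uniform(0, 30, 2_000),
})
# Synthetic label: risk rises with loan size, falls with income and FICO.
score = 0.00005 * X.loan_amount - 0.00001 * X.income - 0.004 * X.fico_score
y = (score + rng.normal(scale=0.3, size=len(X)) > score.median()).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# 1) Attributions for one application: positive values push risk up,
#    negative values push it down; magnitudes show each input's impact.
application = X.iloc[[0]]
attributions = shap.TreeExplainer(model)(application)
for name, value in zip(X.columns, attributions.values[0]):
    print(f"{name:>18}: {value:+.4f}")

# 2) Counterfactual "what-if" questions: change one input and re-score.
baseline = model.predict_proba(application)[0, 1]
print(f"baseline risk: {baseline:.1%}")
what_ifs = {
    "loan_amount": application.loan_amount.iloc[0] - 2_000,   # ask for less
    "income": application.income.iloc[0] + 10_000,            # earn more
    "fico_score": application.fico_score.iloc[0] + 10,        # higher score
}
for feature, new_value in what_ifs.items():
    counterfactual = application.copy()
    counterfactual[feature] = new_value
    risk = model.predict_proba(counterfactual)[0, 1]
    print(f"if {feature} = {new_value:,.0f}: risk {risk:.1%}")
```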

We provide a pluggable service, which differentiates us from other monolithic, rigid products. Our customers can develop their AI systems however they want. They can build their own, or use third-party or open-source solutions. Or they can bring their models together with ours, which is what we call BYOM, or bring your own model, and we’ll help them explain them. We then visualize these explanations in various ways so they can be shown not only to the technical people who built the models, but also to business stakeholders or regulatory compliance stakeholders.

Q. What do you consider to be some of the biggest opportunities and challenges being addressed within the field of explainable AI today?

So today there are four problems that are introduced when you put machine learning models into production.

One is the black-box aspect that I talked about earlier. Most models are becoming increasingly complex. It is hard to know how they work, and that creates mistrust in how to use them and in how to assure customers your AI solutions are fair.

Number two is model performance in terms of accuracy, fairness, and data quality. Unlike traditional software, model behavior is not static. Traditional software will behave the same way whenever you interact with it, but machine learning model performance can go up and down. This is called model drift. Teams that developed these models realized this more acutely during the pandemic, finding that they had trained their models on pre-pandemic data and that the pandemic had completely changed user behavior.

On an e-commerce site, for example, customers were asking for different types of things, toilet paper being one of those early examples. We had all kinds of varying factors — people losing jobs, working from home, and the lack of travel — any one of which would impact pricing algorithms for the airlines.

Most models are becoming increasingly complex. It is hard to know how they work, and that creates mistrust in how to use them and in how to assure customers your AI solutions are fair.
Krishna Gade

Model drift has always been there, but the pandemic showed us how much impact drift can have. This dramatic, mass-drift event is an opportunity for businesses that realize they need monitoring not only at the level of high-level business metrics, but also at the model level, because by the time issues show up in the business metrics it is too late to recover. Having early warning systems for how your AI product is behaving has become essential for agility; identifying when and how model drift is happening has become table stakes.
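A common way to quantify the drift Gade describes is to compare a production window of a feature (or of the model’s predictions) against its training-time reference distribution, for example with the population stability index (PSI). The sketch below uses synthetic data and rule-of-thumb thresholds; it is a generic illustration, not Fiddler’s implementation.

```python
# Population stability index (PSI) between a training-time reference
# distribution and a window of production data (synthetic example).
import numpy as np

def psi(reference: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Higher PSI means the production distribution has drifted further from
    the reference; ~0.1 and ~0.25 are commonly used warning levels."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    prod_clipped = np.clip(production, edges[0], edges[-1])
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    prod_frac = np.histogram(prod_clipped, edges)[0] / len(production)
    ref_frac = np.clip(ref_frac, 1e-6, None)    # avoid log(0)
    prod_frac = np.clip(prod_frac, 1e-6, None)
    return float(np.sum((prod_frac - ref_frac) * np.log(prod_frac / ref_frac)))

rng = np.random.default_rng(2)
training_incomes = rng.normal(70_000, 15_000, 50_000)    # pre-pandemic reference
production_incomes = rng.normal(60_000, 25_000, 5_000)   # shifted behavior
print(f"PSI = {psi(training_incomes, production_incomes):.3f}")
```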

Third is bias. As you know, some of these models have a direct impact on customers’ lives: getting a loan approved or not, getting a job, getting a clinical diagnosis. Any of these events can change a person’s life, so a model going wrong, and going wrong in a big way for a certain sector of society, whether defined by demographics, ethnicity, gender, or other factors, can be really harmful to people. And that can seriously damage a company’s reputation and customer trust.

We’ve seen examples where a new credit card is launched and customers complain about gender discrimination because husbands and wives receive credit limits that differ by 10x, even though they have similar incomes and FICO scores. And when customers complain, customer support representatives might say, ‘Oh, it’s just the algorithm, we don’t know how it works.’ We can’t abdicate our responsibility to an algorithm. Detecting bias earlier in the lifecycle of models, and continuously monitoring for it, is super critical in many industries and high-stakes use cases.
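One widely used early-warning check for disparities like the one described above is the disparate-impact ratio: the lower group’s approval rate divided by the higher group’s. The sketch below uses made-up numbers and the common four-fifths rule of thumb; it is a generic illustration, not Fiddler’s bias tooling.

```python
# Disparate-impact ratio on hypothetical approval decisions, grouped by a
# protected attribute. The 0.8 threshold ("four-fifths rule") is a common
# rule of thumb for flagging potential bias, not a legal determination.
import numpy as np

rng = np.random.default_rng(3)
group = rng.choice(["A", "B"], size=10_000)                 # e.g., gender
approved = rng.random(10_000) < np.where(group == "A", 0.62, 0.48)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rate A={rate_a:.1%}, B={rate_b:.1%}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("warning: potential disparate impact, investigate the model")
```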

The fourth aspect is governance and compliance. There is a lot of news these days about AI and the need for regulation. Regulation is likely coming, and in certain countries it has already arrived. Businesses now have to focus on how to make their models compliant. For example, regulation is top of mind in some sectors, like financial services, where there are already well-defined regulations for how to build compliant models.

These are the four factors creating an opportunity for Fiddler to help our customers address these challenges, and they’re all linked by a common goal: building trust, both for those building the models and for the customers who need to know they can believe in the integrity of our customers’ products.

Q. Fiddler provides machine learning operations and monitoring solutions. Can you explain some of the science behind these solutions, and how customers are utilizing them to accelerate model deployment?

There are two main use cases for which customers turn to Fiddler. The first is pre-production model validation. So even before customers put the model into production, they need to understand how it is working: from an explainability standpoint, from a bias perspective, from understanding data imbalance issues, and so on.

Fiddler offers its customers many insights that can help them understand more about how the model they've created is going to work. For example, customers in the banking sector may use Fiddler for model validation to understand the risks of those models even before they’re deployed.

The second use case is post-production model monitoring. So now a business deploys a model into production – how is that model behaving? With Fiddler, users can set up alerts for when things go wrong so their machine learning engineers or data scientists can diagnose what’s happening.

Let’s say there’s model drift, or there are data-quality issues coming into your pipelines, and the accuracy of your model is going down. You can now figure out what's going on and then fix those issues. Any business or team that is deploying machine learning models needs to understand what is going on.
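At its core, this kind of post-production monitoring amounts to computing metrics over windows of production traffic and alerting when a threshold is crossed. The sketch below is a deliberately simplified, hypothetical stand-in for that idea; a product like Fiddler provides the metrics, alerting, and diagnosis workflows around it.

```python
# A deliberately simplified monitoring loop: compute windowed metrics and
# raise alerts when they cross thresholds (hypothetical data and thresholds).
from dataclasses import dataclass

@dataclass
class WindowMetrics:
    accuracy: float        # measured against delayed ground-truth labels
    missing_rate: float    # share of rows with missing feature values

def check_window(m: WindowMetrics, min_accuracy=0.85, max_missing=0.05):
    alerts = []
    if m.accuracy < min_accuracy:
        alerts.append(f"accuracy dropped to {m.accuracy:.1%}")
    if m.missing_rate > max_missing:
        alerts.append(f"data-quality issue: {m.missing_rate:.1%} missing values")
    return alerts

for window in [WindowMetrics(0.91, 0.01), WindowMetrics(0.78, 0.12)]:
    for alert in check_window(window):
        print("ALERT:", alert)  # in production this would notify an engineer
```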

Fiddler CEO Krishna Gade says there are two main reasons customers turn to Fiddler: the first is pre-production model validation; the second is post-production model monitoring.
Credit: Fiddler

We are seeing traction, in particular, within a couple of sectors. One is digital-native companies that need to deploy models quickly and monitor them proactively. They need to observe how their models are performing in production and how they’re affecting their business metrics.

Financial services is interesting because the sector has experienced increased regulation, particularly since 2008. Even before these firms started using machine learning models, they were building handcrafted quantitative models. In 2008 we had the economic crisis and the bank bailouts, and in response the Fed institutionalized the SR 11-7 guidance, which mandated risk management for every bank model, with stricter requirements for high-risk models like credit risk. So model risk management is a process that every bank in the United States, Europe, and elsewhere must follow.

Today, the quantitative models that banks use are being replaced or complemented by machine learning models due to the availability of a lot more data, specialized talent, and the tools to build more machine learning and deep learning models. Unfortunately, the governance approaches used to minimize risk and validate models in the past are no longer applicable for today’s more sophisticated and complex models.

The whole combination of pre-production model validation — understanding all the risks around models — and post-production model monitoring is called model risk management, and it is leading banks to look to Fiddler and others to help them address these challenges.

All of this comes together in our model performance monitoring (MPM) platform: a unified platform that provides the common language, metrics, and centralized controls required for operationalizing ML/AI with trust.

Our pluggable service allows our customers to bring a variety of models. They can be trained on structured or unstructured data sets (tabular data, text, or images), and they can be visualized for both technical and non-technical people at scale. Our customers can run their models wherever they want. They can use our managed cloud service, but they can also run it within their own environments, whether that’s a data center or their cloud provider of choice. So the pluggability of our solution, and the fact that we’re cloud- and model-agnostic, is what differentiates our product.
