
Fiddler's Model Performance Monitoring service is an all-in-one platform that allows customers to monitor, observe, explain, and analyze their AI systems.
Credit: Fiddler

Fiddler.ai CEO Krishna Gade on the emerging category of explainable AI

The founder and CEO of this Alexa Fund portfolio company answers three questions about ‘responsible AI’.

Editor’s Note: This interview is the latest installment within a series Amazon Science is publishing related to the science behind products and services from companies in which Amazon has invested. The Alexa Fund first invested in Fiddler.ai in August 2020, and then in June of this year participated in the company’s $32 million funding round.

Gartner Group, the world’s leading research and advisory company, recently published its top strategic technology trends for 2022. Among them is what Gartner terms “AI Engineering”, or the discipline of operationalizing updates to artificial intelligence models by “using integrated data and model and development pipelines to deliver consistent business value from AI,” and by combining “automated update pipelines with strong AI governance.”

Gartner analysts further stated that by 2025 “the 10% of enterprises that establish AI engineering best practices will generate at least three times more value from their AI efforts than the 90% of enterprises that do not.”

Krishna Gade, co-founder and CEO of Fiddler.ai.
Credit: Fiddler.ai

That report, and the surging interest in explainable AI, or XAI, are validation for Krishna Gade and his co-founders of Fiddler.ai, who started the company in 2018 with the belief that businesses needed a new kind of explainable AI service to address issues of fairness, accountability, transparency, and trust.

The idea behind the company’s formation emerged from Gade’s previous role as an engineering manager at Facebook, where he led a team that built tools to help the company’s developers find bugs and to make the News Feed more transparent.

“When I joined Facebook [in 2016], the problem we were addressing was one of having hundreds of models coming together to make decisions about how likely it would be for an individual to engage with the content, or how likely they would comment on a post, or share it. But it was very difficult to answer questions like ‘Why am I seeing this story?’ or ‘Why is this story going viral?’”

That experience, Gade says, is what led him to form Fiddler.ai with his co-founders, Amit Paka and Manoj Cheenath.

“I realized this wasn’t a problem that just Facebook had to solve, but that it was a very general machine learning workflow problem,” Gade adds. “Until that point, we had lots of tools focused on helping data scientists and machine learning engineers to build and deploy models, but people weren’t focused on what happened after the models went into production. How do you monitor them? How do you explain them? How do you know that you can continue to trust them? Our vision was to create a Tableau-like tool for machine learning that could unify the management of these ML models, instrument them, monitor them, and explain how they’re behaving to various stakeholders.”

Amazon Science connected with Gade recently, and asked him three questions about AI’s “black box” problem, some of the biggest challenges and opportunities being addressed in the emerging field of explainable AI, and about his company’s machine learning model operations and monitoring solutions.

Q. A quick search of XAI on arXiv produces a large body of research focusing on AI’s “black box” problem. How is Fiddler addressing this challenge, and how do you differentiate your approach from others?

With AI, you’re training a system; you’re feeding it large volumes of historical data, both good and bad. For example, let's say you're trying to use AI to detect fraud, to figure out the credit risk of your customers, or to predict which customers are likely to churn in the future.

Fiddler.ai CEO Krishna Gade talks explainable AI

In this process you’re feeding the system this data and building a system that encodes patterns in the data into some sort of structure. That structure is called the model architecture. It could be a neural network, a decision tree, or a random forest; there are many different model architectures available.

You then use this structure to attempt to predict the future. The problem with this approach is that these structures are artifacts that become more and more complex over time. Twenty years ago when financial services companies were assessing credit risk, they were building mostly linear models where you could see the weights of the equation and actually read and interpret them.

Today’s machine learning and deep learning models, by contrast, are not human-interpretable, sometimes simply because of their complexity, in the sense that you cannot understand how the structure comes together to arrive at its prediction. This is where explainability becomes important, because now you've got a black box system that could actually be highly accurate but is not human-readable. Without human understanding of how the model works, there is no way to fully trust the results, which should make stakeholders uneasy. That is how explainability adds business value to companies: it helps them bridge this human-machine trust gap.

Without human understanding of how the model works, there is no way to fully trust the results, which should make stakeholders uneasy.
Krishna Gade
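
To make the contrast Gade describes concrete, here is a minimal sketch using scikit-learn and a synthetic, made-up credit dataset (the feature names are illustrative, not Fiddler's): a linear model's weights can be read off directly, while a boosted ensemble of hundreds of trees offers no comparable per-feature equation to inspect.

```python
# Sketch only: contrasts an interpretable linear model with a "black box" ensemble.
# The feature names and synthetic data are illustrative, not from Fiddler.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["loan_amount", "income", "fico_score", "employment_years"]
X = rng.normal(size=(1000, len(features)))
y = (X[:, 2] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# A linear model: each weight can be read and interpreted directly.
linear = LogisticRegression().fit(X, y)
for name, weight in zip(features, linear.coef_[0]):
    print(f"{name:18s} weight = {weight:+.2f}")

# A boosted ensemble of hundreds of trees: potentially more accurate, but there
# is no comparable per-feature equation to read off; explanations must be computed.
black_box = GradientBoostingClassifier(n_estimators=300).fit(X, y)
print("ensemble trees:", black_box.n_estimators)
```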

We’ve devised our explainable AI user experience to cater to different model types and to ensure explanations account for the various factors that go into making predictions. Perhaps you have a credit underwriting model that is predicting the risk of a particular loan. These types of models typically ingest attributes like the amount of the loan request, the income of the person requesting the loan, their FICO score, tenure of employment, and many other inputs.

These attributes go into the model as inputs, and the model outputs a probability of how risky it would be to approve the loan. The model could be of any type: a traditional machine learning model or a deep learning model. We visualize explanations in the context of the inputs so a data scientist can understand which predictive features have the most impact on results.

We provide ways for you to understand that this particular loan risk probability is, for example, 30 percent, and here are the reasons why — these inputs are contributing positively by this magnitude, these inputs are contributing negatively by this magnitude. It is like detective work to figure out the root cause, and the practitioner can interactively fiddle with the values and weighting of inputs — hence the name Fiddler.

So you can ask questions like ‘Okay, the loan risk probability right now is 30% because the customer is asking for a $10,000 loan. What if the customer asked for an $8,000 loan? Would the loan risk go down? What if the customer was making $10,000 more in income? Or what if the customer’s FICO score was 10 points higher?’ You can ask these counterfactual questions by fiddling with inputs, and you'll get real-time explanations in an interactive manner, so you can understand not only why the model is making its predictions, but also what would happen if the person requesting the loan had a different profile. You can actually provide the human in the loop with decision support.
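
Gade doesn't name the attribution algorithm behind these explanations, so the sketch below stands in with a crude occlusion-style method: move each input to a baseline value and measure how the predicted risk shifts, then re-score the model under hypothetical "what if" changes. The model, feature names, and baseline are illustrative assumptions, not Fiddler's implementation.

```python
# Sketch: per-feature contributions plus counterfactual "what if" queries
# for a hypothetical loan-risk model. Illustrative only; not Fiddler's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["loan_amount", "income", "fico_score", "employment_years"]
X = rng.normal(size=(2_000, len(features)))
y = (X[:, 0] - 0.7 * X[:, 2] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)  # 1 = risky
model = GradientBoostingClassifier().fit(X, y)

def risk(x):
    """Predicted probability that the loan is risky."""
    return model.predict_proba(np.asarray(x).reshape(1, -1))[0, 1]

def contributions(x, baseline):
    """Occlusion-style attribution: change in risk when each feature is
    moved from the baseline value to the applicant's actual value."""
    out = {}
    for i, name in enumerate(features):
        x_masked = np.array(x, dtype=float)
        x_masked[i] = baseline[i]
        out[name] = risk(x) - risk(x_masked)   # + pushes risk up, - pushes it down
    return out

applicant = X[0]
print(f"predicted risk: {risk(applicant):.0%}")
for name, value in contributions(applicant, baseline=X.mean(axis=0)).items():
    print(f"  {name:18s} {value:+.3f}")

# Counterfactual: what if the applicant asked for a smaller loan?
what_if = np.array(applicant, dtype=float)
what_if[features.index("loan_amount")] -= 1.0   # one standard deviation lower
print(f"risk with a smaller loan request: {risk(what_if):.0%}")
```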

We provide a pluggable service, which differentiates us from other monolithic, rigid products. Our customers can develop their AI systems however they want. They can build their own, or use third-party or open-source solutions. Or they can bring their models to our platform, which is what we call BYOM, or bring your own model, and we’ll help them explain those models. We then visualize these explanations in various ways so they can be shown not only to the technical people who built the models, but also to business stakeholders or regulatory compliance stakeholders.

Q. What do you consider to be some of the biggest opportunities and challenges being addressed within the field of explainable AI today?

So today there are four problems that are introduced when you put machine learning models into production.

One is the black box aspect that I talked about earlier. Most models are becoming increasingly complex. It is hard to know how they work, and that creates mistrust in how to use them and how to assure customers your AI solutions are fair.

Number two is model performance, in terms of accuracy, fairness, and data quality. Unlike traditional software, model behavior is not static: traditional software will behave the same way whenever you interact with it, but machine learning model performance can go up and down. This is called model drift. Teams that developed these models felt this acutely during the pandemic, when they found they had trained their models on pre-pandemic data and the pandemic had completely changed user behavior.

On an e-commerce site, for example, customers were asking for different types of things, toilet paper being one of those early examples. We had all kinds of varying factors — people losing jobs, working from home, and the lack of travel — any one of which would impact pricing algorithms for the airlines.

Most models are becoming increasingly complex. It is hard to know how they work, and that creates mistrust in how to use them and how to assure customers your AI solutions are fair.
Krishna Gade

Model drift has always been there, but the pandemic showed us how much impact drift can have. This dramatic, mass-drift event is an opportunity for businesses to realize that they not only need monitoring at the level of business metrics, but also at the model level, because by the time issues show up in the business metrics it is too late to recover. Having early warning systems for how your AI product is behaving has become essential for agility — identifying when and how model drift is happening has become table stakes.
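
The interview doesn't say how Fiddler quantifies drift. One simple, widely used signal is the population stability index (PSI), which compares a feature's distribution in a production window against the training data; the binning, the 0.2 threshold, and the synthetic income data below are illustrative assumptions.

```python
# Sketch: population stability index (PSI) as a simple drift signal.
# Binning, threshold, and data are illustrative; not Fiddler's implementation.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare two samples of one feature; larger PSI = more distribution shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_income = rng.normal(60_000, 15_000, size=10_000)    # "pre-pandemic" data
production_income = rng.normal(52_000, 20_000, size=2_000)   # shifted behavior

score = psi(training_income, production_income)
print(f"PSI = {score:.3f}")
if score > 0.2:  # a commonly cited rule-of-thumb threshold
    print("ALERT: significant drift in 'income'; investigate before trusting the model")
```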

Third is bias. As you know, some of these models have a direct impact on customers’ lives: getting a loan approved or not, getting a job, getting a clinical diagnosis. Any of these events can change a person’s life, so a model going wrong, and going wrong in a big way for a certain sector of society, whether defined by demographics, ethnicity, gender, or other factors, can be really harmful to people. And that can seriously damage a company’s reputation and customer trust.

We’ve seen examples where a new credit card is launched and customers complain about gender discrimination because husbands and wives are getting 10x differences in credit limits even though they have similar incomes and FICO scores. And when customers complain, customer support representatives might say, ‘Oh, it’s just the algorithm, we don’t know how it works.’ We can’t abdicate our responsibility to an algorithm. Detecting bias earlier in the lifecycle of models, and continuously monitoring for bias, is super critical in many industries and high-stakes use cases.
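
As a sketch of the kind of bias check being described, one can compare approval rates across a protected attribute and compute a disparate-impact ratio; the "four-fifths" threshold is a common heuristic, and the data below is synthetic, not drawn from the credit card incident Gade mentions.

```python
# Sketch: disparate-impact check on model decisions across a protected group.
# Data, group labels, and the 0.8 threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
gender = rng.choice(["female", "male"], size=n)
# Hypothetical model decisions (1 = approved); disparity injected for illustration.
approved = np.where(gender == "male",
                    rng.random(n) < 0.55,
                    rng.random(n) < 0.40).astype(int)

rate = {g: approved[gender == g].mean() for g in ("female", "male")}
ratio = rate["female"] / rate["male"]
print(f"approval rates: {rate}, disparate-impact ratio = {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" heuristic
    print("ALERT: one group's approval rate is under 80% of the other's; review the model")
```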

The fourth aspect is governance and compliance. There is a lot of news these days about AI and the need for regulation. Regulation is likely coming, and in certain countries it already has. Businesses now have to focus on how to make their models compliant. Regulation is especially top of mind in sectors like financial services, where there already are well-defined regulations for how to build compliant models.

These are the four factors creating an opportunity for Fiddler to help our customers address these challenges, and they are all linked by a common goal: building trust, both for those building the models and for customers, who need to know they can believe in the integrity of our customers’ products.

Q. Fiddler provides machine learning operations and monitoring solutions. Can you explain some of the science behind these solutions, and how customers are utilizing them to accelerate model deployment?

There are two main use cases for which customers turn to Fiddler. The first is pre-production model validation. So even before customers put the model into production, they need to understand how it is working: from an explainability standpoint, from a bias perspective, from understanding data imbalance issues, and so on.

Fiddler offers its customers many insights that can help them understand more about how the model they've created is going to work. For example, customers in the banking sector may use Fiddler for model validation to understand the risks of those models even before they’re deployed.
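
The interview doesn't enumerate Fiddler's specific validation checks, but a minimal pre-production report in this spirit might cover class balance, holdout performance, and per-segment accuracy, as in the sketch below; the synthetic dataset, segment names, and chosen metrics are assumptions for illustration.

```python
# Sketch: a minimal pre-production validation report for a binary classifier.
# Dataset, segment column, and metrics are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5_000, 4))
segment = rng.choice(["new_customer", "existing_customer"], size=5_000)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=5_000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, seg_tr, seg_te = train_test_split(
    X, y, segment, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Data imbalance check on the training set.
print(f"class balance (train): positive rate = {y_tr.mean():.2f}")

# Holdout performance before the model ever reaches production.
scores = model.predict_proba(X_te)[:, 1]
print(f"holdout AUC = {roc_auc_score(y_te, scores):.3f}")

# Per-segment accuracy, to spot groups the model serves poorly.
for seg in ("new_customer", "existing_customer"):
    mask = seg_te == seg
    acc = (model.predict(X_te[mask]) == y_te[mask]).mean()
    print(f"accuracy for segment '{seg}': {acc:.3f}")
```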

The second use case is post-production model monitoring. So now a business deploys a model into production – how is that model behaving? With Fiddler, users can set up alerts for when things go wrong so their machine learning engineers or data scientists can diagnose what’s happening.

Let’s say there’s model drift, or there are data-quality issues coming into your pipelines, and the accuracy of your model is going down. You can now figure out what's going on and then fix those issues. Any business or team that is deploying machine learning models needs to understand what is going on.
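
A toy version of the alerting described here might track a rolling window of production accuracy as delayed ground-truth labels arrive and flag when it drops below a threshold; the window size, threshold, and class below are illustrative assumptions, not Fiddler's API.

```python
# Sketch: rolling-accuracy alert on a production prediction log.
# Window size and alert threshold are illustrative, not Fiddler defaults.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=500, threshold=0.80):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, label):
        """Call when a delayed ground-truth label arrives for a past prediction."""
        self.window.append(prediction == label)
        if len(self.window) == self.window.maxlen:
            accuracy = sum(self.window) / len(self.window)
            if accuracy < self.threshold:
                print(f"ALERT: rolling accuracy {accuracy:.2%} below "
                      f"{self.threshold:.0%}; check for drift or data-quality issues")

# Usage: feed (prediction, label) pairs as labels come in.
monitor = AccuracyMonitor(window=500, threshold=0.80)
# monitor.record(model_prediction, observed_outcome)
```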

Fiddler CEO Krishna Gade says there are two main reasons customers turn to Fiddler: The first is pre-production model validation, the second is post-production model monitoring.
Credit: Fiddler

We are seeing traction, in particular, within a couple of sectors. One is digital-native companies that need to deploy models quickly and monitor them proactively. They need to observe how their models are performing in production and how they're affecting their business metrics.

When it comes to financial services, it’s interesting because they have experienced increased regulation, particularly since 2008. Even before they started to use machine learning models, they were building handcrafted quantitative models. In 2008 we had the economic crisis and bank bailouts, and in the aftermath the Fed institutionalized the SR 11-7 guidance, which mandated risk management for every bank model, with stricter requirements for high-risk models like credit risk models. So model risk management is a process that every bank in the United States, Europe, and elsewhere must follow.

Today, the quantitative models that banks use are being replaced or complemented by machine learning models due to the availability of a lot more data, specialized talent, and the tools to build more machine learning and deep learning models. Unfortunately, the governance approaches used to minimize risk and validate models in the past are no longer applicable for today’s more sophisticated and complex models.

This combination of pre-production model validation — understanding all the risks around models — and post-production model monitoring is called model risk management, and it is leading banks to look to Fiddler and others to help them address these challenges.

All of this comes together in our model performance monitoring (MPM) platform: a unified platform that provides the common language, metrics, and centralized controls required for operationalizing ML/AI with trust.

Our pluggable service allows our customers to bring a variety of models. They can be trained on structured or unstructured data sets, on tabular, text, or image data, and they can be visualized for both technical and non-technical people at scale. Our customers can run their models wherever they want. They can use our managed cloud service, but they can also run Fiddler within their own environments, whether that’s a data center or their cloud provider of choice. So the pluggability of our solution, and the fact that we’re cloud- and model-agnostic, is what differentiates our product.
