Automated evaluation of RAG pipelines with exam generation

The fight against hallucination in retrieval-augmented-generation models starts with a method for accurately assessing it.

In the swiftly evolving domain of large language models (LLMs), the accurate evaluation of retrieval-augmented-generation (RAG) models is paramount. In this blog, we introduce a pioneering methodology that employs an automated exam generation process, enhanced by item response theory (IRT), to evaluate the factual accuracy of RAG models on specific tasks. Our approach is not only robust and interpretable but also cost efficient, strategically identifying model strengths and refining exams to optimize their evaluative utility. We describe our methodology in a paper we will present in July at the 2024 International Conference on Machine Learning (ICML).

Exam generation process

RAG is a method for handling natural-language queries by retrieving relevant documents and using text from them to seed the response generated by an LLM. The expectation is that factual assertions from reliable documents will curb the LLM’s tendency to “hallucinate”, or generate reasonable-sounding but false sentences.
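This retrieve-then-generate loop can be sketched in a few lines. The `retrieve` and `generate` functions below are stand-ins (a keyword-overlap ranker and an echoing generator), not the actual components evaluated in the paper:

```python
# Minimal sketch of a RAG pipeline; `retrieve` and `generate` are
# illustrative stand-ins for a real retriever and LLM.

def retrieve(query, corpus, k=2):
    """Toy keyword retriever: rank documents by query-term overlap."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q_terms & set(d.lower().split())))
    return ranked[:k]

def generate(query, context_docs):
    """Stand-in for the LLM: here it just echoes the retrieved context."""
    return f"Answer to {query!r} grounded in: " + " | ".join(context_docs)

def rag_answer(query, corpus):
    docs = retrieve(query, corpus)   # retrieval step
    return generate(query, docs)     # grounded generation step

corpus = ["RDS is a managed relational database service.",
          "S3 stores objects in buckets.",
          "Lambda runs code without servers."]
print(rag_answer("what is RDS database", corpus))
```

The point of the sketch is the division of labor: the retriever narrows the corpus to a few relevant documents, and the generator is conditioned on that text rather than on its parametric memory alone.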

To evaluate a RAG model on a particular task, we use an LLM to generate multiple-choice questions from a task-specific knowledge corpus. Our method is agnostic to the retriever and generative model used in both the RAG system and the exam generation task.


Figure: Summary of the proposed exam generation, evaluation, and iterative-improvement processes.

Our approach has two steps. For each document in the knowledge corpus, we use an LLM and several prompt-engineering strategies to create candidate questions. Then we use several natural-language-processing filters to remove low-quality questions along various axes, such as excessive length, incorrectness, and lack of self-containment.

We note an interesting asymmetry: given a document corpus, it is relatively easy for an LLM to generate a question and the correct answer, as the content of both is contained in the prompt. However, it is considerably more difficult to create high-quality incorrect answers, commonly referred to as distractors.

To filter out degenerate questions, we use the Jaccard similarity coefficient and embedding-based similarity metrics.
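As an illustration, a minimal version of the Jaccard-based duplicate filter might look like the following; the 0.6 threshold and the candidate questions are hypothetical, not the values used in the paper:

```python
# Sketch of the degenerate-question filter: drop a candidate question
# when its Jaccard similarity to an already-kept question exceeds a
# threshold. The 0.6 threshold here is illustrative.

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def dedupe_questions(candidates, threshold=0.6):
    kept = []
    for q in candidates:
        if all(jaccard(q, k) < threshold for k in kept):
            kept.append(q)
    return kept

candidates = [
    "What does Amazon RDS automate for relational databases?",
    "What does Amazon RDS automate for a relational database?",  # near-duplicate
    "Which storage class is cheapest for archival data in S3?",
]
print(dedupe_questions(candidates))
```

An embedding-based variant works the same way, with cosine similarity between question embeddings in place of word-set overlap.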

Here is the prompt that we used for exam generation:

Human: Here is some documentation from {task_domain}: {documentation}.\n
From this generate a difficult multi-form question for an exam.
It should have 4 candidates, 1 correct answer, and explanations.

Syntax should be Question: {question}\n
A){candidate A}\n
B){candidate B}\n
C){candidate C}\n
D){candidate D}

Correct Answer: {correct answer}\n
### Assistant:

In our research, we analyzed several RAG pipeline variants, including closed-book (no knowledge from the document corpus is provided to the LLM), oracle (the exam taker has access to the specific document used to generate the question-and-answer pair, in addition to the question itself and all possible candidate answers), and classical retrieval models such as MultiQA embeddings, Siamese network embeddings, and BM25. Our evaluations also extended to different scales of language models, from 7 billion parameters to 70 billion, to understand the impact of model scale on performance.

To demonstrate the practical utility of this methodology, we deployed it across a wide range of domains. These include Amazon Web Services (AWS) DevOps, where troubleshooting guides for cloud-based services test the models' operational effectiveness; arXiv abstracts, which challenge the models' ability to parse and generate insights from dense scientific texts; StackExchange questions, which probe the models' responsiveness and accuracy; and SEC filings, where the complexity of financial reporting tests the models' capacity to extract nuanced information from structured corporate documents. This multi-domain approach not only enhances the robustness of our evaluations but also ensures that our findings generalize across a variety of real-world applications.

Evaluating the exam generation model

The following figure shows granular results of our evaluation method for the task of AWS DevOps troubleshooting. We report accuracy for different retrieval approaches and retriever sizes, on a percentage scale. Labels around the perimeter show the AWS resources we’re using. Colors correspond to different retrieval approaches (Oracle, DPRV2, MultiQA, ClosedBook), and solid and broken lines correspond to different base LLM sizes (7B, 13B, and 70B). For instance, we observe that a small model such as Mistral-7B with MultiQA embeddings has an accuracy of around 80% for the AWS resource Relational Database Service (RDS).

Figure: A comparison of several different models, at a range of sizes, on the task of DevOps troubleshooting for eight different AWS resources.

Our experiments yielded four key findings. First, there’s no one-size-fits-all solution; the optimal choice of retrieval method, and to a lesser extent LLM, is typically task dependent. For example, in tasks such as SEC filings and arXiv abstracts, BM25 outperforms MultiQA and Siamese network embeddings, indicating that for these tasks, sparse retrieval is more effective than dense retrieval. This could be because such tasks often contain easily identifiable terms (e.g., AWS service names in AWS DevOps) that can be retrieved with keyword search, while other tasks, such as StackExchange, mostly contain common words.
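For reference, the BM25 sparse-retrieval baseline can be sketched in a few lines. This is a generic Okapi BM25 scorer with the usual default parameters (k1 = 1.5, b = 0.75), not the exact retrieval stack used in our experiments, and the documents are illustrative:

```python
import math
import re
from collections import Counter

# Generic Okapi BM25 scorer with the usual defaults (k1 = 1.5, b = 0.75).
# Illustrative only; not the implementation from our experiments.

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [tokenize(d) for d in docs]
    N = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / N
    df = Counter(term for t in tokenized for term in set(t))  # document frequency
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = [
    "Amazon RDS is a managed relational database service.",
    "Amazon S3 is object storage with buckets.",
    "Use RDS snapshots to back up a database.",
]
scores = bm25_scores("database backup snapshots", docs)
best = max(range(len(docs)), key=scores.__getitem__)
print(best)  # index of the highest-scoring document
```

Rare, distinctive terms ("snapshots" here) receive high inverse-document-frequency weight, which is exactly why keyword-heavy domains such as SEC filings favor this kind of sparse retrieval.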

Second, the right choice of retrieval method can lead to greater performance improvements than simply using larger LLMs. For instance, in SEC filings, we observed a greater performance gain from switching from Siamese network embeddings to DPRV2 than from switching to larger LLMs.

Third, for tasks involving closed-source knowledge, the accuracy bottleneck is typically the LLM rather than the retrieval method. Finally, a poorly aligned retriever component can result in worse accuracy than having no retrieval at all.

Exam enhancements through item response theory

Integrating item response theory (IRT) into our process has significantly improved the quality of the exams. IRT models the likelihood of a correct response based on characteristics of a question and the capabilities of a model. It uses three factors — difficulty, discrimination, and guessing chance — to create exams that more accurately reflect and predict model performance.
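Concretely, the three-parameter logistic (3PL) model combines these three factors into a single response probability; the parameter values below are illustrative:

```python
import math

# Three-parameter logistic (3PL) IRT model: the probability that an exam
# taker with latent ability `theta` answers an item correctly, given the
# item's discrimination `a`, difficulty `b`, and guessing chance `c`.

def p_correct(theta, a, b, c):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# A guessing floor of c = 0.25 matches a four-choice question: even an
# ability far below the item's difficulty leaves a 25% chance of success.
for theta in (-2.0, 0.0, 2.0):
    print(round(p_correct(theta, a=1.0, b=0.0, c=0.25), 3))
```

Difficulty `b` shifts the curve along the ability axis, discrimination `a` controls how sharply the probability rises near `b`, and `c` sets the floor contributed by guessing.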

IRT posits that a model’s probability of correctly answering a question is correlated with a latent variable known as ability, and it provides a method for estimating the value of that variable. As such, it offers a way to quantify a model’s ability level.

Our process begins with an initial exam assessment, identifying and removing questions that contribute minimally to discriminative insights. The exam is then refined iteratively, based on updated IRT parameters, which helps it accurately gauge nuanced model behaviors.

By continuously analyzing and adjusting exams based on IRT parameters, we have seen substantial improvements in the exams’ ability to discriminate among models. For instance, we use Fisher information to quantify the informativeness of exam questions. Fisher information measures the amount of information that an observable random variable provides about an unknown parameter, offering a way to gauge the precision of statistical estimators in parameter estimation theory.
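A sketch of this selection step under the standard 3PL model, whose item information is I(theta) = a^2 * (Q/P) * ((P - c)/(1 - c))^2, with Q = 1 - P. The item parameters and ability grid below are hypothetical:

```python
import math

# Prune the least informative exam items using 3PL Fisher information:
#   I(theta) = a^2 * (Q/P) * ((P - c)/(1 - c))^2,  Q = 1 - P.
# Item parameters and the ability grid are illustrative, not from the paper.

def p_correct(theta, a, b, c):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    p = p_correct(theta, a, b, c)
    return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

# Hypothetical items as (a, b, c) triples; keep the half of the exam with
# the most information averaged over a grid of ability levels.
items = [(1.5, 0.0, 0.25), (0.3, 0.0, 0.25), (1.2, 1.0, 0.25), (0.2, -2.0, 0.25)]
grid = [-2 + 0.5 * i for i in range(9)]  # theta in [-2, 2]
avg_info = [sum(item_information(t, *it) for t in grid) / len(grid) for it in items]
ranked = sorted(range(len(items)), key=lambda i: -avg_info[i])
kept = sorted(ranked[: len(items) // 2])
print(kept)
```

High-discrimination items dominate the information budget (information scales with a squared), so low-discrimination items are the first to be pruned in each refinement round.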

During iterative improvements for the arXiv task, the Fisher information function consistently showed progress, marking a considerable enhancement of the exams' capacity to differentiate model capabilities. This iterative process ensures that each new version of the exam is more informative than the last and effectively evaluates the RAG model’s abilities.

Evaluating the generated exams

To further enhance the assessment of RAG models, we categorize exam questions using both semantic analysis and Bloom’s revised taxonomy, a later revision of the classification framework originally devised by the University of Chicago psychologist Benjamin Bloom. Bloom’s taxonomy helps classify questions by cognitive complexity, from basic recall to analytical tasks, enabling structured evaluation of model capabilities.

Bloom’s revised taxonomy distinguishes between the knowledge dimension (factual, conceptual, procedural, and meta-cognitive) and the cognitive-process dimension (remember, understand, apply, analyze, evaluate, and create). Additionally, we classify questions semantically by identifying keywords like “what” and “which.” These additional classifications allow us to assess how well models perform at different ability levels.
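The semantic classification can be as simple as bucketing each question by its first interrogative keyword; the keyword list and example questions below are illustrative:

```python
# Sketch of the semantic classification: bucket each question by its
# leading interrogative keyword. Keyword list and examples are illustrative.

KEYWORDS = ("what", "which", "when", "where", "why", "how", "who")

def semantic_category(question: str) -> str:
    for word in question.lower().replace("?", " ").split():
        if word in KEYWORDS:
            return word
    return "other"

questions = [
    "What does BM25 stand for?",
    "Which index type is fastest here?",
    "When should snapshots be scheduled?",
    "Explain the retry policy.",
]
print([semantic_category(q) for q in questions])
```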

Figure: Average Fisher information for each Bloom’s-taxonomy category (left) and semantic category (right) for the StackExchange task.

The above two figures present the average Fisher information value for each Bloom category (left) and semantic category (right) for the StackExchange task. For this specific task, we observe that “evaluating” and “understanding” are the most discriminative dimensions in Bloom’s taxonomy across different ability levels, while “remembering” is the least discriminative.

On the semantic categories, we observe that “what” and “which” were the most discriminatory terms for lower ability levels, and “when” discriminated more at higher ability levels. One interpretation is that “what” and “which” questions tend to be more factual and syntax-based in the StackExchange domain, so at lower ability levels, RAG struggles more with these genres of questions.

The following figure illustrates the maximization process for the arXiv task as the exam and IRT estimation evolve. We show the results for three incremental steps. We observe a 0.05 increase in Fisher information even with a single iteration. This progress reaches a 0.1 increase in the subsequent steps.

Figure: The maximization process, as the exam and IRT estimation evolve, for the arXiv-abstracts task.

To expand our approach beyond Q&A applications, our future research will focus on domains such as summarization, translation, and sentiment analysis. We are also addressing the complex task of meta-evaluation, comparing and refining our evaluation methods to account for the multidimensional nature of LLM performance. Additionally, we will continuously update our methodologies to accommodate the rapid evolution of LLM technology, ensuring robust and comprehensive assessment of emerging models.

Acknowledgments: Laurent Callot

US, WA, Seattle
Amazon Economics is seeking Forecasting, Macroeconomics and Finance (FMF) Economist Interns who are passionate about applying time-series econometric methods to solve real-world business challenges. FMF economists interpret and forecast Amazon business dynamics by combining advanced time-series statistical methods with strong economic analysis and intuition. In this full-time internship (40 hours per week, with hourly compensation), you'll work with large-scale datasets to forecast business trends and inform strategic decisions, gaining hands-on experience that's directly applicable to dissertation writing and future career placement. Key job responsibilities As an FMF Economist Intern, you'll specialize in time-series econometric analysis to understand, predict, and optimize Amazon's business dynamics. Your responsibilities include: - Analyze large-scale datasets using advanced time-series econometric techniques to solve complex business challenges - Applying frontier methods in time series econometrics, including forecasting models, dynamic systems analysis, and econometric models that combine macro and micro data - Developing formal models to understand past and present business dynamics, predict future trends, and identify relevant risks and opportunities - Building datasets and performing data analysis at scale using world-class data tools - Collaborating with economists, scientists, and business leaders to develop data-driven insights and strategic recommendations - Tackling diverse challenges including analyzing drivers of growth and profitability, forecasting business metrics, understanding how customer experience interacts with external conditions, and evaluating short, medium, and long-term business dynamics - Build and refine comprehensive datasets for in-depth time-series economic analysis - Present complex analytical findings to business leaders and stakeholders