Customer-obsessed science
February 2, 2026 · 10 min read

Every NFL game generates millions of tracking data points from 22 RFID-equipped players. Seventy-five machine learning models running on AWS process that data in under a second, transforming football into a sport where every movement is measured, modeled, and instantly analyzed.
- January 13, 2026 · 7 min read
- January 8, 2026 · 4 min read
- December 29, 2025 · 6 min read
Featured news
- KDD 2025 Workshop on Prompt Optimization (2025). Prompt engineering represents a critical bottleneck to harnessing the full potential of large language models (LLMs) for solving complex tasks, as it requires specialized expertise, significant trial and error, and manual intervention. This challenge is particularly pronounced for tasks involving subjective quality assessment, where defining explicit optimization objectives becomes fundamentally problematic…
- ICML 2025 Workshop on Multi-Agent Systems; KDD 2025 Workshop on Machine Learning in Finance (MLF) (2025). Enterprise accounting data is complex, ambiguous, and shaped by evolving systems and regulations. The institutional knowledge needed to reason over the data is sparse, scattered, and rarely structurally documented, posing major challenges for LLM agents. We introduce a multi-agent financial research framework that mimics a junior analyst's onboarding and growth. The Analyst Agent learns proactively from repeated…
- 2025. In this paper, we investigate the problem of quantifying fairness in retrieval-augmented generation (RAG) systems, particularly for complex cognitive tasks that go beyond factual question answering. While RAG systems have demonstrated effectiveness in information extraction tasks, their fairness implications for cognitively complex tasks, including ideation, content creation, and analytical reasoning…
- ICML 2025 Workshop on Efficient Systems for Foundation Models (2025). Speculative decoding has emerged as a promising approach to accelerating large language model (LLM) generation using a fast drafter while maintaining alignment with the target model's distribution. However, existing approaches face a tradeoff: external drafters offer flexibility but can suffer from slower drafting, while self-speculation methods use drafters tailored to the target model but require re-training…
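The draft-then-verify loop behind speculative decoding can be sketched in a few lines. This is an illustrative toy only, not the implementation from any listed paper: distributions are plain Python dicts over a tiny vocabulary, and the `speculative_decode` helper and its parameters are hypothetical. The drafter proposes `k` tokens cheaply; the target model accepts each with probability min(1, p_target/p_draft) and resamples from its own distribution on the first rejection, which preserves the target distribution.

```python
import random

def speculative_decode(target_p, draft_p, k=4, steps=8, seed=0):
    """Toy speculative-decoding sketch (illustrative, not a real system).

    target_p, draft_p: functions mapping a context (list of tokens) to a
    dict {token: probability}. The drafter proposes k tokens per step;
    the target verifies them with acceptance sampling.
    """
    rng = random.Random(seed)
    out = []

    def sample(dist):
        # Inverse-CDF sampling over a dict distribution.
        r, acc = rng.random(), 0.0
        for tok, p in dist.items():
            acc += p
            if r <= acc:
                return tok
        return tok  # guard against floating-point rounding

    for _ in range(steps):
        # 1) Drafter proposes k tokens autoregressively (the cheap pass).
        ctx, proposal = list(out), []
        for _ in range(k):
            t = sample(draft_p(ctx))
            proposal.append(t)
            ctx.append(t)

        # 2) Target verifies each proposed token in order.
        ctx = list(out)
        for t in proposal:
            pt, pd = target_p(ctx)[t], draft_p(ctx)[t]
            if rng.random() < min(1.0, pt / max(pd, 1e-12)):
                out.append(t)      # accepted: keep the drafted token
                ctx.append(t)
            else:
                # Rejected: resample from the target and stop this round.
                out.append(sample(target_p(ctx)))
                break
    return out
```

When the drafter matches the target exactly, every proposal is accepted, so each step emits all `k` tokens; the interesting regime is a cheap drafter that is only approximately aligned, where acceptance rate governs the speedup.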
- ICML 2025 Workshop on Efficient Systems for Foundation Models (2025). As large language models increasingly gain popularity in real-world applications, processing extremely long contexts, often exceeding the model's pre-trained context limits, has emerged as a critical challenge. While existing approaches to efficient long-context processing show promise, recurrent compression-based methods struggle with information preservation, whereas random-access approaches require substantial…
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.