Customer-obsessed science
Research areas
- January 13, 2026 (7 min read): Leveraging existing environment simulators and reward functions based on verifiable ground truth boosts task success rate, even with small models and small training datasets.
- December 29, 2025 (6 min read)
- December 29, 2025 (9 min read)
- December 8, 2025 (8 min read)
- December 5, 2025 (6 min read)
Featured news
- HRI 2026: As autonomous social robots become more prevalent in home environments, they must decide where to position themselves within many different types of rooms or spaces, balancing accessibility with staying out of the way. This paper presents a machine learning approach to modeling user preferences for robot parking spots in the home using standard 2D occupancy maps. Our method learns spatial patterns from…
- 2026: With recent advancements in video backbone architectures, combined with the remarkable achievements of large language models (LLMs), the analysis of long-form videos spanning tens of minutes has become both feasible and increasingly prevalent. However, the inherently redundant nature of video sequences poses significant challenges for contemporary state-of-the-art models. These challenges stem from two…
- AAAI 2026 Workshop on Agentic AI in Financial Services: Large Language Models (LLMs) have shown remarkable capabilities in document processing, but their inability to provide visual grounding without OCR dependencies poses significant challenges in business-critical applications. Current solutions either require model fine-tuning or rely on external OCR services, introducing additional costs, latency, and limitations in handling derived information. This paper…
- WSDM 2026: For e-commerce retailers, high-quality product catalogs are vital to customer experience. Yet, despite extensive data-cleaning efforts, catalog quality, especially in large catalogs, remains suboptimal. This paper shows how to use unstructured brand knowledge base data as a reference and a large language model agent to automatically enhance an e-commerce retailer's catalog quality. Unlike prior methods that…
- NeurIPS 2025 Workshop on Mechanistic Interpretability, 2026: Recent work has shown that fine-tuning on insecure code data can trigger an emergent misalignment (EMA) phenomenon, where models generate malicious responses even to prompts unrelated to the original insecure code-writing task. Such cross-domain generalization of harmful behavior underscores the need for a deeper understanding of the algorithms, tasks, and datasets that induce emergent misalignment. In…
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.
View all