Customer-obsessed science
- March 20, 2026 (15 min read): Simplifying and clarifying the assembly code for core operations enabled automated optimization and verification.
Featured news
- ECIR 2026 Industry Day: E-commerce search faces challenges such as sparse data and poor generalization from issues like multi-attribute resolution, multihop reasoning, and implicit intent. We propose iterative reranking as a compute-scaling strategy for LLM-based rankers, repeatedly applying listwise rankers to refine results by exploiting LLM non-determinism. Evaluated on three open datasets with three open-source LLMs, the method…
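The iterative-reranking idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `noisy_listwise_ranker` is a hypothetical stand-in for a non-deterministic LLM ranker, and rank aggregation by mean position is one simple (Borda-style) way to fuse repeated passes.

```python
import random
from collections import defaultdict

def noisy_listwise_ranker(items, scores, noise=1.0, rng=None):
    # Hypothetical stand-in for a non-deterministic LLM listwise ranker:
    # orders items by a noisy view of their underlying relevance scores.
    rng = rng or random
    return sorted(items, key=lambda x: -(scores[x] + rng.gauss(0, noise)))

def iterative_rerank(items, ranker, passes=10):
    # Apply the listwise ranker repeatedly and aggregate by mean position
    # (Borda-style fusion), exploiting run-to-run variation of the ranker.
    position_totals = defaultdict(int)
    for _ in range(passes):
        for pos, item in enumerate(ranker(items)):
            position_totals[item] += pos
    return sorted(items, key=lambda x: position_totals[x])
```

With enough passes, the aggregated ordering is far more stable than any single noisy pass, which is the compute-scaling effect the abstract describes.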
- EACL 2026 Industry Track: Job postings are critical for recruitment, yet large enterprises struggle with standardization and consistency, requiring significant time and effort from hiring managers and recruiters. We present a feedback-aware prompt optimization framework that automates high-quality job posting generation through iterative human-in-the-loop refinement. Our system integrates multiple data sources: job metadata, competencies…
- 2026: Large language models (LLMs) demonstrate superior reasoning capabilities compared to small language models (SLMs), but incur substantially higher costs. We propose COllaborative REAsoner (COREA), a system that cascades an SLM with an LLM to achieve a balance between accuracy and cost in complex reasoning tasks. COREA first attempts to answer questions using the SLM, which outputs both an answer and a verbalized…
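The cascade pattern the abstract describes — answer with the cheap model first, escalate only when its verbalized confidence is low — can be sketched as follows. This is a minimal illustration, not COREA itself: `slm` and `llm` are assumed to be callables returning an `(answer, confidence)` pair, and the threshold is a free parameter.

```python
def cascade_answer(question, slm, llm, threshold=0.7):
    # Try the small language model first; it returns an answer plus a
    # verbalized confidence in [0, 1] (an assumption of this sketch).
    answer, confidence = slm(question)
    if confidence >= threshold:
        return answer, "slm"  # cheap path: confident SLM answer accepted
    # Low confidence: escalate to the expensive large language model.
    answer, _ = llm(question)
    return answer, "llm"
```

Tuning `threshold` trades cost against accuracy: a lower threshold routes more questions to the SLM, a higher one to the LLM.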
- 2026: Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, and LLM-based agents further extend these abilities to various practical workflows. While recent progress shows that multi-agent systems (MAS) can outperform single agents by coordinating specialized roles, designing effective MAS remains difficult due to prompt sensitivity and the compounded instability MAS creates…
- 2026: Evaluating the quality of search systems traditionally requires a significant number of human relevance annotations. Recently, several systems have explored using Large Language Models (LLMs) as automated judges for this task, although their inherent biases prevent direct use for metric estimation. We present a statistical framework extending Prediction-Powered Inference (PPI) (Angelopoulos, Duchi…
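The core PPI idea — use cheap LLM-judge scores at scale, then debias them with a small human-labeled subset — reduces, for a mean metric, to a simple estimator. This is a sketch of the standard PPI mean estimator under that setup, not the paper's full framework:

```python
def ppi_mean(judge_all, judge_labeled, human_labeled):
    # Prediction-powered estimate of the mean human relevance score:
    # the judge's mean over the full (unlabeled) pool, corrected by the
    # judge's average error ("rectifier") measured on the small subset
    # that also has human labels.
    n, m = len(judge_all), len(human_labeled)
    rectifier = sum(h - j for h, j in zip(human_labeled, judge_labeled)) / m
    return sum(judge_all) / n + rectifier
```

If the judge is systematically biased (say, consistently 0.1 too generous), the rectifier cancels that bias while the large unlabeled pool keeps the variance low.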
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.
View all