Customer-obsessed science


July 31, 2025
Using ensembles of agents to generate and refine interactions annotated with chains of thought improves performance on a battery of benchmarks by an average of 29%.
Featured news
-
EACL 2024 Workshop on Natural Language Processing for Human Resources, 2024
Recent advancements in Large Language Models (LLMs) have been reshaping Natural Language Processing (NLP) tasks in several domains. Their use in the field of Human Resources (HR) still has room for expansion and could be beneficial for several time-consuming tasks. Examples such as time-off submissions, medical claims filing, and access requests are noteworthy, but they are by no means the sole instances
-
WSDM 2024 Workshop on Interactive and Scalable Information Retrieval Methods for E-Commerce, 2024
Query Autocomplete (QAC) systems predict the best query suggestions based on the customer's typed prefix and other contextual signals. Conventional techniques employ the Most Popular Completion (MPC) method, where query suggestions that are popular and begin with the prefix (prefix-aware) are retrieved from a pre-computed index. To account for contextual signals like the user's past search activity in the session
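The Most Popular Completion method described above can be sketched in a few lines. This is a minimal illustration, assuming a toy pre-computed popularity index; the queries, counts, and function names are invented for the example, not taken from the paper.

```python
# Minimal sketch of Most Popular Completion (MPC):
# rank queries that start with the typed prefix by popularity.
from collections import Counter

# Toy stand-in for a pre-computed index of query popularity counts.
query_log = Counter({
    "running shoes": 50,
    "running shorts": 30,
    "rain jacket": 20,
    "running watch": 10,
})

def mpc(prefix, index, k=3):
    """Return the k most popular queries beginning with the prefix."""
    matches = [(q, c) for q, c in index.items() if q.startswith(prefix)]
    matches.sort(key=lambda qc: -qc[1])  # most popular first
    return [q for q, _ in matches[:k]]

print(mpc("run", query_log))
# ['running shoes', 'running shorts', 'running watch']
```

A production system would replace the linear scan with a prefix index (e.g., a trie), and contextual approaches like the one in the paper re-rank or augment these candidates using session signals.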
-
EACL 2024, 2024
In this work, we present Unified Embeddings for Multimodal Retrieval (UNIMUR), a simple but effective approach that embeds multimodal inputs and retrieves visual and textual outputs via frozen Large Language Models (LLMs). Specifically, UNIMUR jointly retrieves multimodal outputs via unified multimodal embedding and applies dual alignment training to account for both visual and textual semantics. Thus,
-
AISTATS 2024, 2024
Multi-objective optimization is a class of optimization problems with multiple conflicting objectives. We study offline optimization of multi-objective policies from data collected by a previously deployed policy. We propose a pessimistic estimator for policy values that can be easily plugged into existing formulas for hypervolume computation and optimized. The estimator is based on inverse propensity scores
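The inverse-propensity-score idea behind the estimator above can be illustrated briefly. This is a generic sketch, not the paper's estimator: the confidence-width penalty used for pessimism here is a standard Hoeffding-style bound chosen for illustration, and all names are assumptions.

```python
# Sketch of off-policy value estimation with inverse propensity
# scores (IPS), plus a generic pessimistic lower bound.
import math

def ips_value(rewards, matches, propensities):
    """Standard IPS estimate: logged rewards reweighted by 1/propensity
    on events where the target policy agrees with the logged action."""
    n = len(rewards)
    return sum(r * m / p for r, m, p in zip(rewards, matches, propensities)) / n

def pessimistic_value(rewards, matches, propensities, delta=0.05):
    """Subtract a confidence-width penalty from the IPS estimate
    (illustrative Hoeffding-style penalty; the paper's may differ)."""
    n = len(rewards)
    v = ips_value(rewards, matches, propensities)
    w_max = max(1.0 / p for p in propensities)  # bound on importance weights
    penalty = w_max * math.sqrt(math.log(1.0 / delta) / (2 * n))
    return v - penalty

# Logged data: binary rewards, agreement indicators, logging propensities.
rewards = [1.0, 0.0, 1.0, 1.0]
matches = [1, 1, 0, 1]
props = [0.5, 0.5, 0.5, 0.5]
print(ips_value(rewards, matches, props))          # point estimate
print(pessimistic_value(rewards, matches, props))  # lower, penalized estimate
```

Pessimism matters offline because IPS estimates are high-variance where the logging policy rarely took the target policy's actions; optimizing a lower confidence bound avoids rewarding that uncertainty.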
-
AISTATS 2024, 2024
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off between cost and speed: larger batches are more costly, smaller batches lead to slower wall-clock run times, and the trade-off may change over the run (larger batches are often preferable earlier). To address this trade-off
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.