Customer-obsessed science
- December 10, 2025 · 5 min read
  New audio-processing technology is making entertainment more accessible for millions of viewers.
- December 5, 2025 · 6 min read
- November 20, 2025 · 4 min read
Featured news
- 2025
  E-commerce stores increasingly use Large Language Models (LLMs) to enhance catalog data quality through automated regeneration. A critical challenge is accurately predicting missing structured attribute values across multilingual product catalogs, where LLM performance varies significantly by language. While existing approaches leverage general knowledge through prompt engineering and external retrieval …
- VLDB 2025
  Cloud service providers usually leverage standard benchmarks such as TPC-H and TPC-DS to evaluate and optimize the performance of cloud data analytics systems. However, these benchmarks have fixed query patterns and are unable to effectively reproduce the statistics of production cloud workloads. For example, they cannot simulate a real workload with similar performance metrics such as CPU time and …
- ACM CCS 2025
  Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix-vector product (EMVP). Let F be a finite field. In an offline phase, a client uploads an encryption of a matrix M ∈ F^(m×ℓ) to a server, keeping only a short secret key. The server stores the encrypted matrix M̂. In the online phase, the client may repeatedly send encryptions q̂_i of query vectors … (a toy sketch of this offline/online flow appears after this list)
- 2025
  Quantifying uncertainty in black-box LLMs is vital for reliable responses and scalable oversight. Existing methods, which gauge a model's uncertainty by evaluating the self-consistency of its responses to the target query, can be misleading: an LLM may confidently provide an incorrect answer to a target query, yet give a confident and accurate answer to that same target query when answering a knowledge-preserving … (a minimal sketch of the self-consistency baseline appears after this list)
- EMNLP 2025 Findings
  Large language models (LLMs) often fail to scale their performance on long-context tasks in line with the context lengths they support. This gap is commonly attributed to retrieval failures: the models' inability to identify relevant information in long inputs. Accordingly, recent efforts often focus on evaluating and improving LLMs' retrieval performance: if retrieval is perfect, a model …
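
For the encrypted matrix-vector product (EMVP) setting described in the ACM CCS 2025 abstract above, the following is a minimal, illustrative toy in Python/NumPy. It is not the paper's construction: here only the matrix is hidden, using a one-time pad derived from the client's short key, the query is sent in the clear, and the field is a small prime field F_p. The names (ToyEMVPClient, upload, decode, server_answer) and the prime are invented for illustration.

```python
import numpy as np

P = 65_521  # small prime; all arithmetic is over the toy field F_p

class ToyEMVPClient:
    """Toy EMVP flow: hide M with a one-time pad derived from a short seed.

    Offline: upload M_hat = (M + R) mod p, keeping only the seed that generates R.
    Online:  send q in the clear (the paper's setting also encrypts queries),
             receive M_hat @ q from the server, and unmask it locally.
    """

    def __init__(self, seed: int):
        self.seed = seed      # the client's short secret key
        self.shape = None     # remembered so R can be regenerated later

    def upload(self, M: np.ndarray) -> np.ndarray:
        self.shape = M.shape
        R = self._mask()                      # pseudorandom pad over F_p
        return (M + R) % P                    # M_hat, stored by the server

    def decode(self, q: np.ndarray, masked_result: np.ndarray) -> np.ndarray:
        R = self._mask()                      # regenerate the same pad
        return (masked_result - R @ q) % P    # (M + R)q - Rq = Mq  (mod p)

    def _mask(self) -> np.ndarray:
        rng = np.random.default_rng(self.seed)
        return rng.integers(0, P, size=self.shape, dtype=np.int64)


def server_answer(M_hat: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Server-side online phase: an ordinary matrix-vector product mod p."""
    return (M_hat @ q) % P


# Usage: the client recovers M @ q while the server only ever sees M_hat.
rng = np.random.default_rng(0)
M = rng.integers(0, P, size=(4, 8), dtype=np.int64)
q = rng.integers(0, P, size=8, dtype=np.int64)

client = ToyEMVPClient(seed=12345)
M_hat = client.upload(M)                        # offline phase
y = client.decode(q, server_answer(M_hat, q))   # online phase
assert np.array_equal(y, (M @ q) % P)
```

The point of the flow is that the server performs only ordinary linear algebra on M̂; the abstract's setting is stronger in that the query vectors are encrypted as well.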
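
The uncertainty-quantification abstract above refers to existing methods that estimate a black-box model's confidence from the self-consistency of its sampled answers. Below is a minimal sketch of that baseline, not the paper's proposed method: it samples several answers from a hypothetical `sample_answer(query)` callable standing in for an LLM API and scores uncertainty by the normalized entropy of the empirical answer distribution.

```python
import math
from collections import Counter
from typing import Callable

def self_consistency_uncertainty(
    query: str,
    sample_answer: Callable[[str], str],  # hypothetical LLM sampler (temperature > 0)
    n_samples: int = 10,
) -> dict:
    """Estimate uncertainty from agreement among independently sampled answers.

    Returns the majority answer, its empirical agreement rate, and the
    normalized entropy of the answer distribution (0 = fully consistent,
    1 = maximally inconsistent).
    """
    answers = [sample_answer(query).strip().lower() for _ in range(n_samples)]
    counts = Counter(answers)

    majority_answer, majority_count = counts.most_common(1)[0]
    agreement = majority_count / n_samples

    probs = [c / n_samples for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(n_samples) if n_samples > 1 else 1.0

    return {
        "answer": majority_answer,
        "agreement": agreement,
        "uncertainty": entropy / max_entropy,
    }

# Example with a stub sampler; a real setup would call an LLM with sampling enabled.
if __name__ == "__main__":
    import random
    def stub_sampler(q: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])
    print(self_consistency_uncertainty("What is the capital of France?", stub_sampler))
```

As the abstract notes, such self-consistency scores can be misleading when a model is consistently, confidently wrong on the target query.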
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.