Customer-obsessed science


Research areas
- September 2, 2025: Audible's ML algorithms connect users directly to relevant titles, reducing the number of purchase steps for millions of daily users.
Featured news
- 2024: Recent advances in large language models (LLMs) offer strong potential for natural language systems to process informal language. A representative form of informal language is slang, used commonly in daily conversations and online social media. To date, slang has not been comprehensively evaluated in LLMs, due partly to the absence of a carefully designed and publicly accessible benchmark. Using…
- 2024: Inherent ambiguity in layout annotations poses significant challenges to developing accurate 360° room layout estimation models. To address this issue, we propose a novel Bi-Layout model capable of predicting two distinct layout types. One stops at ambiguous regions, while the other extends to encompass all visible areas. Our model employs two global context embeddings, where each embedding is designed…
- 2024: Machine learning models face generalization challenges when exposed to out-of-distribution (OOD) samples with unforeseen distribution shifts. Recent research reveals that for vision tasks, test-time adaptation employing diffusion models can achieve state-of-the-art accuracy improvements on OOD samples by generating domain-aligned samples without altering the model's weights. Unfortunately, those studies…
- AISTATS 2024: Conditional independence (CI) tests are widely used in statistical data analysis, e.g., they are the building block of many algorithms for causal graph discovery. The goal of a CI test is to accept or reject the null hypothesis that X ⊥⊥ Y | Z, where X ∈ R, Y ∈ R, Z ∈ R^d. In this work, we investigate conditional independence testing under the constraint of differential privacy. We design two private CI…
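To make the null hypothesis H0: X ⊥⊥ Y | Z concrete, here is a minimal, non-private CI test based on partial correlation under a Gaussian/linear assumption. This is only an illustration of the underlying CI-testing step; the paper's differentially private tests are more involved, and `partial_corr_ci_test` is a hypothetical helper, not the authors' method.

```python
import numpy as np

def partial_corr_ci_test(x, y, Z, crit=1.96):
    """Accept (True) or reject (False) H0: X indep. of Y given Z.

    Residualizes X and Y on Z, correlates the residuals, and applies a
    Fisher z-transform, which is approximately N(0, 1) under H0.
    """
    n = len(x)
    D = np.column_stack([np.ones(n), Z])                  # design matrix with intercept
    rx = x - D @ np.linalg.lstsq(D, x, rcond=None)[0]     # residualize X on Z
    ry = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]     # residualize Y on Z
    r = np.corrcoef(rx, ry)[0, 1]                         # partial correlation
    d = Z.shape[1] if np.ndim(Z) > 1 else 1               # size of conditioning set
    zstat = np.sqrt(n - d - 3) * 0.5 * np.log((1 + r) / (1 - r))
    return bool(abs(zstat) <= crit)                       # two-sided test at ~5% level
```

For example, if X = Z + noise and Y = Z + noise, the test typically accepts H0 (the dependence between X and Y is fully explained by Z), while Y = X + noise leads to rejection.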
- 2024: We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., <1B) LM for guiding a black-box large (i.e., >10B) LM in reasoning tasks. Specifically, the lightweight LM first generates a rationale for each input instance. The frozen large LM is then prompted to predict a task output based on the rationale generated by the lightweight LM. Our approach is resource-efficient in the sense…
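The two-stage pipeline described in the LM-Guided CoT teaser can be sketched as follows. `small_lm` and `large_lm` stand in for the <1B rationale generator and the frozen >10B task model; the prompt templates here are assumptions for illustration, not the paper's exact formats.

```python
def lm_guided_cot(question: str, small_lm, large_lm) -> str:
    """Two-stage rationale-then-answer pipeline (illustrative sketch)."""
    # Stage 1: the lightweight LM produces a rationale for the input instance.
    rationale = small_lm(f"Question: {question}\nThink step by step:")
    # Stage 2: the frozen large LM predicts the task output, conditioned
    # on the question plus the lightweight LM's rationale.
    prompt = f"Question: {question}\nRationale: {rationale}\nAnswer:"
    return large_lm(prompt)
```

In a real system the two callables would wrap model APIs; here any function mapping a prompt string to a completion string fits the interface.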
Conferences
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.