Customer-obsessed science


Research areas
-
September 11, 2025
The language AI agents might speak, sharing context without compromising privacy, modeling agentic negotiations, and understanding users’ commonsense policies are some of the open scientific questions that researchers in agentic AI will need to grapple with.
-
Featured news
-
2024
Inherent ambiguity in layout annotations poses significant challenges to developing accurate 360° room layout estimation models. To address this issue, we propose a novel Bi-Layout model capable of predicting two distinct layout types. One stops at ambiguous regions, while the other extends to encompass all visible areas. Our model employs two global context embeddings, where each embedding is designed
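As a rough illustration of the idea in this teaser (one shared encoder, two global context embeddings, two layout predictions), here is a minimal, hypothetical PyTorch sketch. The encoder, dimensions, and head are illustrative assumptions, not the paper's actual Bi-Layout architecture.

```python
import torch
import torch.nn as nn

class BiLayoutSketch(nn.Module):
    """Toy sketch: one shared encoder, two global context embeddings,
    each conditioning the same head to produce a different layout type
    (hypothetical, not the paper's model)."""

    def __init__(self, feat_dim=32, num_columns=256):
        super().__init__()
        # Shared feature extractor over the panorama (stub: a single conv layer).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d((1, num_columns)),  # collapse height, keep panorama columns
        )
        # Two learned global context embeddings: one for the "stop at ambiguity"
        # layout type, one for the "cover all visible area" layout type.
        self.ctx_ambiguous = nn.Parameter(torch.randn(feat_dim))
        self.ctx_full = nn.Parameter(torch.randn(feat_dim))
        # Shared head mapping (features + context) to a per-column boundary value.
        self.head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 1))

    def forward(self, pano):                      # pano: (B, 3, H, W)
        feats = self.encoder(pano).squeeze(2)     # (B, feat_dim, num_columns)
        feats = feats.permute(0, 2, 1)            # (B, num_columns, feat_dim)
        # Condition the same features on each context embedding to get two layouts.
        layout_ambiguous = self.head(feats + self.ctx_ambiguous).squeeze(-1)
        layout_full = self.head(feats + self.ctx_full).squeeze(-1)
        return layout_ambiguous, layout_full      # two per-column boundary predictions

pano = torch.randn(1, 3, 64, 256)
amb, full = BiLayoutSketch()(pano)
print(amb.shape, full.shape)  # torch.Size([1, 256]) torch.Size([1, 256])
```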
-
2024
Machine learning models face generalization challenges when exposed to out-of-distribution (OOD) samples with unforeseen distribution shifts. Recent research reveals that for vision tasks, test-time adaptation employing diffusion models can achieve state-of-the-art accuracy improvements on OOD samples by generating domain-aligned samples without altering the model’s weights. Unfortunately, those studies
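The general recipe this item describes (project an OOD input back toward the source domain with a generative model, then classify it with unmodified weights) can be sketched as follows. The `diffusion_project` stub stands in for a frozen, pretrained diffusion model; all names and the placeholder "denoising" are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

def diffusion_project(x, noise_level=0.4, steps=4):
    """Stub for a frozen, source-trained diffusion model. In the real recipe,
    x is partially noised and then denoised so the output lands back on the
    source-domain manifold.  Here we only add noise and pull it back toward
    the input to keep the sketch self-contained."""
    noisy = x + noise_level * torch.randn_like(x)
    for _ in range(steps):
        noisy = 0.5 * (noisy + x)   # placeholder "denoising" step
    return noisy

@torch.no_grad()
def adapt_and_predict(classifier, x_ood):
    """Test-time adaptation without touching the classifier's weights:
    map the OOD sample toward the training distribution, then classify."""
    x_aligned = diffusion_project(x_ood)
    return classifier(x_aligned).argmax(dim=1)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in frozen model
x_ood = torch.randn(4, 3, 32, 32)  # pretend OOD batch
print(adapt_and_predict(classifier, x_ood))
```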
-
AISTATS 2024
Conditional independence (CI) tests are widely used in statistical data analysis, e.g., they are the building block of many algorithms for causal graph discovery. The goal of a CI test is to accept or reject the null hypothesis that X ⊥⊥ Y | Z, where X ∈ R, Y ∈ R, Z ∈ R^d. In this work, we investigate conditional independence testing under the constraint of differential privacy. We design two private CI
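For readers unfamiliar with CI testing, the sketch below shows one standard non-private statistic (partial correlation via residuals on Z) with Laplace noise added for privacy. The sensitivity bound and the noise mechanism here are illustrative assumptions; the paper's two private tests are not reproduced here.

```python
import numpy as np

def private_partial_corr_stat(x, y, z, epsilon=1.0, rng=None):
    """Hypothetical sketch of a differentially private CI test statistic.
    Base statistic: partial correlation of x and y given z (via residuals of
    least-squares regressions on z).  Privacy: Laplace noise scaled to an
    assumed sensitivity bound.  NOT the paper's construction."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    Z = np.column_stack([np.ones(n), z])            # add intercept
    # Residualize x and y on z, then correlate the residuals.
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rho = np.corrcoef(rx, ry)[0, 1]
    sensitivity = 2.0 / n                            # assumed bound on one-record changes
    return rho + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
n, d = 2000, 3
z = rng.normal(size=(n, d))
x = z @ rng.normal(size=d) + rng.normal(size=n)      # x depends on z only
y = z @ rng.normal(size=d) + rng.normal(size=n)      # y depends on z only, so X ⊥⊥ Y | Z holds
print(private_partial_corr_stat(x, y, z, epsilon=1.0, rng=rng))  # should be near 0
```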
-
2024
We introduce a novel framework, LM-Guided CoT, that leverages a lightweight (i.e., <1B) LM for guiding a black-box large (i.e., >10B) LM in reasoning tasks. Specifically, the lightweight LM first generates a rationale for each input instance. The frozen large LM is then prompted to predict a task output based on the rationale generated by the lightweight LM. Our approach is resource-efficient in the sense
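The two-stage pipeline described here (small LM writes the rationale, frozen large LM answers conditioned on it) can be sketched with two placeholder generation functions. The prompt templates and the stub LMs below are illustrative assumptions, not the paper's prompts or models.

```python
from typing import Callable

def lm_guided_cot(question: str,
                  small_lm: Callable[[str], str],
                  large_lm: Callable[[str], str]) -> str:
    """Sketch of LM-Guided CoT: the lightweight (<1B) model produces the
    rationale; the frozen large (>10B) model only predicts the answer."""
    rationale = small_lm(f"Question: {question}\nWrite a step-by-step rationale:")
    answer = large_lm(f"Question: {question}\nRationale: {rationale}\nAnswer:")
    return answer

# Trivial stand-ins so the sketch runs end to end; swap in real model calls.
small_lm = lambda prompt: "Adding the two pairs gives 2 + 2 = 4."
large_lm = lambda prompt: "4"

print(lm_guided_cot("What is 2 + 2?", small_lm, large_lm))
```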
-
2024
This paper introduces Q-tuning, a novel approach for continual prompt tuning that enables the lifelong learning of a pre-trained language model. When learning a new task, Q-tuning trains a task-specific prompt by adding it to a prompt queue consisting of the prompts from older tasks. To better transfer the knowledge of old tasks, we design an adaptive knowledge aggregation technique that reweighs previous
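A minimal sketch of the data structure this describes, a queue of per-task prompts plus learned weights that re-aggregate the old prompts for a new task, is shown below. Since the teaser is truncated, the softmax-weighted aggregation used here is an assumption for illustration, not Q-tuning's actual reweighting technique.

```python
import torch
import torch.nn as nn

class PromptQueueSketch(nn.Module):
    """Toy sketch of continual prompt tuning with a prompt queue: old task
    prompts are frozen and queued; a learned weight vector re-aggregates them
    when a new task's prompt is trained (hypothetical aggregation rule)."""

    def __init__(self, prompt_len=8, dim=768):
        super().__init__()
        self.prompt_len, self.dim = prompt_len, dim
        self.queue = nn.ParameterList()            # frozen prompts from old tasks
        self.agg_weights = None                    # learned per-old-prompt weights
        self.current = None                        # trainable prompt for the new task

    def start_new_task(self):
        if self.current is not None:
            self.current.requires_grad_(False)     # freeze the finished task's prompt
            self.queue.append(self.current)
        self.current = nn.Parameter(torch.randn(self.prompt_len, self.dim) * 0.02)
        if len(self.queue) > 0:
            self.agg_weights = nn.Parameter(torch.zeros(len(self.queue)))

    def build_prompt(self):
        """Prefix fed to the frozen LM: aggregated old knowledge + new prompt."""
        if len(self.queue) == 0:
            return self.current
        w = torch.softmax(self.agg_weights, dim=0)
        aggregated = sum(w[i] * p for i, p in enumerate(self.queue))
        return torch.cat([aggregated, self.current], dim=0)

pq = PromptQueueSketch()
pq.start_new_task()             # task 1
pq.start_new_task()             # task 2: the previous prompt is frozen and queued
print(pq.build_prompt().shape)  # torch.Size([16, 768]): old aggregate + new prompt
```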
Academia
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.