Customer-obsessed science


Research areas
- August 4, 2025: Translating from natural to structured language, defining truth, and reasoning definitively remain topics of central concern in automated reasoning, but Amazon Web Services' new Automated Reasoning checks help address all of them.
Featured news
- Task-oriented dialogue systems are essential for applications ranging from customer service to personal assistants and are widely used across various industries. However, developing effective multi-domain systems remains a significant challenge due to the complexity of handling diverse user intents, entity types, and domain-specific knowledge across several domains. In this work, we propose DARD (Domain…
- 2024: In this paper, we introduce AdaSelection, an adaptive sub-sampling method to identify the most informative sub-samples within each minibatch to speed up the training of large-scale deep learning models without sacrificing model performance. Our method is able to flexibly combine an arbitrary number of baseline sub-sampling methods, incorporating the method-level importance and intra-method sample-level… (a minimal sub-sampling sketch appears after this list)
- 2024: The rise of large language models (LLMs) has significantly influenced the quality of information in decision-making systems, leading to the prevalence of AI-generated content and challenges in detecting misinformation and managing conflicting information, or "inter-evidence conflicts." This study introduces a method for generating diverse, validated evidence conflicts to simulate real-world misinformation…
- In-Context Learning (ICL) has enabled Large Language Models (LLMs) to excel as general-purpose models in zero- and few-shot task settings. However, since LLMs are often not trained on the downstream tasks, they lack crucial contextual knowledge from the data distributions, which limits their task adaptability. This paper explores using data priors to automatically customize prompts in ICL. We extract these… (a prompt-construction sketch appears after this list)
- 2024: Pre-trained language models, trained on large-scale corpora, demonstrate strong generalizability across various NLP tasks. Fine-tuning these models for specific tasks typically involves updating all parameters, which is resource-intensive. Parameter-efficient fine-tuning (PEFT) methods, such as the popular LoRA family, introduce low-rank matrices to learn only a few parameters efficiently. However, during… (a low-rank adapter sketch appears after this list)
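
The AdaSelection item above describes choosing the most informative sub-samples within each minibatch. The sketch below is a minimal illustration of that per-minibatch selection idea, assuming informativeness is scored by per-sample loss; the paper's actual scoring and its method-combination scheme are not reproduced here, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def select_informative(model, inputs, targets, keep_ratio=0.5):
    # Score each sample by its current loss and keep the highest-loss fraction.
    # (Per-sample loss as the informativeness score is an assumption for illustration.)
    with torch.no_grad():
        per_sample_loss = F.cross_entropy(model(inputs), targets, reduction="none")
    k = max(1, int(keep_ratio * inputs.size(0)))
    top_idx = per_sample_loss.topk(k).indices
    return inputs[top_idx], targets[top_idx]

def train_step(model, optimizer, inputs, targets, keep_ratio=0.5):
    # Update the model only on the selected sub-samples of the minibatch.
    sub_x, sub_y = select_informative(model, inputs, targets, keep_ratio)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(sub_x), sub_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```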
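The ICL item above proposes customizing prompts with priors extracted from the data. As a hedged illustration only, the snippet below injects one simple prior (the label distribution of the available examples) into a prompt; the template, the choice of statistic, and the helper names are assumptions rather than the paper's procedure.

```python
from collections import Counter

def build_prompt(train_examples, query, k=3):
    # train_examples: list of (text, label) pairs; query: the new input to classify.
    label_counts = Counter(label for _, label in train_examples)
    total = sum(label_counts.values())
    prior = ", ".join(f"{lbl}: {cnt / total:.0%}" for lbl, cnt in label_counts.most_common())
    demos = "\n".join(f"Input: {t}\nLabel: {l}" for t, l in train_examples[:k])
    return (
        f"Label distribution observed in the data: {prior}\n\n"
        f"{demos}\n\n"
        f"Input: {query}\nLabel:"
    )
```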
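The PEFT item above mentions LoRA-style low-rank matrices. The sketch below shows the standard low-rank update that family is built on: a frozen linear layer augmented with a trainable rank-r product, so only r × (d_in + d_out) parameters are learned per layer. Initialization and scaling details vary across implementations, and the class name here is hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init keeps the initial output unchanged
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen projection plus the learned low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Wrapping a model's existing projections this way leaves the original weights untouched while training only A and B.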
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.