Customer-obsessed science


- August 8, 2025: A new philosophy for developing LLM architectures reduces energy requirements, speeds up runtime, and preserves pretrained-model performance.
Featured news
- 2024: The rise of large language models (LLMs) has significantly influenced the quality of information in decision-making systems, leading to the prevalence of AI-generated content and challenges in detecting misinformation and managing conflicting information, or "inter-evidence conflicts." This study introduces a method for generating diverse, validated evidence conflicts to simulate real-world misinformation …
- In-Context Learning (ICL) has enabled Large Language Models (LLMs) to excel as general-purpose models in zero- and few-shot task settings. However, since LLMs are often not trained on the downstream tasks, they lack crucial contextual knowledge from the data distributions, which limits their task adaptability. This paper explores using data priors to automatically customize prompts in ICL (a loose illustration follows this list). We extract these …
- 2024: Pre-trained language models, trained on large-scale corpora, demonstrate strong generalizability across various NLP tasks. Fine-tuning these models for specific tasks typically involves updating all parameters, which is resource-intensive. Parameter-efficient fine-tuning (PEFT) methods, such as the popular LoRA family, introduce low-rank matrices to learn only a few parameters efficiently (a toy sketch of the low-rank idea follows this list). However, during …
- 2024: As the scale of data and models for video understanding rapidly expands, handling long-form video input in transformer-based models presents a practical challenge. Rather than resorting to input sampling or token dropping, which may result in information loss, token merging shows promising results when used in collaboration with transformers (a simplified sketch follows this list). However, the application of token merging for long-form video …
- 2024 Conference on Digital Experimentation @ MIT (CODE@MIT): Online sites typically evaluate the impact of new product features on customer behavior using online controlled experiments (or A/B tests). For many business applications, it is important to detect heterogeneity in these experiments [1], as new features often have a differential impact by customer segment, product group, and other variables (a minimal example follows this list). Understanding heterogeneity can provide key insights into causal …
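The in-context-learning item above mentions using data priors to customize prompts. As a loose, generic illustration only (not the paper's method; the review texts, labels, and prompt format are invented here), one very simple prior is the label distribution of a small labeled sample, stated in the prompt alongside the in-context examples:

```python
# Toy sketch: fold a simple "data prior" (the label distribution of a labeled
# sample) into an in-context prompt. Purely illustrative; not the paper's method.
from collections import Counter

examples = [
    ("great battery life", "positive"),
    ("arrived broken", "negative"),
    ("works as described", "positive"),
    ("stopped charging after a week", "negative"),
]

prior = Counter(label for _, label in examples)
total = sum(prior.values())
prior_line = ", ".join(f"{k}: {v / total:.0%}" for k, v in prior.items())

prompt = (
    f"Label distribution in this domain: {prior_line}.\n\n"
    + "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    + "\n\nReview: the strap snapped on day one\nSentiment:"
)
print(prompt)
```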
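The PEFT item above describes the LoRA idea of learning low-rank matrices while the pretrained weights stay fixed. A minimal PyTorch sketch of that idea (the rank r, scaling alpha, and layer sizes are arbitrary choices for illustration, not values from the paper):

```python
# Minimal LoRA-style adapter: freeze a pretrained linear layer and learn only
# a low-rank update B @ A on top of it.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # pretrained weights stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)   # only A and B contribute trainable parameters
```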
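The video-understanding item above refers to token merging as an alternative to dropping tokens. The following is a deliberately simplified sketch of the underlying intuition (average the most similar neighboring tokens to shorten the sequence); actual token-merging methods for video transformers are more involved, and nothing here is taken from the paper:

```python
# Simplified token merging: average up to n_merge of the most cosine-similar
# adjacent token pairs to reduce sequence length without dropping tokens.
import torch
import torch.nn.functional as F

def merge_adjacent_tokens(tokens: torch.Tensor, n_merge: int) -> torch.Tensor:
    """tokens: (seq_len, dim). Merges up to n_merge adjacent pairs."""
    sim = F.cosine_similarity(tokens[:-1], tokens[1:], dim=-1)
    merge_idx = set(sim.topk(n_merge).indices.tolist())
    out, i = [], 0
    while i < tokens.shape[0]:
        if i in merge_idx and i + 1 < tokens.shape[0]:
            out.append((tokens[i] + tokens[i + 1]) / 2)   # average the pair
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return torch.stack(out)

video_tokens = torch.randn(1024, 256)   # e.g., patch tokens from many frames
print(merge_adjacent_tokens(video_tokens, n_merge=128).shape)
```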
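For the A/B-testing item above, one standard way to look for heterogeneous treatment effects is to regress the outcome on treatment, a segment variable, and their interaction. The data below is synthetic and the column names are invented; this illustrates the general approach, not the method in the paper:

```python
# Detect heterogeneity in a simulated A/B test via an interaction-term regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "segment": rng.choice(["new_customer", "returning"], n),
})
# Simulate a larger lift for new customers than for returning ones.
lift = np.where(df["segment"] == "new_customer", 0.8, 0.1)
df["spend"] = 5 + lift * df["treated"] + rng.normal(0, 1, n)

model = smf.ols("spend ~ treated * C(segment)", data=df).fit()
print(model.summary().tables[1])   # the interaction term captures the heterogeneity
```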
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.