Customer-obsessed science


Research areas
- August 11, 2025: Trained on millions of hours of data from Amazon fulfillment centers and sortation centers, Amazon’s new DeepFleet models predict future traffic patterns for fleets of mobile robots.
Featured news
- Reasoning and planning with large language models in code development (survey for KDD 2024 tutorial), 2024. Large Language Models (LLMs) are revolutionizing the field of code development by leveraging their deep understanding of code patterns, syntax, and semantics to assist developers in various tasks, from code generation and testing to code understanding and documentation. In this survey, accompanying our proposed lecture-style tutorial for KDD 2024, we explore the multifaceted impact of LLMs on code development …
- 2024. Large Language Models (LLMs) tend to be unreliable in the factuality of their answers. To address this problem, NLP researchers have proposed a range of techniques to estimate an LLM’s confidence over facts. However, due to the lack of a systematic comparison, it is not clear how the different methods compare to one another. To fill this gap, we present a survey and empirical comparison of estimators of factual confidence … (one common baseline estimator is sketched after this list)
- Resources, Conservation and Recycling, 2024. The Circular Economy (CE) has been proposed as a strategy to promote the efficient use of resources, maximizing the benefits derived from materials and products through value-recovery strategies and minimizing waste generation. However, ambiguity remains about what makes a product circular and what its characteristics are when the CE concept is adapted for application at the product level. More clarity about the …
- FORC 2024. We study the problem of collecting a cohort or set that is balanced with respect to sensitive groups when group membership is unavailable or prohibited from use at deployment time. Specifically, our deployment-time collection mechanism does not reveal significantly more about the group membership of any individual sample than can be ascertained from base rates alone. To do this, we study a learner that …
- 2024. How do we transfer the relevant knowledge from ever larger foundation models into small, task-specific downstream models that can run at much lower costs? Standard transfer learning, using pre-trained weights as the initialization, transfers limited information and commits us to often massive pre-trained architectures. This procedure also precludes combining multiple pre-trained models that learn complementary … (a distillation-style sketch also follows this list)
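
The factual-confidence abstract above does not say which estimators it compares; as a hedged illustration, one widely used baseline scores an answer by the length-normalized likelihood the model assigned to its own tokens. The function below is a minimal, self-contained sketch of that idea, assuming you already have per-token log-probabilities (e.g., returned by a model API); the function name and example values are hypothetical, not from the paper.

```python
import math

def sequence_confidence(token_logprobs: list[float]) -> float:
    """Length-normalized sequence likelihood: a common baseline for
    estimating an LLM's confidence in a generated answer. Returns the
    geometric mean of the per-token probabilities, in (0, 1]."""
    if not token_logprobs:
        return 0.0  # no tokens, no evidence either way
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Hypothetical per-token log-probabilities for two short answers.
print(sequence_confidence([-0.1, -0.3, -0.05]))  # ≈ 0.86: high confidence
print(sequence_confidence([-1.2, -2.0, -1.5]))   # ≈ 0.21: low confidence
```

Surveys in this area typically also cover other estimator families, such as verbalized confidence (asking the model to state a probability) and consistency across repeated samples.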
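
Likewise, the last abstract asks how to move knowledge from a large foundation model into a small downstream model without committing to the large architecture. It does not name its method, but knowledge distillation is the standard alternative to weight-initialization transfer, so here is a minimal PyTorch sketch of soft-target distillation (Hinton et al., 2015); the hyperparameters and random tensors are illustrative only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    """Blend hard-label cross-entropy with a KL term that pulls the
    student toward the teacher's temperature-softened distribution."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * (T * T)  # T^2 rescales gradients
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Illustrative toy batch: 8 examples, 4 classes, random logits.
student_logits = torch.randn(8, 4, requires_grad=True)
teacher_logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()  # gradients flow only into the student
```

Because the student needs only the teacher's logits, this style of transfer can also combine several teachers (e.g., by averaging their soft targets), which plain weight-initialization transfer precludes.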
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.