Customer-obsessed science


- August 11, 2025 — Trained on millions of hours of data from Amazon fulfillment centers and sortation centers, Amazon’s new DeepFleet models predict future traffic patterns for fleets of mobile robots.
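
The teaser gives no model details, so the following is a purely illustrative sketch of the task shape (hypothetical grid encoding and toy network, not DeepFleet itself): represent floor traffic as grids of robot occupancy and train a small model to forecast the next grid from recent history.

```python
# Hypothetical sketch, not DeepFleet: forecast the next robot-occupancy
# grid from the previous T grids with a tiny fully connected network.
import torch
import torch.nn as nn

T, H, W = 4, 8, 8                                    # history length, grid size (assumed)
model = nn.Sequential(                               # maps T past grids to the next grid
    nn.Flatten(), nn.Linear(T * H * W, 256), nn.ReLU(), nn.Linear(256, H * W)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # stand-in data: temporally correlated random occupancy grids
    seq = torch.rand(32, 1, H, W) + 0.05 * torch.randn(32, T + 1, H, W)
    history, target = seq[:, :T], seq[:, T].reshape(32, -1)
    loss = nn.functional.mse_loss(model(history), target)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"toy forecasting loss: {loss.item():.4f}")
```
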
Featured news
- IEEE Robotics and Automation Letters 2024, IROS 2024 — In this paper we propose an approach to trajectory planning based on the purpose of the task. For a redundant manipulator, many end-effector poses in the task space can be achieved with multiple joint configurations. In planning the motion, we are free to choose the configuration that is optimal for the particular task requirement. Many previous motion-planning approaches have been proposed for the sole… (see the redundancy-resolution sketch after this list)
- In the field of Natural Language Processing (NLP), sentence pair classification is important in various real-world applications. Bi-encoders are commonly used to address these problems due to their low latency and their ability to act as effective retrievers. However, bi-encoders often underperform cross-encoders by a significant margin. To address this gap, many knowledge distillation… (see the distillation sketch after this list)
- Federated Learning (FL) is a popular algorithm for training machine learning models on user data constrained to edge devices (for example, mobile phones) due to privacy concerns. Typically, FL is trained with the assumption that no part of the user data can be egressed from the edge. However, in many production settings, specific data modalities or metadata are restricted to the device while others are not. For… (see the FedAvg sketch after this list)
- AISTATS 2024 — Multi-objective optimization (MOO) aims to optimize multiple, possibly conflicting objectives with widespread applications. We introduce a novel interacting particle method for MOO inspired by molecular dynamics simulations. Our approach combines overdamped Langevin and birth-death dynamics, incorporating a “dominance potential” to steer particles toward global Pareto optimality. In contrast to previous… (see the particle sketch after this list)
- 2024 — Large language models (LLMs) have demonstrated impressive in-context learning (ICL) capabilities, where an LLM makes predictions for a given test input together with a few input-output pairs (demonstrations). Nevertheless, the inclusion of demonstrations leads to a quadratic increase in the computational overhead of the self-attention mechanism. Existing solutions attempt to distill lengthy demonstrations… (see the prompt-compression sketch after this list)
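
For the trajectory-planning item, a minimal sketch of the underlying idea (my own toy setup, not the paper's algorithm): a planar 3-link arm is redundant for a 2-D end-effector position, so inverse kinematics from different seeds yields multiple valid configurations, and we keep the one that best suits a task-specific cost, here proximity to a rest posture.

```python
# Toy redundancy resolution, not the paper's method: solve IK from many
# seeds, then choose the configuration minimizing a task-specific cost.
import numpy as np

L = np.array([1.0, 0.8, 0.6])                         # assumed link lengths

def fk(q):
    """End-effector (x, y) of a planar 3-link arm with joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def ik_from_seed(target, q0, iters=300, step=0.2):
    """Damped-least-squares IK starting from seed configuration q0."""
    q = q0.copy()
    for _ in range(iters):
        err = target - fk(q)
        J = np.zeros((2, 3))                          # numerical Jacobian
        for i in range(3):
            dq = np.zeros(3); dq[i] = 1e-6
            J[:, i] = (fk(q + dq) - fk(q)) / 1e-6
        q += step * J.T @ np.linalg.solve(J @ J.T + 1e-3 * np.eye(2), err)
    return q

target = np.array([1.2, 0.9])
rest = np.zeros(3)                                    # task preference: stay near rest
rng = np.random.default_rng(0)
candidates = [ik_from_seed(target, rng.uniform(-np.pi, np.pi, 3)) for _ in range(16)]
valid = [q for q in candidates if np.linalg.norm(fk(q) - target) < 1e-3]
best = min(valid or candidates, key=lambda q: np.linalg.norm(q - rest))
print("chosen configuration:", np.round(best, 3), "reaches", np.round(fk(best), 3))
```
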
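For the sentence-pair item, a minimal distillation sketch (toy encoders standing in for pretrained models; not the paper's method): the bi-encoder scores a pair as the cosine similarity of independently encoded vectors, and is trained to regress the scores of a jointly encoding cross-encoder teacher.

```python
# Toy bi-encoder distillation, not the paper's KD method: regress the
# student's cosine-similarity score onto a cross-encoder teacher's score.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 32                                              # stand-in "sentence" vectors

class ToyBiEncoder(nn.Module):                        # student: encodes each side alone
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 64))
    def forward(self, a, b):
        return F.cosine_similarity(self.enc(a), self.enc(b), dim=-1)

class ToyCrossEncoder(nn.Module):                     # teacher: sees both sides jointly
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, a, b):
        return torch.tanh(self.net(torch.cat([a, b], dim=-1))).squeeze(-1)

student, teacher = ToyBiEncoder(), ToyCrossEncoder()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(300):                               # distill on random pairs
    a, b = torch.randn(128, DIM), torch.randn(128, DIM)
    with torch.no_grad():
        target = teacher(a, b)                        # soft teacher scores
    loss = F.mse_loss(student(a, b), target)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final distillation loss: {loss.item():.4f}")
```
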
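For the federated learning item, a minimal FedAvg-style sketch under simplifying assumptions (linear models, synthetic data; the paper's modality-specific setting is not reproduced): raw data stays on each client, and only locally trained weights are averaged by the server.

```python
# FedAvg sketch: clients train locally on private data; the server only
# ever sees and averages model weights, never the raw examples.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=64):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]      # raw data stays per-client
global_w = np.zeros(2)

for round_ in range(20):                              # communication rounds
    local_ws = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(10):                           # local SGD steps on-device
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)                            # only weights are egressed
    global_w = np.mean(local_ws, axis=0)              # server-side averaging

print("learned weights:", np.round(global_w, 3))      # approaches [2, -1]
```
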
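For the AISTATS item, a heavily simplified particle sketch (my own stand-in dynamics; random scalarization replaces the paper's "dominance potential"): particles take noisy gradient steps toward the Pareto set of two quadratic objectives, and a birth-death step resamples dominated particles from non-dominated ones.

```python
# Simplified interacting-particle MOO, not the paper's exact dynamics:
# noisy scalarized gradient steps (overdamped Langevin) plus birth-death
# resampling of Pareto-dominated particles.
import numpy as np

rng = np.random.default_rng(0)
f1 = lambda x: np.sum(x**2, axis=-1)                  # objective 1: min at 0
f2 = lambda x: np.sum((x - 1.0)**2, axis=-1)          # objective 2: min at 1
g1 = lambda x: 2 * x
g2 = lambda x: 2 * (x - 1.0)

X = rng.normal(scale=2.0, size=(64, 2))               # particle cloud
step, temp = 0.05, 1e-3

def dominated(F):
    """Mask of particles strictly dominated by some other particle."""
    worse = (F[:, None, :] >= F[None, :, :]).all(-1) & (F[:, None, :] > F[None, :, :]).any(-1)
    return worse.any(axis=1)

for t in range(500):
    lam = rng.uniform(size=(len(X), 1))               # random scalarization weights
    drift = lam * g1(X) + (1 - lam) * g2(X)
    X = X - step * drift + np.sqrt(2 * temp * step) * rng.normal(size=X.shape)
    if t % 25 == 0:                                   # birth-death: clone the non-dominated
        F = np.stack([f1(X), f2(X)], axis=1)
        dead = dominated(F)
        X[dead] = X[rng.choice(np.flatnonzero(~dead), size=dead.sum())]

# The Pareto set here is the segment between (0,0) and (1,1); the cloud's
# mean should land near its midpoint.
print("particle mean:", np.round(X.mean(axis=0), 2))
```
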
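For the in-context learning item, an illustrative sketch of why compressing demonstrations helps (hypothetical pooling module, not the paper's technique): self-attention cost grows quadratically in prompt length, so summarizing many demonstration tokens into a few soft vectors shrinks the context the model must attend over.

```python
# Hypothetical prompt compression, not the paper's method: pool long
# demonstrations into a few learned "soft" tokens before the test input.
import torch
import torch.nn as nn

D = 64
demo_tokens = torch.randn(1, 512, D)                  # long in-context demonstrations
test_tokens = torch.randn(1, 32, D)                   # the actual test input

pooler = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
soft_queries = nn.Parameter(torch.randn(1, 8, D))     # 8 learned summary slots

summary, _ = pooler(soft_queries, demo_tokens, demo_tokens)   # 512 -> 8 tokens
prompt = torch.cat([summary, test_tokens], dim=1)             # distilled prompt
print("distilled prompt shape:", tuple(prompt.shape))

for name, n in [("full prompt", 512 + 32), ("distilled", 8 + 32)]:
    print(f"{name}: {n} tokens -> ~{n * n:,} attention scores per layer")
```
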
Academia
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.