Customer-obsessed science
-
April 17, 2026 · 6 min read
Isabelle/HOL's balance of expressiveness, automation, and scalability enabled the world's first formally verified cloud hypervisor.
-
April 7, 2026 · 13 min read
-
March 20, 2026 · 15 min read
-
March 19, 2026 · 11 min read
Featured news
-
2025
For music streaming services expanding into audiobooks, cold-start personalization presents a critical challenge: as audiobooks are a newly introduced content type, the vast majority of existing users have no audiobook listening history. This domain-level cold-start scenario differs from traditional item or user cold-start scenarios, since personalization must begin before any behavioral data exists in …
-
FAIM 2025 · 2025
Conveyors play a crucial role in transporting packages and containers in manufacturing and production facilities. While computer vision has emerged as a promising technology for real-time monitoring of transportation systems, its application in conveyor operations remains in the early stages. This paper introduces an Industrial Internet of Things (IIoT) framework for real-time conveyor monitoring. We first …
-
AISTATS 2025, NeurIPS 2025 Workshop on Efficient Reasoning · 2025
Speculative decoding is an effective technique for accelerating large language model (LLM) inference by drafting multiple tokens in parallel. However, its practical speedup is often limited by a rigid verification step, which strictly enforces that the accepted token distribution exactly matches that of the target model. This constraint leads to the rejection of many plausible tokens, reducing the acceptance …
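The rigid verification step this abstract refers to can be sketched as the standard speculative-decoding rejection rule: accept a drafted token with probability min(1, p_target/p_draft), otherwise resample from the normalized residual distribution, which makes the accepted tokens exactly match the target model. This is a minimal single-token illustration with toy distributions, not the paper's relaxed variant; `verify_draft` and the 4-token vocabulary are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def verify_draft(p_target, p_draft, draft_token, rng):
    """Standard speculative-decoding verification for one drafted token.

    Accept the draft with probability min(1, p_t / p_d); on rejection,
    resample from the normalized residual max(p_target - p_draft, 0).
    This exact-match rule guarantees the accepted distribution equals
    the target model's, at the cost of rejecting plausible tokens.
    """
    pt, pd = p_target[draft_token], p_draft[draft_token]
    if rng.random() < min(1.0, pt / pd):
        return draft_token, True  # draft token accepted as-is
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(p_target), p=residual), False  # resampled

# Toy 4-token vocabulary: the draft model over-weights token 2.
p_draft  = np.array([0.1, 0.2, 0.6, 0.1])
p_target = np.array([0.1, 0.3, 0.3, 0.3])
token, accepted = verify_draft(p_target, p_draft, 2, rng)
```

Sampling draft tokens from `p_draft` and running them through this verifier yields, in aggregate, samples distributed exactly as `p_target`, which is the invariant a relaxed verification scheme would trade away for a higher acceptance rate.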
-
2025
Since the seminal work of TabPFN, research on tabular foundation models (TFMs) based on in-context learning (ICL) has challenged long-standing paradigms in machine learning. Without seeing any real-world data, models pretrained on purely synthetic datasets generalize remarkably well across diverse datasets, often using only a moderate number of in-context examples. This shifts the focus in tabular machine …
-
NeurIPS 2025 Workshop on Multimodal Algorithmic Reasoning · 2025
Large language models (LLMs) perform well on short-horizon tasks but struggle with long-horizon, multimodal scenarios that require multi-step reasoning, perception, and adaptive planning. We identify two key challenges in these settings: the difficulty of long-term coordination between planning and execution within single-agent architectures and the inefficiency of indiscriminate visual grounding. To address …
Collaborations
View all
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.