Customer-obsessed science
Research areas
April 8, 2026 | 6 min read
Amazon's RuleForge system uses agentic AI to generate production-ready detection rules 336% faster than traditional methods.
April 7, 2026 | 13 min read
March 20, 2026 | 15 min read
March 19, 2026 | 11 min read
Featured news
2026 | Minimizing the inference cost and latency of foundation models has become a crucial area of research. Optimization approaches include theoretically lossless methods and others without accuracy guarantees, like quantization. In all of these cases it is crucial to ensure that the model quality has not degraded. However, even at temperature zero, model generations are not necessarily robust even to theoretically …
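The kind of regression check this abstract describes can be illustrated with a short sketch: comparing greedy (temperature-zero) generations from the original and compressed models and reporting how often they match exactly. The `generate_original` and `generate_compressed` callables are hypothetical stand-ins, not anything from the paper.

```python
# Minimal sketch (assumptions, not the paper's method): quantify divergence between
# greedy generations of an original model and its compressed counterpart.
from typing import Callable, List, Sequence

def first_divergence(a: Sequence[int], b: Sequence[int]) -> int:
    """Index of the first position where two token sequences differ, or -1 if identical."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return -1 if len(a) == len(b) else min(len(a), len(b))

def compare_models(prompts: List[str],
                   generate_original: Callable[[str], List[int]],
                   generate_compressed: Callable[[str], List[int]]) -> dict:
    """Report exact-match rate and mean position of first divergence under greedy decoding."""
    exact, divergences = 0, []
    for p in prompts:
        ref, hyp = generate_original(p), generate_compressed(p)
        d = first_divergence(ref, hyp)
        if d == -1:
            exact += 1
        else:
            divergences.append(d)
    return {
        "exact_match_rate": exact / len(prompts),
        "mean_first_divergence": sum(divergences) / len(divergences) if divergences else None,
    }
```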
2026 | Knowledge distillation has become a crucial technique to transfer the capacities of large language models (LLMs) to smaller, more efficient models for practical deployment. While recent work exploits rich information from intermediate states of the teacher model for more effective knowledge transfer, imperfect knowledge from the teacher can also mislead student learning, restricting the student's generalization …
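As a rough illustration of intermediate-state distillation (an assumed formulation, not the paper's), a student can be trained against both the teacher's softened logits and one projected hidden state:

```python
# Minimal sketch: soft-target KL on logits plus an MSE match on one intermediate
# hidden state projected to the teacher's width. Shapes, names, and weights are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,   # (batch, vocab)
                      teacher_logits: torch.Tensor,   # (batch, vocab)
                      student_hidden: torch.Tensor,   # (batch, d_student)
                      teacher_hidden: torch.Tensor,   # (batch, d_teacher)
                      proj: torch.nn.Linear,          # maps d_student -> d_teacher
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soft-target loss: KL between temperature-softened teacher and student distributions.
    t = temperature
    soft = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    # Intermediate-state loss: match the projected student hidden state to the teacher's.
    hidden = F.mse_loss(proj(student_hidden), teacher_hidden)
    return alpha * soft + (1.0 - alpha) * hidden
```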
ICLR 2026 Workshop on Time Series in the Age of Large Models | 2026 | Foundation models promise zero-shot forecasting across domains, yet their effectiveness for cold-start scenarios with zero-inflated distributions remains underexplored. We study cross-domain demand forecasting, predicting outcomes for items launching in new domains without historical data, where a substantial fraction of launches (≈ 30%) yield zero outcomes and overestimation carries asymmetric costs. We …
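One way the zero inflation and asymmetric overestimation costs described here might be handled is a simple hurdle-style decomposition with an asymmetric error penalty. This is a hedged sketch under assumed names (`p_nonzero`, `mu_if_nonzero`), not the paper's model:

```python
# Minimal sketch: hurdle-style point forecast for zero-inflated demand, plus a loss
# that penalizes over-forecasting more heavily than under-forecasting.
import torch

def expected_demand(p_nonzero: torch.Tensor, mu_if_nonzero: torch.Tensor) -> torch.Tensor:
    """Point forecast: P(demand > 0) * E[demand | demand > 0]."""
    return p_nonzero * mu_if_nonzero

def asymmetric_loss(forecast: torch.Tensor, actual: torch.Tensor,
                    over_weight: float = 2.0, under_weight: float = 1.0) -> torch.Tensor:
    """Weight over-forecast errors (forecast > actual) more than under-forecast errors."""
    diff = forecast - actual
    over = torch.clamp(diff, min=0.0)     # amount by which we over-forecast
    under = torch.clamp(-diff, min=0.0)   # amount by which we under-forecast
    return (over_weight * over + under_weight * under).mean()
```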
2026 | Variational Autoencoders (VAEs) are a powerful alternative to matrix factorization for recommendation. A common technique in VAE-based collaborative filtering (CF) consists in applying binary input masking to user interaction vectors, which improves performance but remains underexplored theoretically. In this work, we analyze how collaboration arises in VAE-based CF and show it is governed by latent proximity …
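The binary input masking the abstract refers to can be sketched as denoising-style corruption of the interaction matrix before encoding; the masking rate and the `vae`, `reconstruction_loss`, and `kl_term` helpers mentioned in the usage comment are illustrative assumptions:

```python
# Minimal sketch: randomly mask a user's observed interactions before feeding them
# to the VAE encoder, while reconstructing the unmasked vector.
import torch

def mask_interactions(x: torch.Tensor, keep_prob: float = 0.5) -> torch.Tensor:
    """Randomly zero out entries of a float 0/1 (users, items) interaction matrix."""
    keep = torch.bernoulli(torch.full_like(x, keep_prob))
    return x * keep

# Usage (hypothetical training step): encode the masked vector, reconstruct the full one.
# x_masked = mask_interactions(x)          # encoder input
# recon, mu, logvar = vae(x_masked)        # hypothetical VAE forward pass
# loss = reconstruction_loss(recon, x) + kl_term(mu, logvar)
```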
ACL 2026 Findings | 2026 | Recent Long-Context Language Models (LCLMs) can process hundreds of thousands of tokens in a single prompt, enabling new opportunities for knowledge-intensive multi-hop reasoning by integrating large sets of retrieved documents or, in some cases, all necessary information directly. However, simply feeding more documents into the context window fails to capture how evidence should be connected. We address …
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.
View all