Customer-obsessed science
- December 10, 2025 · 5 min read
  New audio-processing technology is making entertainment more accessible for millions of viewers.
Featured publications
- NeurIPS 2025 Workshop on Bridging Language, Agent, and World Models (LAW), 2025
  We observe that current state-of-the-art web agents cannot adapt effectively to new environments without neural-network fine-tuning; without it, they produce inefficient execution plans because they lack awareness of the new environment's structure and dynamics. To address this limitation, we introduce ATLAS (Actor-Critic Task-completion with Look-ahead Action Simulation), a memory-augmented …
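The core idea named in the title, look-ahead action simulation with an actor-critic split, can be illustrated with a toy sketch: roll each candidate action forward in a world model and let a critic score the simulated outcomes. Everything here (the world model, the critic, the action names) is an illustrative assumption, not the paper's implementation.

```python
# Hedged sketch of one step of look-ahead action simulation, ATLAS-style.
# The toy world model and critic below are illustrative assumptions only.

def simulate(state, action):
    """Toy world model: predict the next state after a candidate action."""
    return state + [action]

def critic(state, goal):
    """Toy critic: score a state by how many goal steps it has completed."""
    return sum(1 for step in state if step in goal)

def choose_action(state, candidate_actions, goal):
    """Actor: simulate each candidate in the world model and pick the
    action whose predicted outcome the critic scores highest."""
    scored = [(critic(simulate(state, a), goal), a) for a in candidate_actions]
    return max(scored)[1]

goal = ["open_search", "type_query", "click_result"]
state = ["open_search"]
best = choose_action(state, ["click_ad", "type_query", "scroll"], goal)
# The simulated roll-out favors "type_query", the only action that
# advances the goal, without ever executing the other candidates.
```

The point of the look-ahead is that bad actions are rejected in simulation rather than in the live environment, which is what makes planning cheap to correct.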
- NeurIPS 2025 Workshop on Efficient Reasoning, 2025
  As Large Language Models (LLMs) continue to evolve, practitioners face a growing range of options for enhancing inference-time performance without model retraining, including budget tuning and multi-step techniques like self-reflection. While these methods improve output quality, they create complex trade-offs among accuracy, cost, and latency that remain poorly understood across domains. This paper systematically …
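The trade-off the abstract describes is easy to make concrete: each self-reflection round is an extra model call, so cost and latency scale linearly with the reflection budget. The per-call price and latency below are illustrative assumptions, not measurements from the paper.

```python
# Hedged sketch of the cost/latency bookkeeping for multi-round
# self-reflection at inference time. Per-call numbers are assumptions.

def reflection_budget(rounds, cost_per_call=0.002, latency_per_call=1.5):
    """Each reflection round adds one model call on top of the initial one."""
    calls = 1 + rounds
    return {
        "calls": calls,
        "cost_usd": round(calls * cost_per_call, 4),
        "latency_s": round(calls * latency_per_call, 2),
    }

budget = reflection_budget(rounds=3)
# Three reflection rounds quadruple both cost and latency versus a
# single call, while the accuracy gain per round typically diminishes.
```

This is exactly the kind of accounting a practitioner has to weigh against accuracy gains when tuning an inference-time budget.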
- Amazon Technical Reports, 2025
  We present Amazon Nova 2, a family of four foundation models designed to meet diverse enterprise needs across reasoning, multimodal processing, and real-time conversational AI. The family includes Nova 2 Lite and Nova 2 Pro — multimodal models with dynamic reasoning capabilities that allow customers to balance accuracy, speed, and efficiency through configurable "extended thinking" controls; Nova 2 Omni …
- 2025
  Customer service often relies on human agents, which, while effective, can be costly and slow to scale. Recent advances in intelligent chatbots, particularly Retrieval-Augmented Generation (RAG) models, have significantly enhanced efficiency by integrating large language models with external knowledge retrieval. However, developing a multi-turn RAG-based chatbot for real-world customer service presents …
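The RAG pattern the abstract refers to can be sketched in a few lines: retrieve the most relevant knowledge snippet for the user's query, then ground the model's prompt in that snippet plus the dialogue history. The token-overlap retriever, knowledge base, and prompt format below are toy assumptions, not the paper's system.

```python
# Hedged sketch of one turn of a multi-turn RAG chatbot. The knowledge
# base, overlap-based retriever, and prompt template are assumptions.

KNOWLEDGE = {
    "returns": "Items can be returned within 30 days of delivery.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(query):
    """Pick the document sharing the most tokens with the query
    (a stand-in for embedding-based retrieval)."""
    q = set(query.lower().split())
    return max(KNOWLEDGE.values(),
               key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(history, query):
    """Ground the model prompt in retrieved context plus prior turns."""
    context = retrieve(query)
    turns = "\n".join(history + [f"User: {query}"])
    return f"Context: {context}\n{turns}\nAssistant:"

prompt = build_prompt(["User: hi", "Assistant: Hello! How can I help?"],
                      "how long does shipping take")
```

The multi-turn difficulty the abstract gestures at lives in `history`: later queries ("and for returns?") only retrieve well once the dialogue context is folded into the retrieval query, which this minimal sketch does not do.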
- 2025
  Reasoning-enhanced large language models (RLLMs), whether explicitly trained for reasoning or prompted via chain-of-thought (CoT), have achieved state-of-the-art performance on many complex reasoning tasks. However, we uncover a surprising and previously overlooked phenomenon: explicit CoT reasoning can significantly degrade instruction-following accuracy. Evaluating 20+ models on two benchmarks: IFEval …
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.