Customer-obsessed science
Research areas
- May 15, 2026 · 5 min read — A new scaling law that relates particular architectural choices to loss helps identify models that improve throughput by up to 47% with no loss of accuracy.
- May 14, 2026 · 16 min read
- April 15, 2026 · 8 min read
Featured news
- 2026 — Large Language Model (LLM)-based Multi-Agent Systems (MAS) enable complex problem-solving but introduce significant debugging challenges, characterized by long interaction traces, inter-agent dependencies, and delayed error manifestation. Existing diagnostic approaches often rely on expensive expert annotation or "LLM-as-a-judge" paradigms, which struggle to pinpoint decisive error steps within extended …
- 2026 — In this paper, we investigate the problem of how to effectively master tool-use to solve complex visual reasoning tasks for Multimodal Large Language Models. To achieve that, we propose a novel Tool-supervised Reinforcement Learning (ToolsRL) framework, with direct tool supervision for more effective tool-use learning. We focus on a series of simple, native, and interpretable visual tools, including zoom-in …
- CVPR 2026 Workshop on TRUE-V, 2026 — Vision Language Models (VLMs) are increasingly adopted for document understanding tasks, often replacing traditional OCR systems. However, VLMs exhibit a fundamental difference: they frequently correct or rewrite imperfect text rather than transcribe it literally, a behavior that remains largely underexplored. We present a systematic investigation through controlled experiments with intentionally perturbed …
- CVPR 2026 Findings Track, 2026 — Multimodal large language models (MLLMs) have achieved impressive performance on visual perception and reasoning tasks with RGB imagery, yet they remain fragile under common degradations, such as fog, blur, or low-light conditions. Infrared (IR) imaging, a well-established complement to RGB, offers inherent robustness in these conditions, but its integration into MLLMs remains underexplored. To bridge this …
- 2026 — Current multimodal image retrieval benchmarks focus on relatively simple queries where target images are either described directly or by simple composition with an input image. When retrieval requires complex reasoning to determine the target image, the task becomes significantly more challenging, yet standardized benchmarks for this setting do not exist. To fill this gap, we introduce RMIR, a benchmark …
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.