Customer-obsessed science
Research areas
- December 10, 2025 · 5 min read
  New audio-processing technology is making entertainment more accessible for millions of viewers.
- December 5, 2025 · 6 min read
- November 20, 2025 · 4 min read
Featured news
- NeurIPS 2025 Workshop on Evaluating the Evolving LLM Lifecycle, 2025
  Rigorous evaluation of Large Language Models (LLMs) is critical for their adoption in high-stakes applications, particularly in highly technical domains that require deep expertise and specialized training. The proliferation of LLMs from various providers further underscores the need for comprehensive model performance benchmarking. Like many standardized tests and certification exams, several prominent…
- NeurIPS 2025 Workshop on Efficient Reasoning, 2025
  We introduce PHLoRA (Post-hoc LoRA), a simple yet powerful method to extract low-rank adaptation adapters from full-rank fine-tuned models without requiring access to training data or gradients. By computing the low-rank decomposition of weight differences between a base model and its fine-tuned counterpart, our method reconstructs adapter modules that can be merged or dynamically routed at inference time…
- NeurIPS 2025 Workshop on ResponsibleFM, 2025
  Given the constant flux in the world of geopolitics, staying up to date and compliant with international trade regulations is challenging. Whether LLMs can aid this task is a frontier hitherto unexplored in the LLM evaluation literature, primarily due to the lack of a dataset for benchmarking the capabilities of LLMs on questions about international trade. To address this gap, we…
- Transactions on Machine Learning Research, 2025
  Despite fast progress, efficiently training large language models (LLMs) on extremely long contexts remains challenging. Existing methods fall back to training LLMs with short contexts (up to a few thousand tokens) and rely on inference-time techniques when evaluating on very long contexts (above 1M tokens). Training on very long contexts is limited by GPU memory availability and the prohibitively long training…
- 2025
  Large Language Model (LLM)-powered agents have emerged as a new paradigm for complex, multi-turn human-AI interactions, yet most existing systems adopt a one-size-fits-all approach, neglecting the evolving preferences and goals of individual users. This limitation hinders their ability to maintain alignment and coherence over extended multi-turn interactions and dynamic tasks. To address this gap, we propose…
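The post-hoc adapter extraction described in the PHLoRA abstract above can be sketched in a few lines: take the difference between fine-tuned and base weights for a layer, then factor it with a truncated SVD into LoRA-style low-rank matrices. This is a minimal illustration under assumed conventions; the function name, the rank choice, and how the singular values are split between the two factors are assumptions, not the paper's actual implementation.

```python
import numpy as np

def extract_lora(w_base, w_ft, rank):
    """Illustrative post-hoc low-rank adapter extraction.

    Factors the weight delta (w_ft - w_base) into LoRA-style
    matrices a @ b of the given rank via truncated SVD.
    Names and conventions are hypothetical, not from the paper.
    """
    delta = w_ft - w_base                         # full-rank weight difference
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions; split sqrt(S) across both factors
    a = u[:, :rank] * np.sqrt(s[:rank])           # shape (d_out, rank)
    b = np.sqrt(s[:rank])[:, None] * vt[:rank]    # shape (rank, d_in)
    return a, b

# Toy check: a delta that is exactly rank 2 is recovered (up to tolerance)
rng = np.random.default_rng(0)
w_base = rng.normal(size=(16, 8))
true_delta = rng.normal(size=(16, 2)) @ rng.normal(size=(2, 8))
w_ft = w_base + true_delta
a, b = extract_lora(w_base, w_ft, rank=2)
print(np.allclose(a @ b, true_delta))  # True: merged factors reproduce the delta
```

Because the factors approximate the weight delta directly, `a @ b` can be merged back into the base weights or applied as a separate adapter at inference time, which is the property the abstract highlights.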
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.