Customer-obsessed science


Research areas
- February 27, 2025: Prototype is the first realization of a scalable, hardware-efficient quantum computing architecture based on bosonic quantum error correction.
Featured news
- 2025: We study the post-training of large language models (LLMs) with human preference data. Recently, direct preference optimization and its variants have shown considerable promise in aligning language models, eliminating the need for reward models and online sampling. Despite these benefits, these methods rely on explicit assumptions about the Bradley-Terry (BT) model, which makes them prone to over-fitting
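For context on the assumption the teaser above refers to: the Bradley-Terry model scores a pairwise preference as a logistic function of the gap between two scalar rewards. A minimal sketch (the function and variable names here are illustrative, not taken from the paper):

```python
import math

def bradley_terry_prob(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: the probability that the 'chosen' response is
    preferred over the 'rejected' one is a logistic (sigmoid) function of
    the difference between their scalar rewards (log-strengths)."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

# Equal rewards give a 50/50 preference; a large positive reward gap
# pushes the preference probability toward 1.
```

Direct preference optimization and its variants fit model parameters so that this probability matches observed human preference pairs, which is why a mismatch between the BT assumption and the data can lead to over-fitting.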
- ACM Conference on Intelligent User Interfaces 2025: Machine learning in production needs to balance multiple objectives. This is particularly evident in ranking or recommendation models, where conflicting objectives such as user engagement, satisfaction, diversity, and novelty must be considered at the same time. However, designing multi-objective rankers is inherently a dynamic, wicked problem: there is no single optimal solution, and the needs evolve
- NAACL Findings 2025: Large language models (LLMs) are increasingly used as automated judges to evaluate recommendation systems, search engines, and other subjective tasks, where relying on human evaluators can be costly, time-consuming, and unscalable. LLMs offer an efficient solution for continuous, automated evaluation. However, since the systems that are built and improved with these judgments are ultimately designed for
- Journal of Applied Physics, 2025: Superconducting micro-resonators have applications in sensors and quantum computing. Measurement of resonator internal loss in the single-photon regime is a common tool for studying the origins of dissipation, noise, and decoherence in quantum circuits, as well as for characterizing materials used in quantum devices. However, such measurements are challenging and time-consuming, with large uncertainties
- 2025: Automatic speech recognition (ASR) systems can benefit from incorporating contextual information to improve recognition accuracy, especially for uncommon words or phrases. Current approaches like custom vocabularies or prompting with previous transcript segments provide limited contextual control. Compared to existing context-biasing methods, retrieval-augmented generation (RAG) promises more flexible and scalable contextual control by
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.