Customer-obsessed science


Research areas
- August 4, 2025: Translating from natural to structured language, defining truth, and definitive reasoning remain topics of central concern in automated reasoning, but Amazon Web Services’ new Automated Reasoning checks help address all of them.
Featured news
- Large Language Models (LLMs) are known to hallucinate and generate non-factual outputs, which can undermine user trust. Traditional methods to directly mitigate hallucinations, such as representation editing and contrastive decoding, often require additional training data and involve high implementation complexity. While ensemble-based approaches harness multiple LLMs to tap into the "wisdom of crowds" … (a minimal sketch of ensemble voting over several model outputs follows this list).
- Diffusion models have revolutionized the landscape of generative AI, particularly in the application of text-to-image generation. However, their powerful capability to generate high-fidelity images raises significant security concerns about malicious use of state-of-the-art (SOTA) text-to-image diffusion models, notably the risks of misusing personal photos and of copyright infringement through the …
- 2025: In this paper, we present HALLUCANA, a canary lookahead to detect and correct factuality hallucinations of Large Language Models (LLMs) in long-form generation. HALLUCANA detects and intervenes as soon as traces of hallucination emerge, during and even before generation. To support timely detection, we exploit the internal factuality representation in the LLM hidden space, where we investigate various proxies … (a hedged sketch of a linear factuality probe over hidden states follows this list).
- 2025: General-purpose language models (LMs) are aligned to diverse user intents but fall short when it comes to specific applications. While fine-tuning is the default method for customized alignment, human annotations are often unavailable in many customization scenarios. Based on the observation that one of the main issues in LM customization is constraint adherence, we investigate the feasibility of using … (a toy constraint-adherence checker is sketched after this list).
- DVCON 2025: Machine Learning (ML) accelerators are increasingly adopting diverse datatypes and data formats, such as FP16 and microscaling, to optimize key performance metrics such as inference accuracy, latency, and power consumption. However, hardware modules like the arithmetic units and signal-processing blocks associated with these datatypes pose unique verification challenges. In this work, we present an end-to-end … (a conceptual FP16-versus-reference comparison is sketched after this list).
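
The ensemble-based, "wisdom of crowds" idea mentioned in the hallucination item above can be illustrated with a minimal sketch: ask several models the same question and keep the majority answer. This is a generic illustration under simple assumptions (exact-match answers, placeholder model callables), not the featured paper's method.

```python
from collections import Counter
from typing import Callable, List

def majority_vote(prompt: str, models: List[Callable[[str], str]]) -> str:
    """Query each model once and return the most common normalized answer.

    A generic ensemble-voting sketch; a real system would also handle ties,
    semantically equivalent answers, and confidence weighting.
    """
    answers = [model(prompt) for model in models]
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

# Placeholder callables standing in for calls to separate LLM endpoints.
models = [
    lambda p: "Paris",
    lambda p: "paris",
    lambda p: "Lyon",
]
print(majority_vote("What is the capital of France?", models))  # -> "paris"
```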
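
HALLUCANA's reference to the "internal factuality representation in the LLM hidden space" points at probing hidden states. Below is a minimal, hedged sketch of one such proxy: a linear probe that maps a hidden-state vector to a factuality score. The probe weights and hidden size here are placeholders; in practice the probe would be fit on hidden states labeled factual versus hallucinated. This illustrates the general probing technique, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 768  # assumed hidden size; depends on the underlying LLM

# Placeholder probe parameters; a real probe would be trained (e.g., via
# logistic regression) on labeled hidden states.
probe_w = rng.normal(size=HIDDEN_DIM)
probe_b = 0.0

def factuality_score(hidden_state: np.ndarray) -> float:
    """Map a hidden-state vector to a (0, 1) score via a linear probe + sigmoid."""
    logit = float(hidden_state @ probe_w + probe_b)
    return 1.0 / (1.0 + np.exp(-logit))

# Dummy activation standing in for the model's hidden state at some token.
h = rng.normal(size=HIDDEN_DIM)
if factuality_score(h) < 0.5:
    print("low factuality score: intervene (e.g., re-rank or regenerate)")
else:
    print("score looks factual: continue generation")
```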
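
"Constraint adherence" in the LM-customization item can be made concrete with a toy checker: given hypothetical constraints attached to an instruction (required keywords, a length cap), score how well a response satisfies them. The constraint types and names here are invented for illustration and do not reproduce the featured paper's approach.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Constraints:
    """Hypothetical constraints attached to a customized instruction."""
    required_keywords: List[str]
    max_words: int

def adherence_score(response: str, c: Constraints) -> float:
    """Fraction of constraints the response satisfies (1.0 = fully adherent)."""
    checks = [kw.lower() in response.lower() for kw in c.required_keywords]
    checks.append(len(response.split()) <= c.max_words)
    return sum(checks) / len(checks)

c = Constraints(required_keywords=["refund", "30 days"], max_words=40)
reply = "You can request a refund within 30 days of purchase."
print(adherence_score(reply, c))  # 1.0: both keywords present, under the word cap
```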
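
For the DVCON item, a software-level intuition for why datatypes such as FP16 need careful verification is to compare reduced-precision arithmetic against a higher-precision reference and bound the error. Real accelerator verification targets RTL with testbenches and formal tools; the NumPy comparison below is only a conceptual sketch with an assumed error tolerance.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small dot product with FP16 inputs versus an FP64 reference.
a = rng.normal(size=256).astype(np.float16)
b = rng.normal(size=256).astype(np.float16)

result_fp16 = np.dot(a, b)                                       # reduced-precision result
result_ref = np.dot(a.astype(np.float64), b.astype(np.float64))  # high-precision reference

abs_err = abs(float(result_fp16) - result_ref)
print(f"fp16={float(result_fp16):.4f}  ref={result_ref:.4f}  abs error={abs_err:.4e}")

# A verification flow would assert that the error stays within a
# datatype-specific bound; the tolerance below is an assumption for this toy example.
assert abs_err < 1.0, "FP16 error exceeded the assumed tolerance"
```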
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.