Customer-obsessed science


Research areas
- May 21, 2025: By combining surveys with ads targeted to metro and commuter rail lines, Amazon researchers identify the fraction of residents of different neighborhoods exposed to the ads and measure ad effectiveness.
Featured news
- 2025: The use of human speech to train LLMs poses privacy concerns due to these models' ability to generate samples that closely resemble artifacts in the training data. We propose a speaker privacy-preserving representation learning method through the Universal Speech Codec (USC), a computationally efficient codec that disentangles speech into: (i) privacy-preserving semantically rich representations, capturing …
- 2025: This paper introduces MO-LightGBM, an open-source library built upon LightGBM, specifically designed to offer an integrated, versatile, and easily adaptable framework for Multi-objective Learning to Rank (MOLTR). MO-LightGBM supports diverse multi-objective optimization (MOO) settings and incorporates 12 state-of-the-art optimization strategies. Its modular architecture enhances usability and flexibility.
- 2025: Language localization is the adaptation of written content to different linguistic and cultural contexts. The ability to localize written content is crucial for global businesses to provide a consistent and reliable customer experience across diverse markets. Traditional methods have approached localization as an application of machine translation (MT), but localization requires more than linguistic conversion.
- AAAI 2025 Workshop on Preventing and Detecting LLM Misinformation, 2025: Unlearning aims to remove copyrighted, sensitive, or private content from large language models (LLMs) without a full retraining. In this work, we develop a multi-task unlearning benchmark (LUME) which features three tasks: (1) unlearn synthetically generated creative short novels, (2) unlearn synthetic biographies with sensitive information, and (3) unlearn a collection of public biographies. We further …
- NAACL 2025 Workshop on TrustNLP, 2025: Large Language Models (LLMs) have demonstrated excellent capabilities in Question Answering (QA) tasks, yet their ability to identify and address ambiguous questions remains underdeveloped. Ambiguities in user queries often lead to inaccurate or misleading answers, undermining user trust in these systems. Despite prior attempts using prompt-based methods, performance has largely been equivalent to random …
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.
View all