Customer-obsessed science


Research areas

- September 2, 2025: Audible's ML algorithms connect users directly to relevant titles, reducing the number of purchase steps for millions of daily users.
Featured news
- 2024: Despite being widely spoken, dialectal variants of languages are frequently considered low-resource due to a lack of writing standards and orthographic inconsistencies. As a result, training natural language understanding (NLU) systems relies primarily on standard-language resources, leading to biased and inequitable NLU technology that underserves dialectal speakers. In this paper, we propose to address …
- Transactions of the Association for Computational Linguistics (TACL), 2024: Answering factual questions from heterogeneous sources, such as graphs and text, is a key capability of intelligent systems. Current approaches either (i) perform question answering over text and structured sources as separate pipelines followed by a merge step, or (ii) provide an early integration that gives up the strengths of the particular information sources. To solve this problem, we present "HumanIQ", a method …
- 2024: In recent years, deep neural networks (DNNs) have become integral to many real-world applications. A pressing concern in these deployments pertains to their vulnerability to adversarial attacks. In this work, we focus on the transferability of adversarial examples in a real-world deployment setting involving both a cloud model and an edge model. The cloud model is a black-box victim model, while the edge …
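Transferability as described above (an example crafted against one accessible model also fooling a different, black-box model) can be sketched with a minimal FGSM-style perturbation on two toy linear classifiers. The weights and input below are invented for illustration; this is not the paper's models or setup:

```python
import numpy as np

# Two toy linear classifiers trained on similar data tend to have
# correlated weights, which is what makes adversarial examples transfer.
w_surrogate = np.array([1.0, -2.0, 0.5])   # white-box model we can attack
w_victim = np.array([0.9, -1.8, 0.6])      # black-box victim model

def predict(w, x):
    """Binary decision of a linear classifier with score w @ x."""
    return 1 if w @ x > 0 else 0

x = np.array([0.2, -0.3, 1.0])   # clean input, classified as 1 by both models
eps = 0.5                        # perturbation budget

# FGSM-style step against the surrogate: for a linear score w @ x,
# the gradient with respect to the input is just w itself.
x_adv = x - eps * np.sign(w_surrogate)

print(predict(w_surrogate, x), predict(w_victim, x))          # clean: 1 1
print(predict(w_surrogate, x_adv), predict(w_victim, x_adv))  # adversarial: 0 0
```

The perturbation is computed using only the surrogate's weights, yet it also flips the victim's prediction, because the two decision boundaries are closely aligned.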
- 2024: Recent advances in machine learning have demonstrated that multi-modal pre-training can improve automatic speech recognition (ASR) performance compared to randomly initialized models, even when models are fine-tuned on uni-modal tasks. Existing multi-modal pre-training methods for the ASR task have primarily focused on single-stage pre-training, where a single unsupervised task is used for pre-training, followed …
-
2024Recent research has shown that large language models (LLMs) can achieve remarkable translation performance through supervised fine tuning (SFT) using only a small amount of parallel data. However, SFT simply instructs the model to imitate the reference translations at the token level, making it vulnerable to the noise present in the references. Hence, the assistance from SFT often reaches a plateau once
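The token-level imitation mentioned above is, in standard practice, a cross-entropy loss averaged over the reference tokens, which is why noisy reference tokens get imitated just like correct ones. A minimal sketch with an invented three-word vocabulary (illustrative only; not the paper's implementation):

```python
import math

def token_level_sft_loss(token_probs, reference):
    """Average negative log-probability the model assigns to each
    reference token. SFT minimizes this, so the model is pushed to
    reproduce noisy reference tokens as faithfully as correct ones."""
    return -sum(math.log(p[tok]) for p, tok in zip(token_probs, reference)) / len(reference)

# Hypothetical model output: each entry is the model's distribution
# over a tiny vocabulary at one output position.
probs = [
    {"the": 0.7, "a": 0.2, "cat": 0.1},
    {"the": 0.1, "a": 0.2, "cat": 0.7},
]
clean_ref = ["the", "cat"]
noisy_ref = ["a", "cat"]   # first reference token is noise

print(round(token_level_sft_loss(probs, clean_ref), 3))  # lower loss
print(round(token_level_sft_loss(probs, noisy_ref), 3))  # higher loss drives imitation of the noise
```

Since the loss treats every reference token as ground truth, gradient descent moves probability mass toward the noisy token "a" just as readily as toward the correct "the".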
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.