Customer-obsessed science
- December 10, 2025 · 5 min read — New audio-processing technology is making entertainment more accessible for millions of viewers.
- December 8, 2025 · 8 min read
- December 5, 2025 · 6 min read
Featured news
- EMNLP 2024 Workshop on Social Influence in Conversations, 2024 — Information-Seeking Dialogue (ISD) agents aim to provide accurate responses to user queries. While proficient in directly addressing user queries, these agents, as well as LLMs in general, predominantly exhibit reactive behavior, lacking the ability to generate proactive responses that actively engage users in sustained conversations. However, existing definitions of proactive dialogue in this context do…
- 2024 — Query Autocomplete (QAC) is a critical feature in modern search engines, facilitating user interaction by predicting search queries based on input prefixes. Despite its widespread adoption, the absence of large-scale, realistic datasets has hindered advancements in QAC system development. This paper addresses this gap by introducing AmazonQAC, a new QAC dataset sourced from Amazon Search logs, comprising…
- NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability, 2024 — In LLM alignment and many other ML applications, one often faces the Multi-Objective Fine-Tuning (MOFT) problem, i.e., fine-tuning an existing model with datasets labeled w.r.t. different objectives simultaneously. To address this challenge, we propose the HyperDPO framework, a hypernetwork-based approach that extends the Direct Preference Optimization (DPO) technique, originally developed for efficient LLM…
- NeurIPS 2024 Workshop on Multimodal Algorithmic Reasoning, 2024 — Understanding long-form video content presents significant challenges due to its temporal complexity and the substantial computational resources required. In this work, we propose an agent-based approach to enhance both the efficiency and effectiveness of long-form video understanding by utilizing large language models (LLMs) and their tool-harnessing ability. A key aspect of our method is query-adaptive…
- Findings of EMNLP 2024, 2024 — Recent advancements in prompt engineering strategies, such as Chain-of-Thought (CoT) and Self-Discover, have demonstrated significant potential in improving the reasoning abilities of Large Language Models (LLMs). However, these state-of-the-art (SOTA) prompting strategies rely on a single or fixed set of static seed reasoning modules, like "think step by step" or "break down this problem," intended to simulate…
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.