- Findings of EMNLP 2024: Augmenting Large Language Models (LLMs) with information retrieval capabilities (i.e., Retrieval-Augmented Generation (RAG)) has proven beneficial for knowledge-intensive tasks. However, understanding users’ contextual search intent when generating responses is an understudied topic for conversational question answering (QA). This conversational extension leads to additional concerns when compared to single-turn…
- Findings of EMNLP 2024: Large Language Models (LLMs) are widely used in both industry and academia for various tasks, yet evaluating the consistency of generated text responses continues to be a challenge. Traditional metrics like ROUGE and BLEU show a weak correlation with human judgment. More sophisticated metrics using Natural Language Inference (NLI) have shown improved correlations but are complex to implement, require domain-specific…
- 2024: Large Language Models (LLMs) have the promise to revolutionize computing broadly, but their complexity and extensive training data also expose significant privacy vulnerabilities. One of the simplest privacy risks associated with LLMs is their susceptibility to membership inference attacks (MIAs), wherein an adversary aims to determine whether a specific data point was part of the model’s training set.
- CIKM 2024 Workshop on Generative AI for E-commerce: We introduce VARM, a variant relationship matcher strategy, to identify pairs of variant products in e-commerce catalogs. Traditional definitions of entity resolution are concerned with whether product mentions refer to the same underlying product. However, this fails to capture product relationships that are critical for e-commerce applications, such as having similar, but not identical, products listed…
- RecSys 2024: Improving search functionality poses challenges such as data scarcity for model training, metadata enrichment for comprehensive document indexing, and the labor-intensive manual annotation for evaluation. Traditionally, iterative methods relying on human annotators and customer feedback have been used. However, recent advancements in Large Language Models (LLMs) offer new solutions. This paper focuses on…
Related content
- July 08, 2021: Amazon Visiting Academic Barbara Poblete helps to build safer, more-diverse online communities — and to aid disaster response.
- July 02, 2021: Giving a neural generation model “control knobs” enables modulation of the content of generated language.
- July 01, 2021: Methods share a two-stage training process in which a model learns a representation from audio data, then learns to predict that representation from text.
- June 24, 2021: The organization focuses on furthering the state of the art on discourse- and dialogue-related technologies.
- June 17, 2021: Combining classic signal processing with deep learning makes the method efficient enough to run on a phone.
- June 16, 2021: Watch the replay of the June 15 discussion featuring five Amazon scientists.