- ECCV 2024 Workshop on Unlearning and Model Editing, 2024. In recent years, Vision Language Models (VLMs) have achieved significant advancements due to the success of large language models. The common strategy for aligning vision and language models involves a two-step process: an alignment (or pretraining) stage and an instruction tuning stage. During the alignment stage, a projection module is trained to map image embeddings into the language space using a paired … (see the projection-module sketch after this list)
- Language Resources and Evaluation, 2024. In Artificial Intelligence research, perspectivism is an approach to machine learning that aims at leveraging data annotated by different individuals in order to model varied perspectives that influence their opinions and world view. We present the first survey of datasets and methods relevant to perspectivism in Natural Language Processing (NLP). We review datasets in which individual annotator labels …
- KDD 2024 Workshop on Talent and Management Computing, 2024. Qualitative data collection and analysis approaches, such as those employing interviews and focus groups, provide rich insights into customer attitudes, sentiment, and behavior. However, manually analyzing qualitative data requires extensive time and effort to identify relevant topics and thematic insights. This study proposes a novel approach to address this challenge by leveraging Retrieval Augmented …
- KDD 2024 Workshop on Generative AI for Recommender Systems and Personalization, 2024. Retrieval Augmented Generation (RAG) is a technique used to augment Large Language Models (LLMs) with contextually relevant, time-critical, or domain-specific information without altering the underlying model parameters. However, constructing RAG systems that can effectively synthesize information from a large and diverse set of documents remains a significant challenge. We introduce a novel data-centric … (see the RAG sketch after this list)
- KDD 2024 Workshop on GenAI Evaluation, 2024. Large language models (LLMs) have demonstrated remarkable capabilities in natural language processing tasks. However, their practical application in high-stakes domains, such as fraud and abuse detection, remains an area that requires further exploration. Existing applications often narrowly focus on specific tasks like toxicity or hate speech detection. In this paper, we present a comprehensive benchmark …
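The first entry above describes the standard VLM alignment recipe, in which only a projection module is trained to map frozen image-encoder embeddings into the language model's embedding space. Below is a minimal, hypothetical PyTorch sketch of such a module; the embedding dimensions, MLP shape, and training setup are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# an alignment-stage projection module mapping frozen vision-encoder
# embeddings into a frozen language model's embedding space.
import torch
import torch.nn as nn

VISION_DIM = 1024   # assumed output dim of a frozen image encoder
LLM_DIM = 4096      # assumed hidden size of the frozen language model

class VisionProjector(nn.Module):
    """Two-layer MLP that projects image patch embeddings to LLM-space tokens."""
    def __init__(self, vision_dim: int = VISION_DIM, llm_dim: int = LLM_DIM):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, image_embeds: torch.Tensor) -> torch.Tensor:
        # image_embeds: (batch, num_patches, vision_dim)
        # returns:      (batch, num_patches, llm_dim), ready to be prepended
        #               to the caption's text-token embeddings
        return self.proj(image_embeds)

# During alignment, only the projector is updated; the vision encoder and LLM
# stay frozen, and the loss is the LLM's next-token loss on the paired caption.
projector = VisionProjector()
optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-3)
dummy_image_embeds = torch.randn(2, 256, VISION_DIM)
visual_tokens = projector(dummy_image_embeds)   # shape: (2, 256, LLM_DIM)
```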
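Two of the entries above center on Retrieval Augmented Generation. The sketch below illustrates the basic RAG pattern they describe: retrieve relevant documents and splice them into the prompt without altering model parameters. The TF-IDF retriever, toy corpus, and prompt format are assumptions for illustration, not any of the papers' systems.

```python
# Minimal RAG sketch (illustrative, not the papers' systems): retrieve the
# documents most relevant to a query and build a context-grounded prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Refund requests must be filed within 30 days of delivery.",
    "Fraudulent accounts often reuse the same device fingerprint.",
    "RAG augments an LLM with retrieved context at inference time.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)   # index the corpus once

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble a prompt grounded in retrieved context; the LLM is unchanged."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG add domain-specific information?"))
```

In a production system the retriever would typically be a dense embedding index over a large document collection, and the assembled prompt would be passed to an LLM's generation call; both are deliberately left out of this sketch.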