- ECIR 2023: AI assistants are gradually becoming embedded in our lives, utilized for everyday tasks like shopping or music. In addition to the everyday utilization of AI assistants, many users engage them with playful shopping requests, gauging their ability to understand – or simply seeking amusement. However, these requests are often not responded to in the same playful manner, causing dissatisfaction and even …
- EACL 2023: This work focuses on in-context data augmentation for intent detection. Having found that augmentation via in-context prompting of large pre-trained language models (PLMs) alone does not improve performance, we introduce a novel approach based on PLMs and pointwise V-information (PVI), a metric that can measure the usefulness of a datapoint for training a model. Our method first fine-tunes a PLM on a … (a brief PVI sketch follows this list)
- Frontiers in Artificial Intelligence 2023: Communication is a dynamic process through which interlocutors adapt to each other. In the development of conversational agents, this core aspect has been put aside for several years, since the main challenge was to obtain conversational neural models able to produce utterances and dialogues that, at least at the surface level, are human-like. Now that this milestone has been achieved, the importance of paying …
- The Web Conference 2023: Interacting with voice assistants, such as Amazon Alexa, to aid in day-to-day tasks has become a ubiquitous phenomenon in modern-day households. These voice assistants often have screens to provide visual content (e.g., images, videos) to their users. There is an increasing trend of users shopping or searching for products using these devices; yet these voice assistants do not support commands or queries …
- ICASSP 2023: Recent advances in cross-lingual commonsense reasoning (CSR) are facilitated by the development of multilingual pre-trained models (mPTMs). While mPTMs show the potential to encode commonsense knowledge for different languages, transferring commonsense knowledge learned in a large-scale English corpus to other languages is challenging. To address this problem, we propose an attention-based Cross-LIngual Commonsense …
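The EACL 2023 entry above scores candidate training examples with pointwise V-information (PVI). As a rough illustration only, the sketch below follows the standard PVI definition from the V-information literature: compare the log-probability a fine-tuned model assigns to the gold label given the input against the log-probability a "null" model assigns given an empty input. The `pvi` helper and the numeric probabilities are hypothetical; the paper's actual fine-tuning and filtering pipeline is not reproduced here.

```python
import math

# Pointwise V-information of an example (x, y):
#   PVI(x -> y) = log2 p_g(y | x) - log2 p_g'(y | empty input)
# where g is a model fine-tuned on real (input, label) pairs and g' is a model
# fine-tuned on (empty input, label) pairs. Higher PVI = the input carries more
# usable information about the label.

def pvi(log2_prob_with_input: float, log2_prob_without_input: float) -> float:
    """Both arguments are log2-probabilities of the gold label y."""
    return log2_prob_with_input - log2_prob_without_input

# Hypothetical numbers: the input-conditioned model puts p = 0.9 on the gold
# intent, while the label-prior (empty-input) model puts p = 0.2 on it.
example_pvi = pvi(math.log2(0.9), math.log2(0.2))
print(f"PVI = {example_pvi:.3f} bits")  # positive -> the input is informative

# In the augmentation setting described above, generated examples could then be
# filtered by keeping only those whose PVI exceeds a chosen threshold.
```

In that reading, PVI acts as a per-example usefulness filter on synthetic data rather than a change to the model architecture itself.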
Related content
- January 25, 2021: New approach to few-shot learning improves on state of the art by combining prototypical networks with data augmentation.
- January 21, 2021: Amazon principal applied scientist Yang Liu on the frontiers of speech and dialogue.
- January 13, 2021: In experiments, multilingual models outperform monolingual models.
- December 18, 2020: Researchers propose a method to automatically generate training data for Alexa by identifying cases in which customers rephrase unsuccessful requests.
- December 14, 2020: Parallel speech recognizers, language ID, and translation models geared to conversational speech are among the modifications that make Live Translation possible.
- December 03, 2020: Scientists are recognized for their contributions to conversational understanding systems.