-
EMNLP 2023. In executable task-oriented semantic parsing, the system aims to translate users’ utterances in natural language to machine-interpretable programs (API calls) that can be executed according to pre-defined API specifications. With the popularity of Large Language Models (LLMs), in-context learning offers a strong baseline for such scenarios, especially in data-limited regimes (Hu et al., 2022; Shin et al
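An in-context-learning baseline of the kind the abstract describes can be sketched as a few-shot prompt that maps an utterance to an API call. This is a minimal illustration, not the paper's setup; the exemplars and API names (`CreateAlarm`, `GetWeather`) are invented for the example.

```python
# Minimal few-shot prompt for executable semantic parsing:
# each exemplar pairs a natural-language utterance with an API call,
# and the LLM is asked to complete the call for a new utterance.
# Exemplars and API names below are illustrative placeholders.

EXEMPLARS = [
    ("set an alarm for 7 am",
     'CreateAlarm(time="07:00")'),
    ("what's the weather in Seattle tomorrow",
     'GetWeather(location="Seattle", date="tomorrow")'),
]

def build_prompt(utterance: str) -> str:
    """Assemble a few-shot prompt an LLM would complete with an API call."""
    lines = ["Translate the utterance into an API call."]
    for utt, call in EXEMPLARS:
        lines.append(f"Utterance: {utt}")
        lines.append(f"Call: {call}")
    lines.append(f"Utterance: {utterance}")
    lines.append("Call:")
    return "\n".join(lines)

print(build_prompt("remind me to water the plants at noon"))
```

The completed prompt would then be sent to an LLM, whose completion is parsed and validated against the pre-defined API specification before execution.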
-
EMNLP 2023. Instruction-based multitasking has played a critical role in the success of large language models (LLMs) in multi-turn dialog applications. While publicly available LLMs have shown promising performance, when exposed to complex instructions with multiple constraints they lag behind state-of-the-art models like ChatGPT. In this work, we hypothesize that the availability of large-scale complex demonstrations
-
EMNLP 2023. Document image classification is different from plain-text document classification and consists of classifying a document by understanding its content and structure, as in forms, emails, and similar documents. We show that the only existing dataset for this task (Lewis et al., 2006) has several limitations, and we introduce two newly curated multilingual datasets (WIKI-DOC and MULTIEURLEX
-
EMNLP 2023. As large language models (LLMs) have shown effectiveness with different prompting methods, such as Chain of Thought and Program of Thought, we find that these methods complement each other well on math reasoning tasks. In this work, we propose XoT, an integrated problem-solving framework that prompts LLMs with diverse reasoning thoughts. For each question, XoT always begins with selecting
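The integration idea behind XoT can be sketched as a loop that tries one reasoning method, checks the answer, and switches to another method if the check fails. This is a hedged sketch of the general pattern only; the method names, the stubbed `solve_with`, and the `verify` check are placeholders, not the paper's implementation.

```python
# Sketch of method-switching over diverse reasoning thoughts:
# try a prompting method, verify its answer, and fall back to the
# next method when verification fails. All components are stubs.

def solve_with(method: str, question: str) -> str:
    # Placeholder for an LLM call prompted with the given method.
    answers = {"chain_of_thought": "42", "program_of_thought": "43"}
    return answers[method]

def verify(answer: str) -> bool:
    # Placeholder verifier (e.g., executing generated code or
    # asking the model to self-check its answer).
    return answer == "43"

def xot_solve(question: str,
              methods=("chain_of_thought", "program_of_thought")):
    """Return the first (method, answer) pair that passes verification."""
    answer = ""
    for method in methods:
        answer = solve_with(method, question)
        if verify(answer):
            return method, answer
    # If nothing verifies, return the last attempt.
    return methods[-1], answer

print(xot_solve("What is 6 * 7 + 1?"))
```

In a real system, `solve_with` would issue a prompted LLM call and `verify` would run the generated program or apply a self-evaluation step.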
-
EMNLP 2023. Text classifiers are an indispensable tool for machine learning practitioners, but adapting them to new classes is expensive. To reduce the cost of new classes, previous work exploits class descriptions and/or labels from existing classes. However, these approaches leave a gap in the model development cycle as they support either zero- or few-shot learning but not both. Existing classifiers either do not
Related content
-
November 29, 2022: Using the prior model to rerank outputs of the new model increases backward compatibility.
-
November 17, 2022: In 2022, the Alexa Trustworthy AI team helped organize a workshop at NAACL and a special session at Interspeech.
-
November 16, 2022: From physical constraints to acoustic challenges, learn how Amazon collaborated with NASA and Lockheed Martin to get Alexa to work in space.
-
November 15, 2022: Models that map spoken language to objects in an image would make it easier for customers to communicate with multimodal devices.
-
November 09, 2022: At Amazon, he develops machine learning models to help keep Amazon stores safe and trustworthy for customers and selling partners.
-
October 13, 2022: Fellowships will provide support to pursue research in the fields of artificial intelligence and robotics.