Customer-obsessed science
Research areas
- December 1, 2025 · 8 min read — “Network language models” will coordinate complex interactions among intelligent components, computational infrastructure, access points, data centers, and more.
- November 20, 2025 · 4 min read
- October 20, 2025 · 4 min read
- October 14, 2025 · 7 min read
Featured news
- EMNLP 2021 Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) — Knowledge Distillation (KD) offers a natural way to reduce the latency and memory/energy usage of the massive pretrained models that have come to dominate Natural Language Processing (NLP) in recent years. While numerous sophisticated variants of KD algorithms have been proposed for NLP applications, the key factors underpinning optimal distillation performance are often confounded and remain unclear. We…
- EMNLP 2021 Sixth Conference on Machine Translation (WMT21) — Automatic post-editing (APE) models are used to correct machine translation (MT) system outputs by learning from human post-editing patterns. We present the system used in our submission to the WMT’21 Automatic Post-Editing (APE) English-German (En-De) shared task. We leverage the state-of-the-art MT system (Ng et al., 2019) for this task. For further improvements, we adapt the MT model to the task domain…
- ICMI 2021 — Intelligent Voice Assistant (IVA) systems, such as Alexa, Google Assistant, and Siri, allow us to interact with them using just voice commands. IVA systems can seek voice feedback directly from customers right after an interaction, simply by asking a question such as “Did that answer your question?”. We refer to this IVA-elicited feedback as crowdsourced voice feedback (CVF). In this paper, we…
- ASRU 2021 — Virtual assistants such as Google Assistant and Amazon Alexa host thousands of voice applications (skills) that handle a very large and diverse array of customer utterances. However, the number of supported skills may be much lower in some locales, particularly in countries other than the United States. Accordingly, customer utterances handled in a popular locale may go unclaimed in another locale…
- EMNLP 2021 — Sequence labeling aims to predict a fine-grained sequence of labels for text. However, this formulation hinders the effectiveness of supervised methods due to the lack of token-level annotated data, a problem that is exacerbated across a diverse range of languages. In this work, we explore multilingual sequence labeling with minimal supervision, using a single unified model for multiple languages. Specifically…
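The SustaiNLP item above concerns knowledge distillation, in which a small student model is trained to match a large teacher's softened output distribution. A minimal sketch of the classic soft-target loss (in the spirit of Hinton et al.'s formulation, not the specific algorithm studied in the paper; the temperature and logits here are illustrative assumptions):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T yields a flatter distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))

# The loss is zero when the student exactly matches the teacher,
# and positive otherwise.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))      # → 0.0
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)  # → True
```

In practice this soft-target term is combined with the ordinary cross-entropy loss on ground-truth labels, with a weighting coefficient chosen on validation data.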
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.
View all