- WeCNLP 2021: Companies rely on large-scale surveys, interviews, and focus groups to gauge customer sentiment about their products or programs, which contain free-form text data rich in information. Researchers currently use a manual, time-consuming process that delays actionable insights. This paper presents a scalable solution where researchers can interact with a custom UI to annotate text data…
- WeCNLP 2021: Text-to-Text (T2T) denoising-pretraining-finetuning (DPF) paradigms (e.g., BERT, BART, GPT) have achieved great success in a wide range of encoding and decoding tasks in NLP. However, little has been explored on data-to-data (D2D) and data-to-text (D2T) tasks using DPF paradigms. This work fills this gap by investigating D2D and T2T denoising pretraining for D2T tasks. D2D and T2T DPF paradigms can leverage… (a generic sketch of the denoising-pretraining idea appears after this list).
- EMNLP 2021 Workshop on the Fifth Widening NLP (WiNLP): Building supervised targeted sentiment analysis models for a new target domain requires substantial annotation effort, since most datasets for this task are domain-specific. Domain adaptation for this task has two dimensions: the nature of targets and the opinion words used to describe sentiment towards the target. We present a data sampling strategy informed by domain differences across these two dimensions…
- ICNLSP 2021: A challenge for target-based sentiment analysis is that most datasets are domain-specific, and thus building supervised models for a new target domain requires substantial annotation effort. Domain adaptation for this task has two dimensions: the nature of the targets (e.g., entity types, properties associated with entities, or arbitrary spans) and the opinion words used to describe the sentiment towards…
- EMNLP 2021 Workshop on NLP for Conversational AI: Natural Language Generation (NLG) for task-oriented dialogue systems focuses on communicating specific content accurately, fluently, and coherently. While these attributes are crucial for a successful dialogue, it is also desirable to simultaneously accomplish specific stylistic goals, such as response length, point-of-view, descriptiveness, sentiment, formality, and empathy. In this work, we focus on stylistic… (a generic illustration of style-controlled NLG inputs follows this list).
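The T2T/D2T entry above refers to the denoising-pretraining-finetuning (DPF) paradigm, but the abstract is truncated before the paper's D2D/D2T objectives are described. The snippet below is therefore only a minimal, generic sketch of the span-corruption denoising objective used in text-to-text pretraining (in the spirit of BART/T5), not the paper's method; the function name, sentinel convention, and hyperparameters are illustrative assumptions.

```python
import random

SENTINEL = "<extra_id_{}>"  # T5-style sentinel tokens (illustrative convention)

def corrupt_spans(tokens, noise_density=0.15, mean_span_len=3, seed=0):
    """Build a (corrupted_input, target) pair by masking random token spans.

    Generic span-corruption denoising as used in text-to-text pretraining;
    NOT the specific D2D/D2T objective of the paper above.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, int(len(tokens) * noise_density))
    masked = [False] * len(tokens)
    while sum(masked) < n_to_mask:
        span_len = max(1, int(rng.expovariate(1 / mean_span_len)))
        start = rng.randrange(len(tokens))
        for i in range(start, min(start + span_len, len(tokens))):
            masked[i] = True

    corrupted, target = [], []
    sentinel_id, i = 0, 0
    while i < len(tokens):
        if masked[i]:
            sent = SENTINEL.format(sentinel_id)
            sentinel_id += 1
            corrupted.append(sent)   # input keeps a single sentinel per span
            target.append(sent)      # target spells out the masked tokens
            while i < len(tokens) and masked[i]:
                target.append(tokens[i])
                i += 1
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, target


if __name__ == "__main__":
    toks = "the quick brown fox jumps over the lazy dog".split()
    src, tgt = corrupt_spans(toks, seed=3)
    print("input :", " ".join(src))
    print("target:", " ".join(tgt))
```

In a full DPF recipe, pairs like these drive a sequence-to-sequence model's pretraining loss before the model is fine-tuned on the downstream task (here, D2T generation).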
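The stylistic-NLG entry is also truncated before any method is described, so the sketch below only illustrates one common, generic way to expose stylistic goals (length, formality, sentiment, etc.) to a seq2seq NLG model: prepending control tokens to a linearized meaning representation. The attribute names, token format, and `build_nlg_input` helper are hypothetical, not the paper's approach.

```python
from typing import Dict

def build_nlg_input(meaning_rep: Dict[str, str], style: Dict[str, str]) -> str:
    """Prepend style control tokens to a linearized meaning representation (MR).

    Illustrative only: the control-token scheme and MR linearization are
    assumptions, not taken from the paper above.
    """
    style_prefix = " ".join(f"<{k}={v}>" for k, v in sorted(style.items()))
    mr = " ; ".join(f"{slot} = {value}" for slot, value in meaning_rep.items())
    return f"{style_prefix} [MR] {mr}"

if __name__ == "__main__":
    mr = {"intent": "inform", "restaurant": "Green Bistro", "food": "vegan"}
    style = {"length": "short", "formality": "casual", "sentiment": "positive"}
    print(build_nlg_input(mr, style))
    # <formality=casual> <length=short> <sentiment=positive> [MR] intent = inform ; restaurant = Green Bistro ; food = vegan
```

A model trained on inputs like these can then be steered at inference time simply by changing the control tokens, which is one widely used way to pursue stylistic goals alongside content accuracy.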
Related content
- April 21, 2020: Bezos’s Shareholder Letter has become a must-read, along the lines of Warren Buffett’s letter to Berkshire Hathaway shareholders or the Bill & Melinda Gates Annual Letter.
- February 25, 2020: Two neural-network-based models are currently being evaluated in internal and public-facing applications.
- February 6, 2020: Papers investigate dialogue, question answering, self-learning, and more.
- January 30, 2020: Improvements come from a new transfer learning method and a newly released public data set.
- January 23, 2020: New "Mad Libs" technique for replacing words in individual sentences is grounded in metric differential privacy.
- January 21, 2020: Self-learning system uses customers’ rephrased requests as implicit error signals.