Customer-obsessed science
Research areas
- December 1, 2025 | 8 min read | "Network language models" will coordinate complex interactions among intelligent components, computational infrastructure, access points, data centers, and more.
Featured publications
- WeCNLP 2021 | Companies rely on large-scale surveys, interviews, and focus groups to gauge customer sentiment about their products or programs; these contain free-form text data rich in information. Researchers currently use a manual, time-consuming process that delays actionable insights. This paper presents a scalable solution in which researchers interact with a custom UI to annotate text data…
- WACV 2022 | Proxy-based metric learning losses are superior to pair-based losses because of their fast convergence and low training complexity. However, existing proxy-based losses focus on learning class-discriminative features while overlooking the commonalities shared across classes, which are potentially useful in describing and matching samples. Moreover, they ignore the implicit hierarchy of categories in real-world… A generic proxy-based loss is sketched after this list.
- NeurIPS 2021 Workshop on Privacy in Machine Learning | Label inference was recently introduced as the problem of reconstructing the ground-truth labels of a private dataset from just the (possibly perturbed) cross-entropy loss scores evaluated at carefully crafted prediction vectors. In this paper, we generalize this result to provide necessary and sufficient conditions under which label inference is possible from a broad class of loss functions. We show that… A toy label-inference example is sketched after this list.
- WeCNLP 2021 | Text-to-text (T2T) denoising-pretraining-finetuning (DPF) paradigms (e.g., BERT, BART, GPT) have achieved great success in a wide range of encoding and decoding tasks in NLP. However, little has been explored on data-to-data (D2D) and data-to-text (D2T) tasks using DPF paradigms. This work fills that gap by investigating D2D and T2T denoising pretraining for D2T tasks. D2D and T2T DPF paradigms can leverage… A sketch of denoising-pair construction follows this list.
- EMNLP 2021 Workshop on the Fifth Widening NLP (WiNLP) | Building supervised targeted sentiment analysis models for a new target domain requires substantial annotation effort, since most datasets for this task are domain-specific. Domain adaptation for this task has two dimensions: the nature of the targets and the opinion words used to describe sentiment toward them. We present a data sampling strategy informed by domain differences across these two dimensions…
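As a rough illustration of the proxy-based setup contrasted with pair-based losses in the WACV item above, the sketch below implements a generic Proxy-NCA-style loss in PyTorch. It is not the loss proposed in that paper; the `ProxyNCALoss` class and its `scale` parameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class ProxyNCALoss(torch.nn.Module):
    """A generic proxy-based metric learning loss (Proxy-NCA style, illustrative).

    One learnable proxy per class: each embedding is pulled toward its class
    proxy and pushed away from the others, so no pair or triplet mining is
    required -- the property contrasted with pair-based losses above.
    """
    def __init__(self, num_classes: int, embed_dim: int, scale: float = 32.0):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarities between L2-normalized embeddings and proxies.
        emb = F.normalize(embeddings, dim=1)
        prox = F.normalize(self.proxies, dim=1)
        sims = self.scale * emb @ prox.t()        # (batch, num_classes)
        # Softmax over proxies, i.e., cross-entropy against the true-class proxy.
        return F.cross_entropy(sims, labels)

# Example usage with random embeddings standing in for a backbone's output.
loss_fn = ProxyNCALoss(num_classes=10, embed_dim=64)
loss = loss_fn(torch.randn(8, 64), torch.randint(0, 10, (8,)))
```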
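The label-inference setting in the NeurIPS workshop item can be illustrated with a toy case: if the crafted prediction vector yields a distinct cross-entropy value for every class, a single reported loss score identifies the ground-truth label. This is a minimal sketch under that assumption, not the paper's general characterization of loss functions.

```python
import numpy as np

# Craft one prediction vector whose per-class cross-entropy values are all
# distinct; a single reported loss score then uniquely reveals the label.
num_classes = 5
logits = np.arange(num_classes, dtype=float)        # distinct, crafted logits
probs = np.exp(logits) / np.exp(logits).sum()       # softmax prediction vector

def cross_entropy(p, label):
    return -np.log(p[label])

# Precompute the loss each candidate label would produce for this vector.
loss_to_label = {cross_entropy(probs, k): k for k in range(num_classes)}

# Suppose only the loss score of a private example is observed...
true_label = 3
observed_loss = cross_entropy(probs, true_label)

# ...then the label is recovered by matching the score against the table.
closest = min(loss_to_label, key=lambda l: abs(l - observed_loss))
print(loss_to_label[closest])   # -> 3
```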
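For the denoising-pretraining item, the snippet below sketches how corrupted-input/clean-target pairs might be built from a linearized data record. The token-level masking scheme and the `attribute = value` linearization are assumptions for illustration, not the paper's method.

```python
import random

def make_denoising_pair(tokens, mask_prob=0.15, mask_token="<mask>"):
    """Build one (corrupted input, clean target) pair for denoising
    pretraining: the model is trained to reconstruct the original
    sequence from the masked version."""
    corrupted = [mask_token if random.random() < mask_prob else t for t in tokens]
    return " ".join(corrupted), " ".join(tokens)

# A data record linearized as "attribute = value" fields (assumed format),
# so the same denoising objective can be applied to structured data (D2D).
record = "name = Aromi | eatType = coffee shop | area = city centre".split()
source, target = make_denoising_pair(record)
print(source)
print(target)
```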
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.