Customer-obsessed science
Research areas
November 20, 2025 | 4 min read
A new evaluation pipeline called FiSCo uncovers hidden biases and offers an assessment framework that evolves alongside language models.
Featured news
- NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023: In recent years, the field of natural language processing (NLP) has witnessed remarkable advancements driven by the development of large language models (LLMs). Various techniques, such as instruction tuning, have emerged as crucial approaches, enhancing LLMs' adaptability to new tasks guided by instructional prompts. Meanwhile, the phenomenon of memorization within LLMs has garnered considerable attention…
- EMNLP 2023: Statistical significance testing is used in natural language processing (NLP) to determine whether the results of a study or experiment are likely to be due to chance or whether they reflect a genuine relationship. A key step in significance testing is the estimation of a confidence interval, which is a function of sample variance. Sample variance calculation is straightforward when evaluating against ground truth…
- NeurIPS 2023 Workshop on Self-Supervised Learning — Theory and Practice, 2023: Pre-training has been an important ingredient in developing strong monocular depth estimation models in recent years. For instance, self-supervised learning (SSL) is particularly effective by alleviating the need for large datasets with dense ground-truth depth maps. However, despite these improvements, our study reveals that the later layers of the SOTA SSL method are actually suboptimal. By examining…
- 2023 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU): To boost training and adaptation of end-to-end (E2E) automatic speech recognition (ASR) models, several approaches that use paired speech-text input together with unpaired text input have emerged. They aim at improving model performance on rare words, personalisation, and the long tail. In this work, we present a systematic study of the impact of such training/adaptation and compare it to training with synthetic…
- EMNLP 2023: While recent studies have examined the abilities of large language models on various benchmark tasks, few have looked into the controllability of large language models on generation tasks. We present a systematic and extensive analysis of the controllability of large language models on ten benchmarks, including a new, simple yet challenging numerical planning benchmark with different granularities…
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.