Customer-obsessed science
- November 20, 2025 · 4 min read: A new evaluation pipeline called FiSCo uncovers hidden biases and offers an assessment framework that evolves alongside language models.
Featured news
- ICASSP 2023: Attention-based contextual biasing approaches have shown significant improvements in the recognition of generic and/or personal rare words in end-to-end automatic speech recognition (E2E ASR) systems such as neural transducers. These approaches employ cross-attention to bias the model toward specific contextual entities injected into the model as bias phrases. Prior approaches typically relied on subword encoders …
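The cross-attention biasing the abstract describes can be sketched in a few lines; this is a toy illustration with random embeddings and invented dimensions, not the published model — each acoustic frame attends over the bias-phrase embeddings, and the resulting context vector is added back to the encoder state:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_bias(frames, bias_embs):
    """Cross-attention: each frame (query) attends over bias-phrase
    embeddings (keys/values); the context is added to the frame."""
    scores = frames @ bias_embs.T / np.sqrt(frames.shape[-1])  # (T, N)
    weights = softmax(scores, axis=-1)                         # (T, N)
    context = weights @ bias_embs                              # (T, D)
    return frames + context  # biased encoder states

rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 8))      # T=5 frames, embedding dim D=8
bias_embs = rng.normal(size=(3, 8))   # 3 bias phrases
out = contextual_bias(frames, bias_embs)
```

In a real system the queries, keys, and values would each pass through learned projections; the sketch omits them for brevity.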
- ICASSP 2023: End-to-end ASR models trained on large amounts of data tend to be implicitly biased toward the language semantics of the training data. Internal language model estimation (ILME) has been proposed to mitigate this bias for autoregressive models such as attention-based encoder-decoders and RNN-T. Typically, ILME is performed by modularizing the acoustic and language components of the model architecture and eliminating …
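At decoding time, ILME typically amounts to a log-score combination: subtract the estimated internal LM score from the ASR score before adding an external LM score. A minimal sketch with illustrative weights (the specific values are assumptions, not from the paper):

```python
import math

def ilme_fused_score(asr_logp, ilm_logp, ext_lm_logp,
                     ilm_weight=0.2, lm_weight=0.4):
    """ILME-style fusion: discount the internal LM's contribution,
    then add the external LM score (weights are illustrative)."""
    return asr_logp - ilm_weight * ilm_logp + lm_weight * ext_lm_logp

score = ilme_fused_score(asr_logp=-1.0, ilm_logp=-2.0, ext_lm_logp=-1.5)
# -1.0 - 0.2*(-2.0) + 0.4*(-1.5) = -1.2
```

The key step eliminated abstracts over here is *how* the internal LM log-probability is estimated from the model itself, which is what the paper's modularization addresses.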
- Journal of Manufacturing Systems 2023: Localizing defects in products is a critical component of industrial pipelines in manufacturing, retail, and many other industries, ensuring the consistent delivery of high-quality products. Automated anomaly-localization systems leveraging computer vision have the potential to replace laborious and subjective manual inspection of products. Recently, there have been tremendous efforts in this research domain …
- ICLR 2023: This work studies the threat of adversarial attacks on multivariate probabilistic forecasting models and viable defense mechanisms. Our studies discover a new attack pattern that degrades the forecast of a target time series by making strategic, sparse (imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we have …
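The attack's constraint — sparse edits to *other* series, never the target — can be made concrete with a toy perturbation routine. This is an illustrative sketch of the sparsity budget only, not the paper's optimization-based attack:

```python
import numpy as np

def sparse_perturb(history, target_idx, budget=2, eps=0.5, seed=0):
    """Nudge at most `budget` past observations of non-target series
    by +/- eps; the target series itself is left untouched."""
    rng = np.random.default_rng(seed)
    perturbed = history.copy()
    n_series, n_steps = history.shape
    others = [i for i in range(n_series) if i != target_idx]
    for _ in range(budget):
        i = rng.choice(others)          # pick a non-target series
        t = rng.integers(n_steps)       # pick a past time step
        perturbed[i, t] += eps * rng.choice([-1.0, 1.0])
    return perturbed

clean = np.zeros((3, 4))                    # 3 series, 4 past steps
attacked = sparse_perturb(clean, target_idx=0)
```

A real attack would choose the entries and signs to maximize forecast error on the target series rather than at random.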
- ICLR 2023 Workshop on Deep Learning for Code (DL4C): Code understanding and generation require learning the mapping between human and programming languages. Because human and programming languages differ in vocabulary, semantics, and syntax, it is challenging for an autoregressive model to generate a sequence of tokens that is both semantically correct (i.e., carries the right meaning) and syntactically correct (i.e., in the right sequence order). Inspired by this …
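The syntactic half of that correctness requirement is mechanically checkable, which is part of what makes it a useful training signal. A minimal sketch, using Python's own parser as a stand-in validity check on generated code:

```python
import ast

def is_syntactically_valid(code: str) -> bool:
    """Return True if the generated token sequence parses as Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

ok = is_syntactically_valid("def f(x):\n    return x + 1\n")   # valid
bad = is_syntactically_valid("def f(x) return x")              # invalid
```

Semantic correctness, by contrast, has no such cheap oracle and typically requires test execution or human judgment.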
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.