Customer-obsessed science
-
November 28, 2025: Large language models are increasing the accuracy, reliability, and consistency of the product catalogue at scale.
Featured news
-
Interspeech 2022: In this work, we define barge-in verification as a supervised learning task where audio-only information is used to classify user spoken dialogue into true and false barge-ins. Following the success of pre-trained models, we use low-level speech representations from a self-supervised representation learning model for our downstream classification task. Further, we propose a novel technique to infuse lexical … (A sketch of this kind of downstream classification setup appears after this list.)
-
Interspeech 2022: In this work, we demonstrate a novel technique for automatically scaling over-the-air acoustic watermarks to maximize amplitude while remaining imperceptible to human listeners. These watermarks have been demonstrated in prior work to be robust to the indoor acoustic channel. However, they require careful calibration to ensure that they are (a) detectable by the device and (b) imperceptible to humans. While …
-
Interspeech 2022: We present a novel sub-8-bit quantization-aware training (S8BQAT) scheme for 8-bit neural network accelerators. Our method is inspired by Lloyd-Max compression theory with practical adaptations for a feasible computational overhead during training. With the quantization centroids derived from a 32-bit baseline, we augment the training loss with a Multi-Regional Absolute Cosine (MRACos) regularizer that aggregates … (See the quantization-aware training sketch after this list.)
-
ICML 2022: Technology improvements have made it easier than ever to collect diverse telemetry at high resolution from any cyber or physical system, for both monitoring and control. In the domain of monitoring, anomaly detection has become an important problem in many research areas, ranging from IoT and sensor networks to DevOps. These systems operate in real, noisy, and non-stationary environments. A fundamental question …
-
KDD 2022: We present results from a large-scale experiment on pretraining encoders with non-embedding parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the Natural Language Understanding (NLU) component of a virtual assistant system. Though we train using 70% spoken-form data, our teacher models perform comparably … (See the distillation sketch after this list.)
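As a rough illustration of the barge-in verification setup described above, here is a minimal sketch of binary classification on top of frozen self-supervised speech features. The choice of encoder (wav2vec 2.0 via torchaudio), the mean-pooling, and the classifier head are assumptions for illustration; the paper's actual model and its lexical-infusion technique may differ.

```python
# Hedged sketch: classify audio into true vs. false barge-ins using features
# from a frozen self-supervised encoder. Encoder and head are illustrative.
import torch
import torch.nn as nn
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE        # assumed SSL encoder (16 kHz input)
encoder = bundle.get_model().eval()                # frozen feature extractor
for p in encoder.parameters():
    p.requires_grad = False

classifier = nn.Sequential(                        # small downstream head
    nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2)
)

def classify_barge_in(waveform: torch.Tensor) -> torch.Tensor:
    """waveform: (batch, samples) at bundle.sample_rate."""
    with torch.no_grad():
        layers, _ = encoder.extract_features(waveform)
    feats = layers[-1].mean(dim=1)                 # mean-pool frames -> (batch, 768)
    return classifier(feats)                       # logits over {false, true} barge-in

# Training would minimize cross-entropy on labeled true/false barge-in audio:
# loss = nn.functional.cross_entropy(classify_barge_in(batch_audio), labels)
```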
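For the S8BQAT abstract, the excerpt only states the general pattern: quantization-aware training whose task loss is augmented with a regularizer that pulls weights toward centroids derived from a 32-bit baseline. The nearest-centroid penalty below is an illustrative stand-in for that pattern, not the paper's MRACos formulation, which the excerpt does not fully specify.

```python
# Hedged sketch: add a centroid-attraction regularizer to the training loss.
# This simple L2-to-nearest-centroid term is NOT the MRACos regularizer.
import torch

def centroid_regularizer(weights: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Penalize distance of each weight to its nearest quantization centroid.

    weights:   tensor of model weights (any shape)
    centroids: 1-D tensor of K centroid values (e.g. K = 2**bits levels)
    """
    flat = weights.flatten().unsqueeze(1)          # (N, 1)
    dist = (flat - centroids.unsqueeze(0)).abs()   # (N, K) distances to centroids
    nearest = dist.min(dim=1).values               # distance to closest centroid
    return nearest.pow(2).mean()

# During training the regularizer augments the task loss:
# total_loss = task_loss + lam * sum(
#     centroid_regularizer(p, centroids) for p in model.parameters()
# )
```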
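The KDD abstract describes distilling large pretrained teacher encoders into much smaller students for NLU. The temperature-scaled soft-target loss below is the standard distillation recipe, shown here as an assumed illustration; the paper's exact objective and data mixture may differ.

```python
# Hedged sketch: standard teacher-student distillation loss (Hinton-style),
# blending soft teacher targets with hard supervised labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend temperature-scaled KL to the teacher with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                    # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels) # supervised NLU labels
    return alpha * soft + (1 - alpha) * hard
```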
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.