Customer-obsessed science
December 5, 2025 · 6 min read
A multiagent architecture separates data perception, tool knowledge, execution history, and code generation, enabling ML automation that works with messy, real-world inputs.
Featured news
Interspeech 2021 Workshop on Speech Synthesis (SSW11), 2021
We propose a novel Multi-Scale Spectrogram (MSS) modelling approach to synthesise speech with improved coarse- and fine-grained prosody. We present a generic multi-scale spectrogram prediction mechanism where the system first predicts coarser-scale mel-spectrograms that capture the suprasegmental information in speech, and later uses these coarser-scale mel-spectrograms to predict finer-scale mel-spectrograms.
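The coarse scale in a scheme like this can be obtained by pooling a fine-scale mel-spectrogram over time. The sketch below is illustrative only, assuming a simple average-pooling definition of "coarser scale" (the function name and pooling factor are not from the paper):

```python
import numpy as np

def to_coarser_scale(mel: np.ndarray, factor: int) -> np.ndarray:
    """Average-pool a mel-spectrogram along the time axis to a coarser scale.

    `mel` has shape (n_mels, n_frames); smoothing over `factor` frames keeps
    the slow-moving, suprasegmental (prosodic) structure of the signal.
    """
    n_mels, n_frames = mel.shape
    trimmed = n_frames - n_frames % factor  # drop any ragged tail frames
    return mel[:, :trimmed].reshape(n_mels, trimmed // factor, factor).mean(axis=2)

# Toy fine-scale mel-spectrogram: 80 mel bins, 64 frames.
fine = np.random.rand(80, 64)
coarse = to_coarser_scale(fine, factor=8)
print(coarse.shape)  # (80, 8): one coarse frame per 8 fine frames
```

A coarse-to-fine predictor would then condition its fine-scale output on `coarse`, so prosodic structure is fixed before spectral detail is generated.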
NAACL 2021 TrustNLP Workshop on Trustworthy Natural Language Processing, 2021
Many existing approaches for interpreting text classification models focus on providing importance scores for parts of the input text, such as words, but without a way to test or improve the interpretation method itself. This has the effect of compounding the problem of understanding or building trust in the model, with the interpretation method itself adding to the opacity of the model. Further, importance …
ICLR 2021 Workshop on Robust and Reliable Machine Learning in the Real World, 2021
Goal-oriented dialogue systems in real-world environments often encounter noisy data. In this work, we investigate how robust these systems are to noisy data. Specifically, our analysis considers intent classification (IC) and slot labeling (SL) models that form the basis of most dialogue systems. We collect a test-suite for six common phenomena found in live human-to-bot conversations (abbreviations, casing, …).
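Noise phenomena such as abbreviations and casing can be simulated by perturbing clean utterances before feeding them to an IC/SL model. A minimal sketch, with a made-up abbreviation table (the paper's actual test-suite is collected from live conversations, not generated this way):

```python
# Illustrative abbreviation table; not the paper's data.
ABBREVIATIONS = {"please": "pls", "thanks": "thx", "tomorrow": "tmrw"}

def perturb_casing(utterance: str) -> str:
    """Simulate noisy casing (e.g. all-lowercase typing) seen in live traffic."""
    return utterance.lower()

def perturb_abbreviations(utterance: str) -> str:
    """Replace words with common chat abbreviations where one is known."""
    return " ".join(ABBREVIATIONS.get(w.lower(), w) for w in utterance.split())

clean = "Please book a flight to Boston tomorrow"
noisy = perturb_abbreviations(perturb_casing(clean))
print(noisy)  # pls book a flight to boston tmrw
```

Comparing model predictions on `clean` versus `noisy` gives a per-phenomenon robustness measurement.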
ICML 2021 Workshop on Machine Learning for Data: Automated Creation, Privacy, Bias, 2021
Recent advances in deep learning have drastically improved performance on many Natural Language Understanding (NLU) tasks. However, the data used to train NLU models may contain private information such as addresses or phone numbers, particularly when drawn from human subjects. It is desirable that underlying models do not expose private information contained in the training data. Differentially Private …
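The abstract is cut off before describing the method, so as general background only: differential privacy bounds what any single training example can reveal, typically by adding calibrated noise. A single-query sketch of the standard Gaussian mechanism (not the paper's training procedure):

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release `value` with (epsilon, delta)-differential privacy by adding
    Gaussian noise scaled to the query's sensitivity."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

rng = np.random.default_rng(0)
true_count = 42  # e.g. how many training utterances contain some rare token
private_count = gaussian_mechanism(true_count, sensitivity=1.0,
                                   epsilon=1.0, delta=1e-5, rng=rng)
```

Training-time variants apply the same idea to gradients rather than to a single released statistic.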
ICML 2021 Workshop on Automated Learning (AutoML), 2021
We consider the problem of repeated hyperparameter and neural architecture search (HNAS). We propose an extension of Successive Halving that leverages information gained in previous HNAS problems with the goal of saving computational resources. We empirically demonstrate that our solution is robust to negative transfer and drastically decreases cost while maintaining accuracy. Our method is significantly …
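For context, the base procedure the paper extends can be sketched as follows. This is plain Successive Halving with a toy objective; the paper's contribution, warm-starting from previous HNAS problems, is not shown:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2, rounds=3):
    """Evaluate all configs at a small budget, keep the top 1/eta by score,
    then repeat with eta times the budget (standard Successive Halving)."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Toy objective: each "config" is a learning rate; scores peak near lr = 0.1.
def evaluate(lr, budget):
    return -abs(lr - 0.1) + 0.01 * budget

best = successive_halving([0.001, 0.01, 0.1, 0.5, 1.0], evaluate, rounds=3)
print(best)  # 0.1
```

Because weak configurations are eliminated at small budgets, most of the compute is spent on the most promising candidates.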
Collaborations
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.