Improved training strategies for end-to-end speech recognition in digital voice assistants
The speech recognition training data for digital voice assistants is dominated by wake-words. Training end-to-end (E2E) speech recognition models without careful attention to such data results in sub-optimal performance, as models prioritize learning wake-words. To address this problem, we propose a novel discriminative initialization strategy that introduces a regularization term to penalize the model for hallucinating wake-words during early phases of training. We also explore other training strategies, such as multi-task learning with listen-attend-spell (LAS), label smoothing via probabilistic modelling of silence, and the use of multiple pronunciations, and show how they can be combined with the proposed initialization technique. In addition, we show the connection between the cost function of the proposed discriminative initialization technique and the minimum word error rate (MWER) criterion. We evaluate our methods on two E2E ASR systems, a phone-based system and a word-piece-based system, trained on 6,500 hours of Alexa’s Indian English speech corpus. We show that the proposed techniques yield a 20% word error rate reduction for the phone-based system and 6% for the word-piece-based system compared to the corresponding baselines trained on the same data.
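To make the core idea concrete, the following is a minimal sketch of a regularized training loss of the kind the abstract describes: standard per-token cross-entropy plus a penalty on the probability mass the model assigns to the wake-word token at positions whose reference label is not the wake-word. The function names, token ids, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def regularized_loss(logits_seq, targets, wake_id, lam=1.0):
    """Mean cross-entropy over the sequence, plus lam times the mean
    probability assigned to the wake-word token at positions where the
    target is NOT the wake-word (penalizing wake-word hallucination).
    Hypothetical sketch; the paper's actual regularizer may differ."""
    ce, penalty, n_non_wake = 0.0, 0.0, 0
    for logits, y in zip(logits_seq, targets):
        probs = softmax(logits)
        ce += -math.log(probs[y])
        if y != wake_id:
            penalty += probs[wake_id]
            n_non_wake += 1
    ce /= len(targets)
    if n_non_wake:
        penalty /= n_non_wake
    return ce + lam * penalty
```

With `lam > 0`, any probability mass placed on the wake-word token outside true wake-word positions increases the loss, which is one plausible way to discourage the model from over-predicting wake-words early in training; setting `lam = 0` recovers plain cross-entropy.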