Zero-shot test-time adaptation via knowledge distillation for personalized speech denoising and dereverberation
2024
We propose a personalization framework that adapts compact models to test-time environments and improves their speech enhancement performance in noisy and reverberant conditions. The target use cases are those in which the end-user device encounters only one or a few speakers and noise types that tend to recur in a specific acoustic environment. Hence, we postulate that a small personalized model suffices to handle this focused subset of the original universal speech enhancement problem. The study addresses a major data shortage issue: although the goal is to learn from a specific user's speech signals and the test-time environment, the target clean speech is unavailable for model training due to privacy concerns and the technical difficulty of recording noise- and reverberation-free voice signals. The proposed zero-shot personalization method uses no clean speech target. Instead, it employs the knowledge distillation framework, in which the more advanced denoising results of a large teacher model serve as pseudo-targets to train a small student model. Evaluations in various test-time conditions suggest that the proposed personalization approach can significantly enhance the compact student model's test-time performance. Personalized models outperform larger non-personalized baseline models, demonstrating that personalization achieves model compression with no loss in dereverberation and denoising performance.
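The distillation loop described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the paper's actual models: a moving-average filter stands in for the large pretrained teacher, and the student is a small learnable FIR filter trained by gradient descent on the mean squared error to the teacher's pseudo-targets. The key property of the method is preserved: only noisy recordings are consumed, and no clean speech target ever appears in the training loop.

```python
import random

random.seed(0)
TAPS = 5  # number of filter taps in both the stand-in teacher and the student

def convolve_same(x, h):
    """'Same'-length convolution of signal x with an odd-length kernel h."""
    half = len(h) // 2
    out = []
    for n in range(len(x)):
        s = 0.0
        for k, hk in enumerate(h):
            idx = n + half - k
            if 0 <= idx < len(x):
                s += hk * x[idx]
        out.append(s)
    return out

def teacher_enhance(noisy):
    # Placeholder for a large pretrained enhancement model: its output
    # serves as the pseudo-target (no clean speech is involved).
    return convolve_same(noisy, [1.0 / TAPS] * TAPS)

def student_enhance(noisy, w):
    # Compact student: a learnable FIR filter with weights w.
    return convolve_same(noisy, w)

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Simulated noisy recordings from the user's test-time environment.
noisy_batch = [[random.gauss(0, 1) for _ in range(128)] for _ in range(4)]

w = [0.0] * TAPS  # student weights, trained purely against pseudo-targets
lr = 0.02
initial_loss = mse(student_enhance(noisy_batch[0], w),
                   teacher_enhance(noisy_batch[0]))

for _ in range(100):
    for noisy in noisy_batch:
        pseudo_target = teacher_enhance(noisy)  # distillation pseudo-label
        err = [p - t for p, t in
               zip(student_enhance(noisy, w), pseudo_target)]
        half = TAPS // 2
        for k in range(TAPS):
            # d(pred[n])/d(w[k]) = noisy[n + half - k], so the MSE gradient
            # per tap is the correlation of the error with the shifted input.
            g = 0.0
            for n in range(len(noisy)):
                idx = n + half - k
                if 0 <= idx < len(noisy):
                    g += err[n] * noisy[idx]
            w[k] -= lr * 2 * g / len(noisy)

final_loss = mse(student_enhance(noisy_batch[0], w),
                 teacher_enhance(noisy_batch[0]))
```

After training, the student's output closely matches the teacher's pseudo-targets on the in-environment recordings, which is the mechanism by which the paper's compact model inherits the large model's enhancement behavior for the user's specific acoustic conditions.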