Multilingual Grapheme-to-Phoneme Conversion with Byte Representation
Grapheme-to-phoneme (G2P) models convert a written word into its corresponding pronunciation and are essential components in automatic speech recognition and text-to-speech systems. Recently, neural encoder-decoder architectures have substantially improved G2P accuracy in both monolingual and multilingual settings. However, most multilingual G2P studies focus on sets of languages that share similar graphemes, such as European languages. Multilingual G2P for languages from different writing systems, e.g., European and East Asian, remains an understudied area. In this work, we propose a multilingual G2P model with a byte-level input representation to accommodate different grapheme systems, along with an attention-based Transformer architecture. We evaluate the performance of both character-level and byte-level G2P using data from multiple European and East Asian locales. Models using byte representation yield 16.2%–50.2% relative word error rate improvement over character-based counterparts for monolingual and multilingual use cases. In addition, byte-level models are 15.0%–20.1% smaller in size. Our results show that the byte is an efficient representation for multilingual G2P with languages having large grapheme vocabularies.
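The contrast between character-level and byte-level input representations can be sketched as follows. This is a minimal illustration, not the paper's implementation; the helper names are hypothetical. The key point is that a character vocabulary grows with every new script encountered, while UTF-8 byte ids never exceed 255, keeping the input embedding table small and shared across writing systems.

```python
def to_char_ids(word, vocab):
    """Character-level: one id per grapheme; the vocabulary grows per language."""
    for ch in word:
        vocab.setdefault(ch, len(vocab))
    return [vocab[ch] for ch in word]

def to_byte_ids(word):
    """Byte-level: UTF-8 bytes, fixed vocabulary of at most 256 ids."""
    return list(word.encode("utf-8"))

char_vocab = {}
for w in ["hello", "straße", "東京", "안녕"]:
    # Character ids depend on the accumulated vocabulary; byte ids do not.
    print(w, to_char_ids(w, char_vocab), to_byte_ids(w))
```

A multi-byte grapheme such as 東 expands into three byte ids (230, 157, 177), so byte-level sequences are longer, but the model's input vocabulary stays fixed regardless of how many scripts the training data covers.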