A self-learning framework for large-scale conversational AI systems
In the last decade, conversational artificial intelligence (AI) systems have been widely employed to address people’s real-life needs across diverse environments and settings. At the same time, users’ expectations of these systems have risen: users expect more contextual and personalized interactions from systems that learn continuously, akin to their expectations in human-human interactions. Modular systems constructed as pipelines of machine learning models and trained through supervised learning paradigms often struggle to improve the user experience because of the slow, less-than-ideal accuracy improvements they undergo and the privacy concerns associated with manual annotation. Inspired by how humans learn from their experiences and interactions, this article proposes a comprehensive self-learning framework designed to tackle these challenges for large-scale conversational AI systems, fostering continuous automated learning. The proposed self-learning framework comprises three elements: feedback collection, feedback interpretation, and learning mechanisms. Without annotators in the loop, a self-learning conversational AI system autonomously uses a feedback interpreter to subscribe to, interpret, and act on user feedback, adapting its behavior through various learning mechanisms. The article first describes the elements of the self-learning framework and then applies them to Alexa, a large-scale conversational AI system, presenting the framework’s effectiveness in reducing user-perceived defects. Finally, it explores the implications of self-learning for general AI systems and suggests future directions.
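The feedback-collection, feedback-interpretation, and learning-mechanism loop described above can be illustrated with a minimal sketch. All class and function names here are hypothetical and not drawn from the article; the defect signals ("rephrased", "interrupted") and the rewrite-table learning mechanism are simplified stand-ins for the richer components of the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """One user turn plus an implicit feedback signal (hypothetical schema)."""
    utterance: str
    response: str
    signal: str  # e.g. "rephrased", "interrupted", "completed"

def interpret(interaction: Interaction) -> bool:
    """Feedback interpreter: map implicit user signals to a defect label,
    with no human annotator in the loop."""
    return interaction.signal in {"rephrased", "interrupted"}

@dataclass
class SelfLearningSystem:
    """Toy learning mechanism: remember corrections for defective turns."""
    rewrites: dict = field(default_factory=dict)

    def learn(self, defective: Interaction, corrected: Interaction) -> None:
        # A defective turn followed by a successful rephrase suggests a
        # query-rewrite rule the system can apply autonomously next time.
        self.rewrites[defective.utterance] = corrected.utterance

    def rewrite(self, utterance: str) -> str:
        return self.rewrites.get(utterance, utterance)

# Feedback collection -> interpretation -> learning, end to end.
system = SelfLearningSystem()
bad = Interaction("play imagine dragon", "Playing Imagine...", "rephrased")
good = Interaction("play imagine dragons", "Playing Imagine Dragons.", "completed")
if interpret(bad) and not interpret(good):
    system.learn(bad, good)
print(system.rewrite("play imagine dragon"))  # applies the learned rewrite
```

In a production system, each stage would of course be far more involved: feedback collection spans many implicit and explicit signals, interpretation is itself a learned model, and the learning mechanisms range from online rewriting to periodic model retraining.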