When a voice assistant asks for feedback: An empirical study on customer experience with A/B testing and causal inference methods
Intelligent Voice Assistant (IVA) systems, such as Alexa, Google Assistant, and Siri, allow users to interact with them through voice commands alone. IVA systems can seek voice feedback directly from customers immediately after an interaction, simply by asking a question such as “did that answer your question?”. We refer to this IVA-elicited feedback as crowdsourced voice feedback (CVF). In this paper, we seek to understand the customer experience (CX) during interactions with an IVA that explicitly asks for feedback. We quantify the CX of providing feedback, identify its driving factors, and offer insights into improving CX based on those drivers. Through an A/B test on a leading IVA system, we collected data and found that feedback elicitation did not, in general, impair CX. To identify drivers of CX, we performed causal inference with Double Machine Learning, which disentangles multiple confounding factors and avoids the CX risks of experimenting directly on certain variables. We identified multiple CX drivers, including elicitation timing and frequency, which can be useful in establishing guardrails for a CVF system. Our results point to opportunities for CVF systems, and we suggest design specifics that such feedback-collection mechanisms can leverage.
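To illustrate the Double Machine Learning approach mentioned above, the sketch below implements the standard partialling-out estimator with two-fold cross-fitting on simulated data. This is not the paper's actual pipeline: the data, confounders, and the use of a simple least-squares nuisance learner are all illustrative assumptions; in practice the nuisance models would be flexible ML learners and the outcome would be a real CX metric.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))  # hypothetical confounders (e.g., session context)
# Hypothetical binary treatment: whether feedback was elicited, confounded by X
T = (X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n) > 0).astype(float)
theta = 0.5  # true treatment effect on a simulated CX proxy
Y = theta * T + X @ np.array([0.8, 0.3, -0.4]) + rng.normal(size=n)

def ols_predict(X_tr, y_tr, X_te):
    # Least-squares fit with intercept, standing in for a flexible ML nuisance learner
    A = np.column_stack([np.ones(len(X_tr)), X_tr])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te]) @ beta

# Two-fold cross-fitting: residualize Y and T against X on held-out halves
folds = np.array_split(rng.permutation(n), 2)
rY, rT = np.empty(n), np.empty(n)
for k in (0, 1):
    te, tr = folds[k], folds[1 - k]
    rY[te] = Y[te] - ols_predict(X[tr], Y[tr], X[te])
    rT[te] = T[te] - ols_predict(X[tr], T[tr], X[te])

# Residual-on-residual regression yields the debiased effect estimate
theta_hat = (rT @ rY) / (rT @ rT)
print(f"estimated effect: {theta_hat:.2f}")
```

Cross-fitting (fitting the nuisance models on one fold and residualizing the other) is what lets DML use flexible learners without overfitting bias leaking into the final effect estimate.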