Voice-based Reformulation of Community Answers
Community Question Answering (CQA) websites, such as Stack Exchange or Quora, allow users to freely ask questions and obtain answers from other users, i.e., the community. Personal assistants, such as Amazon Alexa or Google Home, can also exploit CQA data to answer a broader range of questions and increase customer engagement. However, voice-based interaction poses new challenges to the Question Answering scenario. Even assuming that we are able to retrieve a previously asked question that perfectly matches the user's query, we cannot simply read its answer aloud to the user. A major limitation is answer length: listening to long answers read aloud is cumbersome and tedious. Furthermore, many answers contain non-voice-friendly parts, such as images or URLs. In this paper, we define the Answer Reformulation task and propose a novel solution to automatically reformulate a community-provided answer, making it suitable for voice interaction. Results on a manually annotated dataset extracted from Stack Exchange show that our models improve over strong baselines.