Amazon product pages include sections in which customers can both ask and answer questions, and like most community-access online forums, they’ve become fertile ground for comedy. A banana slicer, for instance, which is slightly curved to fit the shape of a banana, prompts the question “What if the banana bends the other direction?”, while almost all of the roughly 100 questions about canned unicorn meat are jokes.
Providing new opportunities for creative self-expression is one of the delights of building an online community like Amazon’s. But for customers in a hurry to extract essential information about products — and for automated systems that use question-and-answer data to improve Amazon’s recommendation engine — it would be useful to be able to distinguish comic from serious questions.
In a paper presented (virtually) at this year’s SIGIR, the Association for Computing Machinery’s annual conference on information retrieval, my colleagues — Yftah Ziser and Elad Kravi — and I described a new approach to humor detection in product question answering. In experiments, we compared our system to four baselines, and it reduced the error rate of the best-performing of them by 5.4% and 18.3% on two different data sets.
Our system leverages two insights from humor theory. One is that humor is often the result of incongruity — a mismatch between two conceptions of a topic. For instance, the question “Does this make espresso?” might be reasonable when applied to a high-end coffee machine, but applied to a Swiss Army knife, it’s probably a joke.
The other insight is that humor often has a subjective tone, an indication of the speaker’s sentiment or emotional state. For instance, one comic question asked about the Amazon Echo Show was “Will this help me find the meaning of life?”, which has a more subjective tone than the question “Can it connect to music speakers?”
The model
The inputs to our system are a question extracted from an Amazon product page and the associated product title. Both the title and the question pass to an incongruity detection module, which scores the pair for incongruity, and the question alone passes to a subjectivity module, which scores it for subjectivity.
Those scores are concatenated with ordinary word embeddings — vector representations that capture semantic information about the inputs — before passing to a classifier, which makes the ultimate decision about whether the question is comic.
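To make the data flow concrete, here is a minimal PyTorch-style sketch of that architecture. The class name, dimensions, and the simple feed-forward classifier are illustrative assumptions, not the exact configuration in the paper; the incongruity and subjectivity submodules are assumed to be the pretrained scorers described below.

```python
import torch
import torch.nn as nn

class HumorClassifier(nn.Module):
    """Sketch of the pipeline: pretrained incongruity and subjectivity scorers
    produce scores that are concatenated with the question's embedding before
    a final classifier decides humorous vs. serious. Dimensions and layer
    choices are illustrative assumptions, not the paper's exact configuration."""

    def __init__(self, incongruity_module, subjectivity_module, embed_dim=768):
        super().__init__()
        self.incongruity = incongruity_module    # scores (title, question) pairs
        self.subjectivity = subjectivity_module  # scores the question alone
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim + 2, 128),  # question embedding + two auxiliary scores
            nn.ReLU(),
            nn.Linear(128, 2),              # humorous vs. serious
        )

    def forward(self, title_emb, question_emb):
        inc_score = self.incongruity(title_emb, question_emb)  # shape (batch, 1)
        subj_score = self.subjectivity(question_emb)           # shape (batch, 1)
        features = torch.cat([question_emb, inc_score, subj_score], dim=-1)
        return self.classifier(features)
```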
Before training the network as a whole, we pretrain the incongruity and subjectivity modules on automatically labeled data. For the incongruity module, we create positive (incongruous) examples by pairing product names with questions extracted from other products’ pages. Negative (congruous) examples simply pair product names with questions extracted from the associated pages, as the large majority of questions are serious in intent.
For the subjectivity module, we extract positive examples (examples that use subjective language) from product reviews and negative examples from seller-provided product descriptions, which tend to be more objective.
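Here is a rough sketch of how such weakly labeled pretraining data could be assembled, assuming a simple catalog format in which each product record carries a title and its questions; the field names and random sampling scheme are illustrative, not our exact procedure.

```python
import random

def incongruity_pretraining_pairs(products):
    """products: list of dicts with 'title' and 'questions' keys (assumed format).
    Congruous (negative) pairs keep a title with its own questions; incongruous
    (positive) pairs swap in a question taken from a different product's page."""
    examples = []
    candidates = [p for p in products if p["questions"]]
    for i, product in enumerate(candidates):
        for question in product["questions"]:
            examples.append((product["title"], question, 0))  # congruous
            other = random.choice([p for j, p in enumerate(candidates) if j != i])
            examples.append((product["title"], random.choice(other["questions"]), 1))  # incongruous
    return examples

def subjectivity_pretraining_examples(reviews, descriptions):
    """Customer reviews are treated as subjective (label 1); seller-provided
    product descriptions are treated as objective (label 0)."""
    return [(text, 1) for text in reviews] + [(text, 0) for text in descriptions]
```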
The data sets
After we’ve pretrained the incongruity and subjectivity modules on the automatically generated data, we freeze their parameters and train the network as a whole on a more carefully curated data set. To produce this data set, we created a simple interface that presented crowd workers with product names, product images, and associated questions, and we asked them to click a radio button to assess the question as humorous or not.
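In PyTorch terms, freezing the pretrained modules before end-to-end training amounts to something like the following; the `model` object and its submodule names follow the sketch above and are assumptions for illustration.

```python
import torch

# Freeze the pretrained incongruity and subjectivity modules so that only the
# remaining parameters are updated during end-to-end training on labeled data.
for module in (model.incongruity, model.subjectivity):
    for param in module.parameters():
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```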
Initially, each question was assessed by three crowd workers; if they failed to reach at least 70% agreement on the label, we solicited additional assessments, up to a maximum of seven, until that threshold was met. The high agreement among annotators (Fleiss’s kappa of 0.67 among the first three annotators and an average agreement level of 89.5% among all annotators) indicates strong consistency in people’s judgments about humorous content.
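The escalation rule can be pictured roughly as follows; the helper function and the way additional judgments are solicited are hypothetical simplifications of the crowdsourcing workflow.

```python
def aggregate_label(get_judgment, min_annotators=3, max_annotators=7, threshold=0.7):
    """Collect crowd judgments (True = humorous) until the majority label
    reaches the agreement threshold or the annotator cap is hit.
    `get_judgment` is a hypothetical callable returning one worker's verdict."""
    judgments = [get_judgment() for _ in range(min_annotators)]
    while len(judgments) < max_annotators:
        majority = max(judgments.count(True), judgments.count(False))
        if majority / len(judgments) >= threshold:
            break
        judgments.append(get_judgment())
    return judgments.count(True) > len(judgments) / 2
```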
Past research has shown that using machine learning to train humor recognition models runs the risk of domain bias. In our case, rather than learning to recognize characteristics of humorous questions, a model might just learn to recognize products — such as canned unicorn meat — that tend to provoke humorous responses.
To test for product bias, we created two data sets. One paired each positive (humorous) example with a negative (serious) example drawn at random from a different product page, and the other matched each humorous example with a serious example drawn from the same product page.
The second data set featured the same number of comic and serious examples for every product included, so the model couldn’t simply learn to recognize products that invited comic questions. We refer to this as the unbiased data set and to the other as the biased data set.
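Here is a sketch of how the two pairings might be constructed, assuming each labeled example records the product it came from; the data layout and random sampling are illustrative assumptions.

```python
import random

def build_pairs(labeled, same_product):
    """labeled: list of (product_id, question, label) tuples, label 1 = humorous.
    same_product=True pairs each humorous question with a serious one from the
    same product page (the unbiased setting); same_product=False draws the
    serious question from a different product page (the biased setting)."""
    humorous = [ex for ex in labeled if ex[2] == 1]
    serious = [ex for ex in labeled if ex[2] == 0]
    pairs = []
    for product_id, question, _ in humorous:
        if same_product:
            pool = [ex for ex in serious if ex[0] == product_id]
        else:
            pool = [ex for ex in serious if ex[0] != product_id]
        if pool:
            pairs.append(((product_id, question, 1), random.choice(pool)))
    return pairs
```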
Then we used both data sets to train the four baseline models and our model. On the unbiased data set, our model achieved an accuracy of 84.4%, corresponding to a 5.4% reduction in error rate relative to the best-performing baseline.
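Relative error reduction is computed from error rates rather than accuracies; assuming the 5.4% figure is a relative reduction, a quick back-of-the-envelope check recovers the implied error rate of the best baseline.

```python
our_error = 1 - 0.844                     # 15.6% error on the unbiased data set
baseline_error = our_error / (1 - 0.054)  # implied best-baseline error, about 16.5%
reduction = (baseline_error - our_error) / baseline_error  # recovers roughly 0.054
```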
On the biased data set, our model’s accuracy exceeded 90.8%. The improvement comes from allowing the model to recognize products that invite comic questions. Whether that improvement will generalize to other test sets, or whether it will prove more practical to focus on unbiased training data, where detection accuracy is relatively lower, is a question for further study.
Recognizing humor is a difficult AI challenge, but meeting it will ensure that the Amazon Store remains a place where customers can find useful product information quickly and have some fun while they’re at it.