Contrastive co-training for diversified recommendation
Beyond accuracy, diversity has become a crucial factor in evaluating a recommendation system, as higher diversity helps mitigate the echo chamber problem and improve user satisfaction. Recently, substantial progress has been made in improving diversity, but existing approaches often come at the cost of much lower accuracy. In this work, we propose contrastive co-training for diversified recommendation, which improves diversity greatly while achieving comparable or even better accuracy. Specifically, we maintain two user-item graph views, one for recommendation and one for contrastive learning. Pseudo edges are predicted from the current graph view to augment the other view by mining novel items that users might be highly interested in. However, naively applying co-training hurts accuracy, since the pseudo labels are sometimes noisy. We therefore propose diversified contrastive learning, which is not only robust to noisy pseudo edges but also further improves diversity by alleviating popularity and category biases through re-balancing item-level popularity and category-level advantage. Extensive experiments on three public datasets demonstrate the superiority of our proposed model in terms of both accuracy and diversity compared with strong baselines.
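To make the co-training exchange concrete, the following is a minimal toy sketch of the idea described above: two set-based graph views each predict pseudo edges that augment the other view. All names (`pseudo_edges`, `cotrain_step`) and the co-interaction scorer are illustrative assumptions, not the paper's actual model, which learns the views with graph neural encoders.

```python
from collections import defaultdict

def pseudo_edges(view, users, items, k=1):
    """Toy link predictor: score unseen (user, item) pairs by how many
    users co-interacted with both items, and return the top-k pairs as
    pseudo edges. Stands in for the learned predictor in the paper."""
    liked_by = defaultdict(set)          # item -> users who interacted with it
    for u, i in view:
        liked_by[i].add(u)
    scores = []
    for u in users:
        seen = {i for (v, i) in view if v == u}
        for i in items - seen:
            # overlap between u's own items and the candidate item i
            s = sum(len(liked_by[j] & liked_by[i]) for j in seen)
            scores.append((s, u, i))
    scores.sort(reverse=True)
    return {(u, i) for _, u, i in scores[:k]}

def cotrain_step(view_rec, view_cl, users, items, k=1):
    """One co-training round: pseudo edges predicted from each view are
    added to the *other* view, as in the cross-view augmentation above."""
    new_rec = view_rec | pseudo_edges(view_cl, users, items, k)
    new_cl = view_cl | pseudo_edges(view_rec, users, items, k)
    return new_rec, new_cl
```

In the actual method, the noise that such pseudo edges introduce is what the proposed diversified contrastive learning objective is designed to tolerate; this sketch only illustrates the structure of the exchange between the two views.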