Generating and validating contextually relevant justifications for conversational recommendation
2022

Providing a justification or explanation for a recommendation has been shown to improve users' experience with recommender systems, in particular by increasing their confidence in the recommendations. However, to be effective in a conversational setting, the justifications have to be appropriate for the conversation so far. Previous approaches rely on a user's history of reviews and ratings of related items to personalize the recommendation, but this information is generally unavailable when conversing with a new user, so a cold-start problem poses a challenge for generating suitable justifications. To address this problem, we propose and validate a new method, CONJURE (CONversational JUstifications for REcommendations), to generate contextually relevant justifications for conversational recommendations. Specifically, we investigate whether the conversation itself can be used effectively to model the user, identify relevant review content from other users, and generate a justification that boosts the user's confidence in and understanding of the recommendation. To implement CONJURE, we test several novel extensions to prior algorithms, exploiting an auxiliary corpus of movie reviews to construct the justifications from extracted pieces of those reviews. In particular, we explore different conversation representations and ranking approaches. To evaluate CONJURE, we developed a pairwise crowd task to compare justifications. Our results show large, significant improvements in Efficiency and Transparency metrics over previous non-contextualized, template-based methods. We plan to release our code and an augmented conversation corpus on GitHub.
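As a rough illustration of the extractive, conversation-driven approach described above, the sketch below treats the conversation so far as a retrieval query and ranks candidate review snippets from other users by similarity to it, keeping the top-scoring snippets as raw material for a justification. The TF-IDF representation, cosine-similarity ranking, and the rank_review_snippets helper are illustrative assumptions for this sketch, not the actual CONJURE pipeline.

```python
# Minimal sketch of a conversation-as-query ranking step. The TF-IDF features,
# cosine-similarity scoring, and names below are assumptions for illustration;
# they are not the paper's actual representations or ranking models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_review_snippets(conversation_turns, review_snippets, top_k=3):
    """Return the top_k review snippets most similar to the conversation so far."""
    # Treat the whole conversation as a single retrieval query.
    query = " ".join(conversation_turns)

    # Fit a shared vocabulary over the candidates and the query, then vectorize both.
    vectorizer = TfidfVectorizer(stop_words="english")
    vectorizer.fit(review_snippets + [query])
    candidate_vectors = vectorizer.transform(review_snippets)
    query_vector = vectorizer.transform([query])

    # Score every candidate snippet against the conversation and keep the best ones.
    scores = cosine_similarity(query_vector, candidate_vectors).ravel()
    ranked = sorted(zip(scores, review_snippets), key=lambda pair: pair[0], reverse=True)
    return [snippet for _, snippet in ranked[:top_k]]


if __name__ == "__main__":
    turns = [
        "I'm looking for a slow-burn thriller with a strong lead performance.",
        "Something atmospheric, please, not wall-to-wall action.",
    ]
    candidates = [
        "A tense, atmospheric thriller carried by a phenomenal lead performance.",
        "Non-stop explosions and car chases from start to finish.",
        "A quiet character study that rewards patient viewers.",
    ]
    for snippet in rank_review_snippets(turns, candidates, top_k=2):
        print("-", snippet)
```

The same skeleton accommodates other conversation representations, for example replacing the TF-IDF query with a dense sentence embedding of the dialogue; that kind of variation in representations and rankers is what the abstract refers to when it mentions exploring different conversation representations and ranking approaches.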