Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging. Existing models treat knowledge selection as a sentence ranking or classification problem in which each sentence is handled individually, ignoring the internal semantic connections among sentences in the background document. In this work, we propose to automatically convert background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs. Our document semantic graphs preserve sentence-level information through the use of sentence nodes and provide concept connections between sentences. We apply multi-task learning to perform sentence-level and concept-level knowledge selection jointly, and show that it improves sentence-level selection. Our experiments show that our semantic-graph-based knowledge selection improves over sentence selection baselines for both the knowledge selection task and the end-to-end response generation task on Holl-E (Moghe et al., 2018), and improves generalization on unseen topics in WoW (Dinan et al., 2019).
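As a rough illustration of the data structure the abstract describes (not the paper's implementation), the sketch below builds a toy document semantic graph with sentence nodes and concept nodes. The example sentences, the hand-picked concepts, and the `shortest_path` helper are all hypothetical; in the paper, concepts would come from automatic extraction. The key property shown is that two sentences sharing no concept directly can still be connected through intermediate sentence and concept nodes.

```python
from collections import deque

# Hypothetical toy example: each sentence node is linked to the
# concept nodes it mentions. Concepts here are hand-picked for
# illustration only.
sentence_concepts = {
    "s1": {"The Matrix", "film"},         # "The Matrix is a 1999 film."
    "s2": {"film", "Keanu Reeves"},       # "The film stars Keanu Reeves."
    "s3": {"Keanu Reeves", "John Wick"},  # "Keanu Reeves starred in John Wick."
}

# Build an undirected bipartite graph (sentence <-> concept) as an
# adjacency map.
graph: dict[str, set[str]] = {}
for sid, concepts in sentence_concepts.items():
    for c in concepts:
        graph.setdefault(sid, set()).add(c)
        graph.setdefault(c, set()).add(sid)

def shortest_path(start: str, goal: str) -> list[str]:
    """Breadth-first search over the semantic graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return []

# s1 and s3 share no concept, yet the graph connects them:
print(shortest_path("s1", "s3"))
# → ['s1', 'film', 's2', 'Keanu Reeves', 's3']
```

A sentence-ranking model scoring each sentence in isolation cannot see such paths; selection over the graph can exploit them.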
Enhanced knowledge selection for grounded dialogues via document semantic graphs
2022