In the Entailment-aware Decoder, the degree to which the summary is entailed by the source text is used as a reward, and training uses Reward Augmented Maximum Likelihood (RAML) in place of the ordinary maximum-likelihood loss (e.g., cross-entropy), encouraging the model to generate summaries that are better entailed by the source. A full explanation of RAML would take considerable space and is tangential to this article's topic, so we do not elaborate here; interested readers can consult the original paper, Reward Augmented Maximum Likelihood for Neural Structured Prediction, as well as RAML's application to text summarization 【8】.
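For intuition, here is a minimal sketch of a RAML-style loss, under stated assumptions: `log_prob_fn` is a hypothetical model interface returning the sequence log-likelihood, `reward_fn` would return the entailment score in this setting, and the toy `perturb` sampler stands in for the paper's edit-distance-based target sampling.

```python
import random
import torch
import torch.nn.functional as F

def perturb(ref_tokens, vocab_size, n_edits=1):
    """Toy target sampler: randomly replace a few reference tokens.
    (Hypothetical stand-in for RAML's edit-distance-based sampling.)"""
    out = list(ref_tokens)
    for _ in range(n_edits):
        out[random.randrange(len(out))] = random.randrange(vocab_size)
    return out

def raml_loss(log_prob_fn, src, ref, reward_fn, vocab_size,
              n_samples=5, tau=0.8):
    """RAML replaces plain MLE on the reference with a weighted NLL over
    sampled targets, weighted by the exponentiated payoff distribution
    q(y) ∝ exp(r(y) / tau)."""
    candidates = [perturb(ref, vocab_size) for _ in range(n_samples)]
    # reward_fn(src, y): e.g. how strongly candidate y is entailed by src.
    rewards = torch.tensor([float(reward_fn(src, y)) for y in candidates])
    weights = F.softmax(rewards / tau, dim=0)  # q(y | ref)
    # log_prob_fn(src, y): assumed model interface giving log p(y | src).
    nll = torch.stack([-log_prob_fn(src, y) for y in candidates])
    return (weights * nll).sum()
```

As tau approaches 0 the weights collapse onto the highest-reward candidate, recovering behavior close to ordinary MLE; a larger tau spreads the training signal over more diverse, still high-reward targets.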
Although question answering, dialogue, and reading comprehension research has in recent years paid increasing attention to how deep learning can use knowledge (e.g., ConceptNet) more effectively to improve reasoning and interpretability, common text generation tasks (text summarization, comment generation, etc.) have not yet made effective use of the knowledge graph latent in the source. I have therefore organized two knowledge-graph-to-text research directions, as follows:
2.1 Constructing a graph from long text, then using it to assist text generation
Paper: Text Generation from Knowledge Graphs with Graph Transformers (NAACL 2019)
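At a high level, the pipeline in this direction is: extract entities and relations from the source text with an information-extraction system, build a graph from them, encode the graph, and decode text conditioned on that encoding. Below is a minimal graph-to-sequence sketch as a rough illustration only: the unrestricted self-attention over all nodes and the GRU decoder are simplifying assumptions, not the paper's Graph Transformer (which restricts attention to graph neighborhoods and adds a copy mechanism).

```python
import torch
import torch.nn as nn

class GraphToSeq(nn.Module):
    """Minimal graph-to-sequence sketch: embed entity/relation nodes,
    mix them with self-attention, then decode text conditioned on the
    pooled graph encoding. Names and dimensions are illustrative."""
    def __init__(self, n_entities, n_relations, vocab_size, d=256):
        super().__init__()
        self.ent = nn.Embedding(n_entities, d)
        self.rel = nn.Embedding(n_relations, d)
        # Simplification: attention over *all* nodes; the paper's Graph
        # Transformer attends only within graph neighborhoods.
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.dec = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, vocab_size)

    def forward(self, ent_ids, rel_ids, tgt_emb):
        # ent_ids: (B, E), rel_ids: (B, R), tgt_emb: (B, T, d)
        nodes = torch.cat([self.ent(ent_ids), self.rel(rel_ids)], dim=1)
        nodes, _ = self.attn(nodes, nodes, nodes)
        # Pool the graph encoding into the decoder's initial state.
        h0 = nodes.mean(dim=1).unsqueeze(0).contiguous()  # (1, B, d)
        dec_out, _ = self.dec(tgt_emb, h0)
        return self.out(dec_out)  # (B, T, vocab_size) token logits
```

The entity and relation IDs would come from an upstream extraction step, which is omitted here; in the paper, the graphs are built by running a scientific IE system over abstracts.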
Related posts in the author's (哟林小平) text-generation series on zhuanlan.zhihu.com:
- Text Generation 7: Long and Diverse Text Generation with PHVM
- Text Generation 4: Syntax-Infused VAE for Text Generation
- Text Generation 1: Towards Generating Long and Coherent Text
【2】Shao Z, Huang M, Wen J, et al. Long and Diverse Text Generation with Planning-based Hierarchical Variational Model[J]. arXiv preprint arXiv:1908.06605, 2019.
【3】Shen D, Celikyilmaz A, Zhang Y, et al. Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models[J]. arXiv preprint arXiv:1902.00154, 2019.
【4】Zhang X, Yang Y, Yuan S, et al. Syntax-Infused Variational Autoencoder for Text Generation[J]. arXiv preprint arXiv:1906.02181, 2019.
【5】Yang P, Li L, Luo F, et al. Enhancing Topic-to-Essay Generation with External Commonsense Knowledge[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019: 2002-2012.
【6】Li W, Xu J, He Y, et al. Coherent Comment Generation for Chinese Articles with a Graph-to-Sequence Model[J]. arXiv preprint arXiv:1906.01231, 2019.
【7】Zhu C, Hinthorn W, Xu R, et al. Boosting Factual Correctness of Abstractive Summarization with Knowledge Graph[J]. arXiv preprint arXiv:2003.08612, 2020.
【8】Li H, Zhu J, Zhang J, et al. Ensure the correctness of the summary: Incorporate entailment knowledge into abstractive sentence summarization[C]//Proceedings of the 27th International Conference on Computational Linguistics. 2018: 1430-1441.