Generation-based text summarization is a hard task, and recent deep learning attempts show that sequence-to-sequence models hold promise. In this paper, we approach the problem with a similar class of models in a relatively small-data setting and attempt to alleviate the resulting challenges with imitation learning.