Publisher: Electronics and Telecommunications Research Institute
Abstract: Abstractive text summarization is the process of producing a summary of a given text by paraphrasing its facts while keeping the meaning intact. Manual summary generation is laborious and time-consuming. We present a summary generation model based on a multilayered attentional peephole convolutional long short-term memory (MAPCoL) network that extracts abstractive summaries of large texts in an automated manner. We add an attention mechanism to a peephole convolutional LSTM to improve the overall quality of a summary by assigning weights to important parts of the source text during training. We evaluated the semantic coherence of the MAPCoL model on the popular CNN/Daily Mail dataset and found that MAPCoL outperformed other traditional LSTM-based models. We also observed performance improvements of MAPCoL under different internal settings when compared with state-of-the-art models of abstractive text summarization.
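The abstract describes two ingredients: a peephole convolutional LSTM and an attention mechanism that weights important parts of the source text. The sketch below is not the authors' MAPCoL implementation; it is a minimal illustration, assuming PyTorch, a 1-D convolution over the embedded source, an arbitrary kernel size, and Bahdanau-style additive attention, all of which are assumptions rather than details taken from the paper.

```python
# Minimal sketch of the two building blocks named in the abstract:
# a peephole convolutional LSTM cell and an attention layer over encoder states.
# Layer sizes, the Conv1d kernel, and the additive attention form are illustrative
# assumptions, not the authors' released MAPCoL configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PeepholeConvLSTMCell(nn.Module):
    """One step of a 1-D convolutional LSTM with peephole connections."""

    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        pad = kernel // 2
        # One convolution produces all four gate pre-activations at once.
        self.gates = nn.Conv1d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=pad)
        # Peephole weights let the gates "see" the cell state (element-wise).
        self.w_ci = nn.Parameter(torch.zeros(1, hid_ch, 1))
        self.w_cf = nn.Parameter(torch.zeros(1, hid_ch, 1))
        self.w_co = nn.Parameter(torch.zeros(1, hid_ch, 1))

    def forward(self, x, h, c):
        # x: (B, in_ch, L); h, c: (B, hid_ch, L)
        gi, gf, gg, go = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i = torch.sigmoid(gi + self.w_ci * c)      # input gate (peephole on c_{t-1})
        f = torch.sigmoid(gf + self.w_cf * c)      # forget gate (peephole on c_{t-1})
        c_new = f * c + i * torch.tanh(gg)         # cell state update
        o = torch.sigmoid(go + self.w_co * c_new)  # output gate (peephole on c_t)
        h_new = o * torch.tanh(c_new)
        return h_new, c_new


class AdditiveAttention(nn.Module):
    """Weights encoder states by their relevance to the current decoder state."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (B, dim); enc_states: (B, T, dim)
        query = dec_state.unsqueeze(1).expand(-1, enc_states.size(1), -1)
        energy = self.score(torch.tanh(self.proj(torch.cat([query, enc_states], -1))))
        weights = F.softmax(energy.squeeze(-1), dim=-1)        # (B, T) attention weights
        context = torch.bmm(weights.unsqueeze(1), enc_states)  # (B, 1, dim) context vector
        return context.squeeze(1), weights


if __name__ == "__main__":
    B, L, E, H = 2, 20, 64, 128
    cell = PeepholeConvLSTMCell(E, H)
    attn = AdditiveAttention(H)
    x = torch.randn(B, E, L)                  # embedded source tokens as channels
    h = c = torch.zeros(B, H, L)
    h, c = cell(x, h, c)                      # one recurrent step over the source
    enc_states = h.transpose(1, 2)            # (B, L, H) for the attention layer
    ctx, w = attn(h.mean(dim=2), enc_states)  # attend using a pooled state as the query
    print(ctx.shape, w.shape)                 # torch.Size([2, 128]) torch.Size([2, 20])
```

In a full summarizer, several such cells would be stacked to form the multilayered encoder, and the attention context would feed a decoder that emits the summary tokens; those wiring details are beyond what the abstract specifies.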