A Hierarchical Neural Autoencoder for Paragraphs and Documents

Jiwei Li, Thang Luong, Dan Jurafsky


Abstract

Natural language generation of coherent long texts such as paragraphs or longer documents is a challenging problem for recurrent network models. In this paper, we explore an important step toward this generation task: training an LSTM (Long Short-Term Memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, and then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraphs using standard metrics such as ROUGE and Entity Grid, showing that LSTM models are able to encode texts in a way that preserves syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language processing areas like generation and summarization\footnote{Code for the three models described in this paper can be found at \url{www.stanford.edu/~jiweil/}.}.
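
Below is a minimal sketch, in PyTorch, of the hierarchical encode-then-decode idea described above: a word-level LSTM maps each sentence to a vector, a sentence-level LSTM composes those vectors into a single paragraph embedding, and mirrored sentence- and word-level LSTMs decode that embedding back into word sequences. This is not the authors' released implementation; the class name, hyperparameters (vocab_size, embed_dim, hidden_dim), and the teacher-forced decoding loop are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class HierarchicalAutoencoder(nn.Module):
    """Hierarchical LSTM autoencoder sketch (illustrative, not the paper's code)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoders: words -> sentence vectors -> paragraph embedding.
        self.word_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.sent_encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Decoders mirror the encoders.
        self.sent_decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.word_decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, paragraph):
        # paragraph: LongTensor of shape (num_sents, sent_len).
        _, (h_word, _) = self.word_encoder(self.embed(paragraph))
        sent_vecs = h_word[-1]                          # (num_sents, hidden_dim)
        _, (h_sent, _) = self.sent_encoder(sent_vecs.unsqueeze(0))
        return h_sent[-1]                               # (1, hidden_dim)

    def decode(self, para_vec, paragraph):
        num_sents, _ = paragraph.shape
        # Sentence-level decoding: predict one hidden vector per target
        # sentence, initialized from (and fed) the paragraph embedding.
        state = (para_vec.unsqueeze(0), torch.zeros_like(para_vec).unsqueeze(0))
        dec_sent_vecs, _ = self.sent_decoder(
            para_vec.unsqueeze(0).expand(1, num_sents, -1), state)
        # Word-level decoding: regenerate each sentence conditioned on its
        # decoded sentence vector (teacher-forced on the gold words).
        logits = []
        for i in range(num_sents):
            h0 = dec_sent_vecs[0, i].view(1, 1, -1)
            c0 = torch.zeros_like(h0)
            words, _ = self.word_decoder(self.embed(paragraph[i:i + 1]), (h0, c0))
            logits.append(self.out(words))
        return torch.cat(logits, dim=0)    # (num_sents, sent_len, vocab_size)

    def forward(self, paragraph):
        return self.decode(self.encode(paragraph), paragraph)

# Usage: reconstruct a toy paragraph of 3 sentences, 12 tokens each.
model = HierarchicalAutoencoder(vocab_size=10000)
paragraph = torch.randint(0, 10000, (3, 12))
logits = model(paragraph)
loss = nn.functional.cross_entropy(logits.reshape(-1, 10000),
                                   paragraph.reshape(-1))
\end{verbatim}

Training minimizes token-level cross-entropy between the decoded words and the input paragraph, the standard autoencoding objective.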