Graph Neural Networks for Natural Language Processing: A Survey
A new taxonomy of GNNs for NLP is proposed, which systematically organizes existing research on GNNs for NLP along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models.
Abstract
<jats:p>Deep learning has become the dominant approach in addressing various tasks in Natural Language Processing (NLP). Although text inputs are typically represented as a sequence of tokens, there is a rich variety of NLP problems that can be best expressed with a graph structure. As a result, there is a surge of interest in developing new deep learning techniques on graphs for a large number of NLP tasks. In this survey, we present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing. We propose a new taxonomy of GNNs for NLP, which systematically organizes existing research on GNNs for NLP along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models.</jats:p>