Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks
Abstract
Network data can be conveniently modeled as a graph signal, where data values are assigned to nodes of a graph that describes the underlying network topology. Successful learning from network data is built upon methods that effectively exploit this graph structure. In this work, we leverage graph signal processing to characterize the representation space of graph neural networks (GNNs). We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology. These two properties offer insight about the workings of GNNs and help explain their scalability and transferability properties which, coupled with their local and distributed nature, make GNNs powerful tools for learning in physical networks. We also introduce GNN extensions using edge-varying and autoregressive moving average graph filters and discuss their properties. Finally, we study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
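As a concrete illustration (a minimal sketch, not code from the paper), a graph convolutional filter applies a polynomial of a graph shift operator S (e.g., the adjacency matrix) to a graph signal x, namely y = sum_k h_k S^k x. The snippet below, with assumed toy values for the graph, signal, and filter taps h, also checks numerically the permutation equivariance property the abstract refers to: relabeling the nodes before filtering gives the same result as relabeling the filtered output.

```python
import numpy as np

def graph_filter(S, x, h):
    """Apply the polynomial graph filter y = sum_k h[k] * S^k @ x."""
    y = np.zeros_like(x, dtype=float)
    Sk_x = x.astype(float)                 # S^0 x
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x                    # advance to S^{k+1} x
    return y

rng = np.random.default_rng(0)
n = 6
S = rng.integers(0, 2, size=(n, n))        # toy adjacency as shift operator
S = np.triu(S, 1); S = S + S.T             # symmetric, no self-loops
x = rng.standard_normal(n)                 # graph signal (one value per node)
h = [1.0, 0.5, 0.25]                       # filter taps (assumed values)

P = np.eye(n)[rng.permutation(n)]          # random permutation (node relabeling)
lhs = graph_filter(P @ S @ P.T, P @ x, h)  # filter on the relabeled graph
rhs = P @ graph_filter(S, x, h)            # relabel the filtered output
print(np.allclose(lhs, rhs))               # True: permutation equivariance
```

The equality holds because (P S P.T)^k = P S^k P.T for any permutation matrix P, so the filter commutes with node relabelings; GNN layers built by composing such filters with pointwise nonlinearities inherit this property.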