
Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction

244 Citations • 2021
Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux

The Neural Bellman-Ford Network (NBFNet) is proposed, a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm and outperforms existing methods by a large margin in both transductive and inductive settings.

Abstract

Link prediction is a fundamental task on graphs. Inspired by traditional path-based methods, in this paper we propose a general and flexible representation learning framework based on paths for link prediction. Specifically, we define the representation of a pair of nodes as the generalized sum of all path representations, with each path representation defined as the generalized product of the edge representations in the path. Motivated by the Bellman-Ford algorithm for solving the shortest path problem, we show that the proposed path formulation can be efficiently solved by the generalized Bellman-Ford algorithm. To further improve the capacity of the path formulation, we propose the Neural Bellman-Ford Network (NBFNet), a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm. NBFNet parameterizes the generalized Bellman-Ford algorithm with three neural components, namely the INDICATOR, MESSAGE and AGGREGATE functions, which correspond to the boundary condition, multiplication operator, and summation operator, respectively. NBFNet is very general: it covers many traditional path-based methods and can be applied to both homogeneous graphs and multi-relational graphs (e.g., knowledge graphs) in both transductive and inductive settings. Experiments on both homogeneous graphs and knowledge graphs show that NBFNet outperforms existing methods by a large margin in both transductive and inductive settings, achieving new state-of-the-art results.
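
To make the abstract's formulation concrete, here is a restatement in symbols (the notation is chosen here as a reading aid, not quoted from the page): the representation of a node pair (u, v) under query relation q is a generalized sum over all paths from u to v of generalized products of edge representations, and the generalized Bellman-Ford recurrence computes it iteratively from the boundary condition.

```latex
\mathbf{h}_q(u, v) \;=\; \bigoplus_{P \in \mathcal{P}_{uv}} \; \bigotimes_{i=1}^{|P|} \mathbf{w}_q(e_i)

\mathbf{h}^{(0)}_q(u, v) \;=\; \mathrm{INDICATOR}(u, v, q)

\mathbf{h}^{(t)}_q(u, v) \;=\; \mathrm{AGGREGATE}\Big(
  \big\{ \mathrm{MESSAGE}\big(\mathbf{h}^{(t-1)}_q(u, x),\, \mathbf{w}_q(x, r, v)\big)
         : (x, r, v) \in \mathcal{E}(v) \big\}
  \cup \big\{ \mathbf{h}^{(0)}_q(u, v) \big\} \Big)
```

Below is a minimal sketch of that iteration in plain PyTorch. It is not the authors' implementation: the toy graph, the one-vector indicator, the element-wise-product MESSAGE, and the sum AGGREGATE are illustrative assumptions, whereas NBFNet learns these operators and conditions the edge representations on the query relation.

```python
# Minimal sketch of the generalized Bellman-Ford iteration that NBFNet
# parameterizes. The specific INDICATOR/MESSAGE/AGGREGATE choices here are
# hand-picked for illustration; NBFNet replaces them with learned operators.

import torch

def indicator(num_nodes, source, dim):
    """Boundary condition h^{(0)}(u, v): initialize a representation for
    every node v given the query source node u (a simple one-vector here)."""
    h = torch.zeros(num_nodes, dim)
    h[source] = 1.0
    return h

def message(h_x, w_edge):
    """Generalized multiplication: combine the representation of the
    intermediate node x with the edge representation w(x, r, v).
    Element-wise product is an illustrative choice."""
    return h_x * w_edge

def aggregate(msgs):
    """Generalized summation over incoming messages (sum here; other
    aggregators are possible)."""
    return torch.stack(msgs).sum(dim=0)

def nbf_iteration(edges, edge_repr, h_prev, h_boundary):
    """One generalized Bellman-Ford step:
    h^{(t)}(u, v) = AGGREGATE of MESSAGE(h^{(t-1)}(u, x), w(x, r, v))
    over incoming edges of v, together with the boundary condition."""
    num_nodes, _ = h_prev.shape
    h_new = torch.zeros_like(h_prev)
    for v in range(num_nodes):
        msgs = [h_boundary[v]]  # keep the boundary condition in the aggregation
        for (x, tail), w in zip(edges, edge_repr):
            if tail == v:
                msgs.append(message(h_prev[x], w))
        h_new[v] = aggregate(msgs)
    return h_new

# Toy usage on a 4-node graph. The edge representations are random constants
# here; in NBFNet they are learned and conditioned on the query relation.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
dim = 8
edge_repr = [torch.rand(dim) for _ in edges]

h0 = indicator(num_nodes=4, source=0, dim=dim)
h = h0
for _ in range(3):  # a few propagation steps
    h = nbf_iteration(edges, edge_repr, h, h0)

print(h[3])  # pair representation for (source=0, target=3)
```

The design point this sketch highlights is that propagation starts from a query-dependent boundary condition at the source node, so the resulting representations are pair-wise (source, target) rather than per-node; the final pair representation would then be passed to a link predictor and trained end-to-end.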