Graph neural network
Class of artificial neural networks / From Wikipedia, the free encyclopedia
A graph neural network (GNN) belongs to a class of artificial neural networks for processing data that can be represented as graphs.[1][2][3][4][5]
![Building blocks of a graph neural network](http://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/GNN_building_blocks.png/640px-GNN_building_blocks.png)
In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.[6] A convolutional neural network layer, in the context of computer vision, can be seen as a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be seen as a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text.
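The pixel-grid interpretation above can be made concrete by writing out the graph a convolutional layer operates on. The sketch below builds the edge list of a 4-connected pixel grid, where each node is a pixel and edges link horizontally and vertically adjacent pixels; the helper name `grid_edges` is illustrative, not from any particular library.

```python
def grid_edges(height, width):
    """Edges of a 4-connected pixel-grid graph, as in the GNN view of a
    convolutional layer: nodes are pixels (indexed row-major), and each
    pixel is connected to its horizontal and vertical neighbors."""
    edges = []
    for r in range(height):
        for c in range(width):
            v = r * width + c
            if c + 1 < width:       # edge to the right neighbor
                edges.append((v, v + 1))
            if r + 1 < height:      # edge to the neighbor below
                edges.append((v, v + width))
    return edges

print(grid_edges(2, 2))  # [(0, 1), (0, 2), (1, 3), (2, 3)]
```

Under this view, a 3x3 convolution corresponds to each pixel aggregating information from its grid neighbors, while a transformer layer corresponds to the same construction on a complete graph, where every token is connected to every other token.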
The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Since their inception, several different GNN architectures have been proposed,[2][3][7][8][9] implementing different flavors of message passing,[6][10] beginning with recursive[2] and convolutional constructive[3] approaches. As of 2022, whether it is possible to define GNN architectures "going beyond" message passing, or whether every GNN can be built on message passing over suitably defined graphs, is an open research question.[11]
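A single round of message passing can be sketched in a few lines of NumPy. The update rule below (mean aggregation over neighbors, separate weights for self and neighbor features, ReLU nonlinearity) is one common flavor, chosen for illustration; the function and weight names are assumptions, not a specific published architecture.

```python
import numpy as np

def message_passing_layer(adj, h, w_self, w_nbr):
    """One round of pairwise message passing (illustrative sketch).

    Each node aggregates its neighbors' feature vectors (here by mean),
    linearly transforms its own and the aggregated features, and applies
    a ReLU nonlinearity.
    adj:    (n, n) binary adjacency matrix
    h:      (n, d_in) node feature matrix
    w_self, w_nbr: (d_in, d_out) weight matrices
    """
    deg = adj.sum(axis=1, keepdims=True)               # node degrees
    agg = (adj @ h) / np.maximum(deg, 1)               # mean over neighbors
    return np.maximum(h @ w_self + agg @ w_nbr, 0.0)   # ReLU update

# A 4-node path graph: 0-1-2-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))          # random initial node features
w_self = rng.normal(size=(3, 2))
w_nbr = rng.normal(size=(3, 2))
h1 = message_passing_layer(adj, h, w_self, w_nbr)
print(h1.shape)  # (4, 2)
```

Stacking k such layers lets each node's representation depend on its k-hop neighborhood, which is how iterative message passing propagates information across the graph.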
Relevant application domains for GNNs include natural language processing,[12] social networks,[13] citation networks,[14] molecular biology,[15] chemistry,[16][17] physics[18] and NP-hard combinatorial optimization problems.[19]
Several open source libraries implementing graph neural networks are available, such as PyTorch Geometric[20] (PyTorch), TensorFlow GNN[21] (TensorFlow), jraph[22] (Google JAX), and GraphNeuralNetworks.jl[23]/GeometricFlux.jl[24] (Julia, Flux).