Graph Structure of Neural Networks

Neural graph architecture and function

This paper proposes a method to represent a neural network architecture as a graph (a “relational graph” – probably not the best name…) and argues that this representation reveals relationships between neural architecture and performance.

The relational graph is constructed by viewing each layer through the lens of a graph neural network. Neuron i in one layer and neuron i in the next layer are treated as the same node, so applying one layer corresponds to one round of message exchange along the graph’s edges.
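A minimal sketch of this view (my own illustration, not the authors’ code): in the fixed-width case, a relational graph simply masks the weight matrix of an MLP layer, so neuron j can send a message to neuron i only if (i, j) is an edge (self-loops included).

```python
# Sketch: one graph-masked MLP layer (bias and activation omitted for brevity).
# weights[i][j] is used only when i == j or {i, j} is an edge of the graph,
# so one layer of the MLP equals one round of message exchange on the graph.

def masked_layer(x, weights, edges):
    """Apply one relational-graph-masked linear layer to vector x."""
    n = len(x)
    out = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j or (i, j) in edges or (j, i) in edges:
                out[i] += weights[i][j] * x[j]
    return out

# A 4-node path graph 0-1-2-3: node 0 never hears directly from node 3.
edges = {(0, 1), (1, 2), (2, 3)}
w = [[1.0] * 4 for _ in range(4)]
x = [1.0, 0.0, 0.0, 1.0]
print(masked_layer(x, w, edges))  # → [1.0, 1.0, 1.0, 1.0]
```

With all-ones weights, node 0 only accumulates its own value plus node 1’s, while node 3’s signal needs further rounds to reach it.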

What about layers whose width differs from the number of nodes? Multiple neurons (channels) are concatenated into a single node. For instance, with 4 nodes, a layer of width 5 gives node dimensions {2, 1, 1, 1} and a layer of width 9 gives {3, 2, 2, 2}.

The number of nodes in the relational graph is limited by the “width of the narrowest layer” in the neural network.

The paper then systematically generates a wide variety of networks (multilayer perceptrons) from relational graphs sampled from a large space of possible configurations and measures their performance. They show that the clustering coefficient and the average path length affect performance. (What about other characteristics? Do they vary every relevant characteristic?)
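For reference, the two statistics being swept can be computed with the standard library alone (a sketch, not the paper’s code), with the graph given as adjacency sets:

```python
# Sketch: average clustering coefficient C and average shortest path
# length L for a small undirected graph stored as {node: set_of_neighbors}.
from collections import deque
from itertools import combinations

def clustering(adj):
    """Average over nodes of (edges among neighbors) / (possible neighbor pairs)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbors contribute 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (k * (k - 1) / 2)
    return total / len(adj)

def avg_path_length(adj):
    """Mean BFS distance over all ordered node pairs (assumes a connected graph)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Triangle 0-1-2 plus a pendant node 3 attached to node 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering(adj))       # → 7/12 ≈ 0.583
print(avg_path_length(adj))  # → 4/3 ≈ 1.333
```

The paper’s finding is that performance varies smoothly with these two quantities, with a “sweet spot” in (C, L) space rather than a monotonic trend.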