# Graph embedding

Continuous embedding of graphs; Network geometry is a related field. A brief review in Korean: 20210703 연결망임베딩 ("network embedding").

For an overview, check out the Graph Representation Learning Book by William L. Hamilton, as well as reviews such as Graph Embedding Techniques, Applications, and Performance: A Survey.

Graph embedding aims to represent (or compress) the original graph with a more compact, low-rank representation that captures its most salient information. One way to assess how well an embedding captures this information is the reconstruction problem: can we reconstruct the original network from the compressed representation? Seshadhri2020impossibility argued that this is impossible for triangle-rich networks, whereas Chanpuriya2020node argued that this conclusion holds only for a narrow set of models.
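A minimal sketch of the reconstruction framing, assuming a hand-written toy adjacency matrix and a plain truncated-SVD embedding (not the specific methods discussed in the papers above):

```python
import numpy as np

# Toy undirected graph on 4 nodes (hypothetical adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# A rank-k truncated SVD gives a compact, low-rank node representation.
k = 2
U, s, Vt = np.linalg.svd(A)
Z = U[:, :k] * np.sqrt(s[:k])          # k-dimensional node embeddings

# Reconstruction problem: how much of A can we recover from rank k?
A_hat = (U[:, :k] * s[:k]) @ Vt[:k]
error = np.linalg.norm(A - A_hat)      # Frobenius reconstruction error
```

The smaller the reconstruction error at a given rank, the more of the graph's structure the compressed representation retains.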

A common technique uses (random) walks on graphs. Each walk can be considered a “sentence” with nodes as “words”, and these walks can be fed directly into word embedding methods like word2vec. The walks can be adjusted in many ways, for instance by introducing a bias against already visited nodes (e.g., node2vec). The other paradigm is the Graph neural network. Similar to Message passing, it iteratively aggregates information around each node to obtain that node’s representation.
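A minimal sketch of the walk-based paradigm, assuming a hypothetical toy graph stored as an adjacency list; the resulting “sentences” would then be passed to a word-embedding implementation such as word2vec (not shown here):

```python
import random

# Hypothetical toy graph as an adjacency list.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walk(graph, start, length, rng):
    """Unbiased random walk: each walk is a 'sentence' of node 'words'."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(42)
# A small corpus of walks starting from every node.
sentences = [random_walk(graph, node, 5, rng)
             for node in graph for _ in range(3)]
# `sentences` can now be fed to a word2vec-style model.
```

node2vec-style biases would replace the uniform `rng.choice` with transition probabilities that depend on the previously visited node.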

How about using other dynamics on networks? There have been several attempts. For instance, epidemiological processes from Network epidemiology have been used in Zhan2020SI and Zhan2020susceptible. At the same time, we can also exploit multilayer or Temporal network structure; Yu2020identifying, Torricelli2020weg2vec, and Sato2019DyANE are examples.
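To illustrate the dynamics-based idea, here is a sketch loosely inspired by the SI-process approach; the toy graph, the infection probability, and the use of infection order as a “sentence” are all assumptions for illustration, not the cited papers’ actual pipelines:

```python
import random

# Hypothetical toy graph as an adjacency list.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def si_cascade(graph, seed, beta, rng):
    """Discrete-time Susceptible-Infected process.

    Returns nodes in order of infection; this ordered sequence can
    play the role of a 'sentence' for a word-embedding model."""
    order = [seed]
    infected = {seed}
    frontier = [seed]
    while frontier:
        new = []
        for node in frontier:
            for nb in graph[node]:
                if nb not in infected and rng.random() < beta:
                    infected.add(nb)
                    order.append(nb)
                    new.append(nb)
        frontier = new
    return order

rng = random.Random(0)
sentences = [si_cascade(graph, node, 1.0, rng) for node in graph]
```

With `beta = 1.0` every reachable node is infected, so each cascade covers the whole (connected) toy graph; smaller `beta` yields shorter, noisier sequences.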

One can also think about obtaining representations of objects other than nodes. For instance, can we obtain an embedding of edges, or an embedding of relationships?

Can a graph embedding capture hierarchical structure? One idea is to coarse-grain a network into increasingly smaller, higher-level networks and then use the hierarchical relationships to learn fine-grained embeddings (Chen2017HARP, Hu2019HGCN).
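A rough sketch of one coarse-graining step in this spirit; the greedy pairing heuristic and data structures below are simplified assumptions for illustration, not the papers’ actual algorithms:

```python
def coarsen(graph):
    """One level of coarse-graining: greedily collapse unmatched
    neighbor pairs into supernodes (a simplified HARP-style step)."""
    merged = {}  # node -> supernode id
    for node in graph:
        if node in merged:
            continue
        # Pair the node with its first still-unmatched neighbor.
        partner = next((nb for nb in graph[node] if nb not in merged), None)
        super_id = (node, partner)
        merged[node] = super_id
        if partner is not None:
            merged[partner] = super_id

    # Build the coarser graph over supernodes (dropping self-loops).
    coarse = {}
    for node, nbrs in graph.items():
        s = merged[node]
        coarse.setdefault(s, set())
        for nb in nbrs:
            t = merged[nb]
            if t != s:
                coarse[s].add(t)
    return coarse, merged

# Hypothetical toy graph as an adjacency list.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}
coarse, mapping = coarsen(graph)
```

Repeating this step yields a hierarchy of ever-smaller graphs; an embedding learned at a coarse level can then initialize the embedding one level finer.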