My personal favorite from ICLR 2020. The paper shows under which conditions a GNN can compute any function, and that the product of a GNN's depth and width should be on the order of n in order to compute popular graph statistics (e.g. diameter, vertex cover, coloring).
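Restating that requirement as a formula (an informal paraphrase of the line above, not the paper's exact theorem statement):

```latex
% informal reading: for a message-passing GNN of depth d and width w,
% computing properties such as diameter or vertex cover requires roughly
$$ d \cdot w = \Omega(n) \quad \text{(up to constant and log factors),}$$
% where n is the number of nodes in the input graph.
```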
The paper essentially shows that graphs are not necessary for graph classification. If you represent a graph as just a set of nodes, without any information about their adjacency, and train an MLP model, you can get SOTA results. An important lesson for when we judge the quality of an idea/paper based on empirical results. A minimal sketch of such a "bag of nodes" baseline is below.
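This is not the paper's exact model, just a toy sketch of the idea: a shared MLP over node features, a permutation-invariant pooling, and a classifier head; the adjacency matrix never appears. The class name and dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class NodeSetMLP(nn.Module):
    """Graph classifier that ignores adjacency entirely: each node's
    features go through a shared MLP, the results are pooled, and a
    linear head predicts the graph label."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.node_mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.readout = nn.Linear(hidden_dim, num_classes)

    def forward(self, node_features):
        # node_features: (num_nodes, in_dim) -- no edge index anywhere
        h = self.node_mlp(node_features)
        graph_repr = h.sum(dim=0)  # permutation-invariant pooling over nodes
        return self.readout(graph_repr)

# toy usage: a "graph" is represented only by its node-feature matrix
x = torch.randn(12, 7)  # 12 nodes, 7 features each
model = NodeSetMLP(in_dim=7, hidden_dim=64, num_classes=2)
logits = model(x)
```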
Shows a significant boost on IQ-like tests (originally introduced by DeepMind: https://deepmind.com/blog/article/measuring-abstract-reasoning) when graphs are used to represent the diagrams.
I had very high expectations for this paper, as I also had some convincing results showing that you don't need graphs for many datasets. Essentially, the authors decompose the embedding of a node's neighborhood into signal and noise and show that, with a lot of noise, you don't need any topology. They also propose an aggregation function that decides how to filter the noise coming from the neighbors. What's interesting is that they argue mean aggregation is better than sum, while for GIN the sum aggregation is proved to be more expressive than the mean. The toy experiment below illustrates the intuition.
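A minimal sketch of that intuition under the signal-plus-noise assumption (this is not the authors' aggregation function, just a toy numpy experiment): if every neighbor carries the same signal plus independent noise, mean aggregation averages the noise away as the neighborhood grows, while sum aggregation accumulates it.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(neighbor_feats, how="mean"):
    # neighbor_feats: (num_neighbors, dim) -- messages from a node's neighbors
    return neighbor_feats.mean(axis=0) if how == "mean" else neighbor_feats.sum(axis=0)

signal = np.ones(16)                              # the "useful" part shared by all neighbors
for k in [5, 50, 500]:                            # growing neighborhood size
    noise = rng.normal(scale=2.0, size=(k, 16))   # independent per-neighbor noise
    msgs = signal + noise
    err_mean = np.abs(aggregate(msgs, "mean") - signal).mean()      # mean should recover the signal
    err_sum = np.abs(aggregate(msgs, "sum") - k * signal).mean()    # sum should recover k * signal
    print(f"k={k:4d}  mean-agg error={err_mean:.3f}  sum-agg error={err_sum:.3f}")
```

The mean-aggregation error shrinks roughly as 1/sqrt(k) while the sum-aggregation error grows roughly as sqrt(k), which is why, under this noise model, mean looks preferable even though GIN-style arguments favor sum for distinguishing multisets.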
The authors are very confident in their claims, but the conclusions are quite important (if correct). They show that you can use structure-based embeddings instead of distance-based embeddings and vice versa, as the two are equivalent. Structure-based embeddings are typically used for node classification and distance-based embeddings for link prediction, but apparently they are not that different.
