If We Draw Graphs Like This, We Can Change Computers Forever
The title is catchy, but the article is "only" about an improvement for the dynamic planarity testing problem. Planarity testing is a well-studied problem of deciding whether a graph can be drawn without crossing edges, and O(n) algorithms are known. This article, on the other hand, studies the setting where edges may be added and removed, and the question is how to redraw the graph so that it remains planar. The results were published at STOC'20.
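The static check is routine by comparison; a quick illustration with networkx, whose built-in check implements a linear-time planarity test (the dynamic, fully-updatable version is the hard part the article is about):

```python
# Static planarity testing with networkx: check_planarity returns a
# boolean and, when the graph is planar, a combinatorial embedding.
import networkx as nx

K4 = nx.complete_graph(4)  # planar: drawable without crossings
K5 = nx.complete_graph(5)  # the smallest non-planar graph

is_planar, embedding = nx.check_planarity(K4)
print(is_planar)                  # True
print(nx.check_planarity(K5)[0])  # False
```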
A Complete Beginner's Guide to G-Invariant Neural Networks
A tutorial by S. Chandra Mouli and Bruno Ribeiro on G-invariant neural networks, eigenvectors, invariant subspaces, transformation groups, the Reynolds operator, and more. More tutorials on invariance and linear algebra should appear there soon.
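As a taste of one of the tutorial's central tools, here is a toy sketch (my own illustration, not code from the tutorial) of the Reynolds operator for the permutation group: averaging a function over all group elements yields an invariant function, though enumerating S_n like this is only feasible for tiny n.

```python
# Toy Reynolds operator for S_n: average f over all row permutations
# of x, which makes the result permutation-invariant by construction.
import itertools
import torch

def reynolds(f, x):
    n = x.size(0)
    outs = [f(x[list(p)]) for p in itertools.permutations(range(n))]
    return torch.stack(outs).mean(dim=0)

# f itself is not invariant, but its Reynolds average is:
f = lambda x: x.flatten() @ torch.arange(x.numel(), dtype=x.dtype)
x = torch.randn(3, 2)
print(reynolds(f, x), reynolds(f, x[[1, 0, 2]]))  # identical values
```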
Graph Transformer: A Generalization of Transformers to Graphs
A blog post by Vijay Prakash Dwivedi discussing his paper A Generalization of Transformer Networks to Graphs, written with Xavier Bresson and presented at the 2021 AAAI Workshop on Deep Learning on Graphs (DLG-AAAI'21). It looks like a generalization of the GAT network with batch norm and positional encodings, though it still aggregates over local neighborhoods.
My feeling after studying heterophily is that we will see more works that go beyond local neighborhoods and perhaps define neighborhoods not as something given by the graph topology but as something we have to learn. For example, we could define attention from each node to all other nodes in the graph and treat graph distances as additional features. This could be difficult to scale, so sampling methods would probably be needed, but allowing the network to decide which nodes are important for aggregation seems like a better way to go.
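To make that concrete, here is a hypothetical sketch (all names are mine, not from the paper) of attention from every node to every other node, with shortest-path distances entering as learned score biases:

```python
import torch
import torch.nn as nn

class FullGraphAttention(nn.Module):
    """Attention over all node pairs; the graph enters only through a
    learned bias indexed by shortest-path distance."""
    def __init__(self, dim, max_dist=8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.dist_bias = nn.Embedding(max_dist + 1, 1)  # one bias per distance bucket
        self.max_dist = max_dist

    def forward(self, x, dist):
        # x: (n, dim) node features; dist: (n, n) integer shortest-path distances.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = (q @ k.t()) / x.size(-1) ** 0.5  # every node attends to every node
        scores = scores + self.dist_bias(dist.clamp(max=self.max_dist)).squeeze(-1)
        return scores.softmax(dim=-1) @ v
```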
PyTorch Geometric Temporal
PyG-Temporal is an extension of PyG for temporal graphs. It now includes more than 10 GNN models and several datasets. With the world being dynamic, I see more and more applications where a standard GNN wouldn't work and one needs to resort to dynamic GNNs.
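For a taste of the API, a minimal sketch of a recurrent graph convolution on one of the bundled datasets (class names follow the library's documentation, but the exact API may differ across versions):

```python
import torch
from torch_geometric_temporal.dataset import ChickenpoxDatasetLoader
from torch_geometric_temporal.nn.recurrent import GConvGRU
from torch_geometric_temporal.signal import temporal_signal_split

dataset = ChickenpoxDatasetLoader().get_dataset()      # weekly case counts per county
train, test = temporal_signal_split(dataset, train_ratio=0.8)

model = GConvGRU(in_channels=4, out_channels=32, K=2)  # K: Chebyshev filter size
head = torch.nn.Linear(32, 1)

for snapshot in train:  # one graph signal per time step
    h = model(snapshot.x, snapshot.edge_index, snapshot.edge_attr)
    y_hat = head(torch.relu(h)).squeeze()
    loss = torch.mean((y_hat - snapshot.y) ** 2)
```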
Fresh picks from ArXiv
This week on ArXiv: a comparison of using features vs. edge information, SOTA for heterogeneous graphs, and surveys on efficient training of GNNs 🏃
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNNs
* Spatio-temporal Modeling for Large-scale Vehicular Networks Using Graph Convolutional Networks
* R-GSN: The Relation-based Graph Similar Network for Heterogeneous Graph
* On the Equivalence Between Temporal and Static Graph Representations for Observational Predictions with Bruno Ribeiro
* Deep Graph Matching under Quadratic Constraint
* Should Graph Neural Networks Use Features, Edges, Or Both?
* Deep graph convolution neural network with non-negative matrix factorization for community discovery
* Graph Neural Networks Inspired by Classical Iterative Algorithms
* Size-Invariant Graph Representations for Graph Classification Extrapolations
Survey
* Sampling methods for efficient training of graph convolutional networks: A survey
* Graph Metrics for Internet Robustness -- A Survey
* A Taxonomy for Classification and Comparison of Dataflows for GNN Accelerators
Large-scale graph machine learning challenge (OGB-LSC) at KDD Cup 2021
OGB-LSC is a collection of three graph datasets—PCQM4M-LSC, WikiKG90M-LSC, and MAG240M-LSC—that are orders of magnitude larger than existing ones. The three datasets correspond to graph regression, link prediction, and node classification tasks, respectively. The goal of OGB-LSC is to empower the community to discover innovative solutions for large-scale graph ML.
The competition runs from March 15th, 2021 until June 8th, 2021, and the winners will be notified by mid-June 2021. The winners will be honored at the KDD 2021 opening ceremony and will present their solutions at the KDD Cup workshop during the conference.
The graphs are indeed big, with the largest dataset taking 168 GB, and it will be interesting to see which approaches can be used to solve these problems.
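Getting started is a pip install away; a sketch of loading the graph-regression dataset (class names per the ogb.lsc docs at the time of writing; verify against your ogb version):

```python
# pip install ogb  -- the LSC datasets live in the ogb.lsc module.
from ogb.lsc import PCQM4MDataset

dataset = PCQM4MDataset(root="data/")  # triggers a large download on first use
split_idx = dataset.get_idx_split()    # train / valid / test indices
smiles, homolumo_gap = dataset[0]      # molecule as a SMILES string, scalar target
```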
A Tale of Three Implicit Planners and the XLVIN agent
A video presentation by Petar Veličković about implicit planners, which can be seen as a middle ground between model-based and model-free approaches to RL planning problems. The talk covers three popular implicit planners: VIN, ATreeC, and XLVIN. All three build on the recently popularised idea of algorithmically aligning to a planning algorithm, but with different realisations.
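For reference, the planner these architectures align to is value iteration; a minimal tabular version (with hypothetical MDP arrays) looks like this, and VIN realises essentially this backup with a convolution for the expectation and a channel-wise max for the maximisation:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, n_iters=100):
    # P: (A, S, S) transition probabilities; R: (S, A) rewards.
    V = np.zeros(P.shape[1])
    for _ in range(n_iters):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)  # expected return per (state, action)
        V = Q.max(axis=1)                             # greedy backup
    return V
```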
Graph Machine Learning research groups: Yaron Lipman
I do a series of posts on the groups in graph research; the previous post is here. The 25th is Yaron Lipman, a professor in Israel who has co-authored many papers on equivariance and the expressive power of GNNs.
Yaron Lipman (~1980)
- Affiliation: Weizmann Institute of Science
- Education: Ph.D. at Tel Aviv University in 2008 (advisors: David Levin and Daniel Cohen-Or)
- h-index 41
- Interests: geometric deep learning, meshes, 3D point clouds, equivariant networks
GML Express: large-scale challenge, top papers in AI, and implicit planners.
Another issue of my newsletter. I have finally resolved my struggle over what this newsletter should be about: news or insights. GML Express will cover news (which you mostly get from this channel anyway), and GML In-Depth will cover my insights.
In this GML Express you will find a bunch of learning materials, recent video presentations, blog posts, and announcements.
Fresh picks from ArXiv
This week on ArXiv: large-scale node prediction competition, GNNs for 3D objects, and improved performance for imbalanced classification 👯♀️
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNNs
* OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs with Jure Leskovec
* Concentric Spherical GNN for 3D Representation Learning with Le Song
* GraphSMOTE: Imbalanced Node Classification on Graphs with Graph Neural Networks
* Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification
Applications
* A graph theoretical approach to the firebreak locating problem
* Exploiting Isomorphic Subgraphs in SAT (Long version)
* ChronoR: Rotation Based Temporal Knowledge Graph Embedding
* Code Completion by Modeling Flattened Abstract Syntax Trees as Graphs
* NetVec: A Scalable Hypergraph Embedding System
Software
* GNNerator: A Hardware/Software Framework for Accelerating Graph Neural Networks
* Characterizing the Communication Requirements of GNN Accelerators: A Model-Based Approach
DIG: Dive into Graphs library
A new Python library, DIG (and its paper), built on PyTorch for several graph tasks (a sketch of the package layout follows the list):
* Graph Generation
* Self-supervised Learning on Graphs
* Explainability of Graph Neural Networks
* Deep Learning on 3D Graphs
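A heavily hedged sketch of how the library is organized, with one subpackage per task (module names as described in the paper; the pip package is dive-into-graphs, and the exact API may differ by version):

```python
# Hypothetical quick tour of DIG's top-level modules; check the repo
# for the actual classes each subpackage exposes.
import dig.ggraph       # graph generation
import dig.sslgraph     # self-supervised learning on graphs
import dig.xgraph       # explainability of GNNs
import dig.threedgraph  # deep learning on 3D graphs
```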
GNN User Group: meeting 3
The third meeting of the GNN user group will include talks from Marinka Zitnik, Kexin Huang, and Xavier Bresson on GNNs for therapeutics and combinatorial optimization. It takes place tomorrow, March 25th.
Model distillation for GNNs
Model distillation is an approach to training a small neural network, called the student, given a large pretrained neural network, called the teacher. The motivation is to reduce the number of parameters of your production model as much as possible while keeping the quality of your solution. One of the first approaches was by Geoffrey Hinton, Oriol Vinyals, and Jeff Dean (what a combo), who proposed training the student network on the logits of the teacher network. Since then, a huge number of losses have appeared that attempt to improve the student's performance, but the original approach by Hinton et al. still works reasonably well. A good survey is this recent one.
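For concreteness, a minimal sketch of that original logit-matching loss: a temperature-scaled KL term against the teacher plus ordinary cross-entropy on the labels (the hyperparameter values here are illustrative):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft part: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude stays comparable across temperatures
    # Hard part: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```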
Surprisingly, there were not many papers on model distillation for GNNs. Here are a few examples:
* Reliable Data Distillation on Graph Convolutional Network SIGMOD 2020
* Distilling Knowledge from Graph Convolutional Networks CVPR 2020
* Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework WWW 21
But these approaches were not convincing enough for me to say that knowledge distillation is solved for GNNs, so I'd say it's still an open research question. I have also tried training an MLP on GNN logits to see whether we can replace the GNN with an MLP at inference time; you do get an uplift over a vanilla MLP trained on the targets, but the performance is still not as good as the GNN's. A good example of significantly reducing the number of parameters of GNNs is the recent work on label propagation (LP) for node classification: LP has zero parameters, and with C&S it gains some MLP parameters, but far fewer than GNNs.
TensorFlow GNN libraries
If I missed any TensorFlow libraries, please let me know and I will update the list (a small Spektral example is sketched after the list).
* tf_geometric (paper)
* tf2-gnn (microsoft)
* tf-gnn-samples (microsoft)
* Spektral (documentation)
* graph_nets (deepmind)
* gnn (documentation)
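For a flavor of the TensorFlow side, a minimal two-layer GCN in Spektral (layer names per the Spektral 1.x docs, where the layer was renamed from GraphConv to GCNConv; a sketch, not a full training script):

```python
import tensorflow as tf
from spektral.layers import GCNConv

class GCN(tf.keras.Model):
    def __init__(self, n_classes, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(hidden, activation="relu")
        self.conv2 = GCNConv(n_classes, activation="softmax")

    def call(self, inputs):
        x, a = inputs  # node features and the normalized adjacency matrix
        x = self.conv1([x, a])
        return self.conv2([x, a])
```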
GNN Explainer UI
An awesome tool that provides a user interface for visualizing edge attributions of trained GNN models and comparing different explanation methods. An explanation method takes as input a GNN model and a single sample graph, and outputs attribution values for all edges in the graph. Each explanation method uses a different approach for calculating how important each edge is, and it is important to evaluate the explanation methods themselves as well.
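The simplest such explanation method is a pure gradient baseline; a sketch of the idea (assuming, hypothetically, a model whose forward pass accepts per-edge weights):

```python
import torch

def edge_saliency(model, x, edge_index):
    # One attribution value per edge: the gradient of the prediction
    # with respect to an all-ones edge-weight vector.
    edge_weight = torch.ones(edge_index.size(1), requires_grad=True)
    out = model(x, edge_index, edge_weight)  # hypothetical edge_weight argument
    out.sum().backward()
    return edge_weight.grad.abs()
```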
Fresh picks from ArXiv
This week on ArXiv: tricks to improve GNNs, unlearning problem on graphs, and cheating on TOEFL with GNNs ✍️
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNNs
* A nonlinear diffusion method for semi-supervised learning on hypergraphs with Austin R. Benson
* Bag of Tricks of Semi-Supervised Classification with Graph Neural Networks
* Self-supervised Graph Neural Networks without explicit negative sampling
* Graph Unlearning
* InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence Insertion Problem?
* Beyond permutation equivariance in graph networks
* Knowledge-aware Contrastive Molecular Graph Learning
* Autism Spectrum Disorder Screening Using Discriminative Brain Sub-Networks: An Entropic Approach
Survey
* A Comprehensive Survey on Knowledge Graph Entity Alignment via Representation Learning
Video and slides: GNN User Group meeting 3
In the third meeting of the GNN user group, there were two talks:
* Therapeutics Data Commons: Machine Learning Datasets and Tasks for Therapeutics by Marinka Zitnik and Kexin Huang (Harvard)
* The Transformer Network for TSP by Xavier Bresson (NTU)
Slides are available in their Slack channel.
PyTorch Geometric tutorial
Awesome tutorials on how to program your GNNs with PyTorch Geometric. I often say that the best way to learn about GNNs is through coding, so if you are new, I would definitely recommend checking them out. There are also upcoming sessions soon if you want to follow along live.
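The session on (variational) graph autoencoders, for instance, follows Kipf & Welling's paper (arXiv:1611.07308); a minimal non-variational sketch with PyG's GAE wrapper looks roughly like this:

```python
import torch
from torch_geometric.nn import GAE, GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_dim, hidden=64, latent=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, latent)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

model = GAE(Encoder(in_dim=1433))         # e.g. Cora's feature dimension
# z = model.encode(x, edge_index)
# loss = model.recon_loss(z, edge_index)  # reconstruct edges from node embeddings
```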
Graph Machine Learning research groups: Mingyuan Zhou
I do a series of posts on the groups in graph research; the previous post is here. The 26th is Mingyuan Zhou, a professor at the University of Texas, who has been working on statistical aspects of GNNs.
Mingyuan Zhou (~1985)
- Affiliation: The University of Texas at Austin
- Education: Ph.D. at Duke University in 2013 (advisor: Lawrence Carin)
- h-index 30
- Interests: hyperbolic graph embeddings, Bayesian GNNs, graph auto-encoders
Insights from Physics on Graphs and Relational Bias
A great lecture with lots of insights by Kyle Cranmer on the inductive biases involved in physics, given as a guest lecture in the 2021 Graph Deep Learning course at Università della Svizzera italiana. Applying GNNs to natural-science problems is one of the biggest trends in ML, and it's exciting to see more and more cool results in this area.