Fresh picks from ArXiv
This week on ArXiv: a self-supervised approach without negatives, a review of generative models, and semantic search at Alibaba 👞
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNNs
* Neural message passing for joint paratope-epitope prediction with Petar Veličković
* Graph Infomax Adversarial Learning for Treatment Effect Estimation with Networked Observational Data KDD 2021
* GraphMI: Extracting Private Graph Data from Graph Neural Networks IJCAI 2021
* Graph Barlow Twins: A self-supervised representation learning framework for graphs
* Motif Prediction with Graph Neural Networks
* SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
Algorithms
* AliCG: Fine-grained and Evolvable Conceptual Graph Construction for Semantic Search at Alibaba KDD 2021
* Stochastic Iterative Graph Matching ICML 2021
* Convergent Graph Solvers
Survey
* Evaluation Metrics for Graph Generative Models: Problems, Pitfalls, and Practical Solutions with Karsten Borgwardt
* Laplacian-Based Dimensionality Reduction Including Spectral Clustering, Laplacian Eigenmap, Locality Preserving Projection, Graph Embedding, and Diffusion Map: Tutorial and Survey
* Graph-based Deep Learning for Communication Networks: A Survey
Graph Neural Networking Challenge 2021
An interesting competition, organized by the Technical University of Catalonia (UPC) and the ITU (bnn.upc.edu), about building GNNs to predict source-destination routing time. The goal is to test the generalization abilities of GNNs: training on small graphs and testing on much larger ones.
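To make that evaluation protocol concrete, here is a minimal sketch of the train-small/test-large setup (my own illustration with synthetic data, not the challenge's starter kit; all names and sizes are made up), using PyTorch Geometric:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.utils import erdos_renyi_graph

# A two-layer GCN regressor; message passing is size-agnostic, so the same
# weights can be applied to graphs of any size.
class GCN(torch.nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.conv1 = GCNConv(dim, dim)
        self.conv2 = GCNConv(dim, 1)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# Train on small random graphs (50 nodes) with synthetic per-node targets
# standing in for routing delays.
for _ in range(100):
    edge_index = erdos_renyi_graph(50, 0.1)
    x, y = torch.randn(50, 16), torch.randn(50, 1)
    opt.zero_grad()
    F.mse_loss(model(x, edge_index), y).backward()
    opt.step()

# Evaluate on a much larger graph to probe generalization.
big_edges = erdos_renyi_graph(5000, 0.001)
print(model(torch.randn(5000, 16), big_edges).shape)  # torch.Size([5000, 1])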
Udemy Graph Neural Network course
An online course on Udemy, "From Graph Representation Learning to Graph Neural Network", that covers the basics of representation learning on graphs (e.g., DeepWalk, node2vec) and popular GNN architectures, plus some PyG implementations.
Deep Learning on Graphs for Natural Language Processing
An interesting tutorial at NAACL 2021 about applications of graph models to NLP tasks such as text classification, semantic parsing, machine translation, and more. It's based on the Graph4NLP library (see the graph4ai/graph4nlp_demo repo), and the slides are available here.
Graphs at ICLR 2021
A very good digest by Daniele Paliotta of a few graph papers at ICLR 2021. It covers new GNNs that tackle over-smoothing, over-squashing, heterophily, and attention problems.
PyTorch-Geometric Tutorial Talk
Today I will speak about our ICLR work "Boost then Convolve: Gradient Boosting Meets Graph Neural Networks". If you want to learn more about how GBDTs and GNNs work, and how they can be successfully applied to node prediction tasks, please join here at 15:00 (Paris time).
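For a rough idea of how the two model families can be combined, here is a toy sketch (my own illustration, not the paper's BGNN implementation): a GBDT is fit on raw node features, its predicted class probabilities are appended to those features, and a single neighborhood-averaging "convolution" mixes them over the graph.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 8))                    # node features
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy binary labels
A = (rng.random((n, n)) < 0.05).astype(float)  # random adjacency
A = np.maximum(A, A.T)                         # symmetrize
np.fill_diagonal(A, 1.0)                       # add self-loops

# Step 1: boost. The GBDT's probabilities become extra node features.
gbdt = GradientBoostingClassifier().fit(X, y)
X_boosted = np.hstack([X, gbdt.predict_proba(X)])

# Step 2: convolve. Mean-aggregate the boosted features over neighbors
# (a stand-in for a trained GNN layer).
H = (A @ X_boosted) / A.sum(axis=1, keepdims=True)
print(H.shape)  # (100, 10)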
GML Express: keynotes at ICLR, topics at ICML 2021, and new GNN tutorials.
The most interesting events in graph ML over the last two months are in the new issue of my graph ML newsletter.
"There are 3 ways to make a living: be first, be smarter, or cheat." Margin Call
Fresh picks from ArXiv
This week on ArXiv: analysis of transformers, resolving scalability issues, and new attacks ⚔️
If I forgot to mention your paper, please shoot me a message and I will update the post.
Embeddings
* Self-supervised Graph-level Representation Learning with Local and Global Structure with Jian Tang
* Do Transformers Really Perform Bad for Graph Representation?
* Order Matters: Probabilistic Modeling of Node Sequence for Graph Generation ICML 2021
* Symmetric Spaces for Graph Embeddings: A Finsler-Riemannian Approach ICML 2021
GNNs
* TDGIA: Effective Injection Attacks on Graph Neural Networks KDD 2021
* Neural Bellman-Ford Networks: A General Graph Neural Network Framework for Link Prediction with Jian Tang
* Is Homophily a Necessity for Graph Neural Networks?
* Learning to Pool in Graph Neural Networks for Extrapolation
* GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings with Jure Leskovec
* Scaling Up Graph Neural Networks Via Graph Coarsening
* Rethinking Graph Transformers with Spectral Attention with William L. Hamilton
* Breaking the Limit of Graph Neural Networks by Improving the Assortativity of Graphs with Local Mixing Patterns
* Breaking the Limits of Message Passing Graph Neural Networks
Survey
* Survey of Image Based Graph Neural Networks
* Graph Neural Networks for Natural Language Processing: A Survey
Deep Learning DIY course
A very good deep learning course by Marc Lelarge that, among other things, covers graph ML: graph embeddings, signal processing, and GNNs. It comes with videos, slides, notebooks, and assignments.
Dynamic GNNs videos
A new YouTube channel, "Deep learning with dynamic graph neural networks", that discusses spatio-temporal and dynamic GNNs in an easy and fun manner.
Results of OGB large-scale challenge
The OGB team announced the results of the KDD Cup 2021 challenge, where teams competed in node classification, triplet prediction, and graph regression tasks. Short summaries are provided for the winning solutions, and it's quite interesting to see the diversity of the proposed methods: some used ensembles of GNNs, some pretrained graph embeddings, some label propagation, among others. Notably, Baidu and DeepMind scored really well on these tasks. Congrats to the winners!
Graphormer - Do Transformers Really Perform Bad for Graph Representation? | Paper Explained
A nice explanation by Aleksa Gordić of the recent paper that shows how enriching node features with some structural information from the graph can help a Transformer model achieve SOTA results on OGB datasets.
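For a flavor of the trick, here is a hedged sketch (my own simplification: the actual Graphormer also adds shortest-path attention biases and edge encodings): a learned degree ("centrality") embedding is added to each node's features before a vanilla Transformer encoder processes the node set as a sequence.

import torch
import torch.nn as nn

num_nodes, dim, max_degree = 50, 16, 32
x = torch.randn(num_nodes, dim)                      # node features
degree = torch.randint(0, max_degree, (num_nodes,))  # node degrees

# Inject structural information: nodes with the same degree share an offset.
centrality = nn.Embedding(max_degree, dim)
x = x + centrality(degree)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
out = encoder(x.unsqueeze(0))  # the node set as one "sequence"
print(out.shape)  # torch.Size([1, 50, 16])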
Graph Neural Networks User Group: June
June's meeting of the GNN user group will include the following talks:
* 4:00 - 4:30 (PST): Binary Graph Neural Networks and Dynamic Graph Models (Mahdi Saleh, Imperial College London)
* 4:30 - 5:00 (PST): Simplifying large-scale visual analysis of tricky data & models with GPUs, graphs, and ML (Leo Meyerovich, Graphistry Inc)
* 5:00 - 5:30 (PST): Open Discussion and Networking
Join this Thursday!
Fresh picks from ArXiv
This week on ArXiv: 1000-layer GNN, solutions to OGB challenge, and theory behind GNN explanations 🤔
If I forgot to mention your paper, please shoot me a message and I will update the post.
Deep GNNs
* Training Graph Neural Networks with 1000 Layers ICML 2021
* Very Deep Graph Neural Networks Via Noise Regularisation with Petar Veličković, Peter Battaglia
Heterophily
* Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs with Danai Koutra
Knowledge graphs
* Query Embedding on Hyper-relational Knowledge Graphs with Mikhail Galkin
OGB-challenge
* Fast Quantum Property Prediction via Deeper 2D and 3D Graph Networks
* First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track
Theory
* Towards a Rigorous Theoretical Analysis and Evaluation of GNN Explanations with Marinka Zitnik
* A unifying point of view on expressive power of GNNs
GNNs
* Stability of Graph Convolutional Neural Networks to Stochastic Perturbations with Alejandro Ribeiro
* TD-GEN: Graph Generation With Tree Decomposition
* Unsupervised Resource Allocation with Graph Neural Networks
* Equivariance-bridged SO(2)-Invariant Representation Learning using Graph Convolutional Network
* GemNet: Universal Directional Graph Neural Networks for Molecules with Stephan Günnemann
* Optimizing Graph Transformer Networks with Graph-based Techniques
Survey
* Systematic comparison of graph embedding methods in practical tasks
* Evaluating Modules in Graph Contrastive Learning
* A Survey on Mining and Analysis of Uncertain Graphs
Open Catalyst Challenge: Using AI to discover catalysts for renewable energy storage
The Open Catalyst Project is an endeavor by Facebook and CMU to predict the energies between molecules and catalysts, with applications to discovering new energy solutions; I wrote about it as one of the coolest applications of GNNs. This year there is a competition for this project (opencatalystproject.org), organized by the same team. Winners will be invited to NeurIPS 2021 to present their solutions.
Transferability of Spectral Graph Convolutional Neural Networks
A talk by Ron Levie (Ludwig Maximilian University of Munich, Germany) about spectral GNNs. As Xavier Bresson said: "His work aims at debunking the misconception that spectral nets are computationally expensive, unstable/do not generalize - which is not true (theoretically & in practice)." Good research for those who love math in the GNN world.
Graph Machine Learning research groups: Johan Ugander
I do a series of posts on groups in graph research; the previous post is here. The 30th is Johan Ugander, a professor at Stanford, who was a postdoc at Microsoft Research Redmond in 2014-2015 and was affiliated with the Facebook Data Science team in 2010-2014.
Johan Ugander (~1986)
- Affiliation: Stanford
- Education: Ph.D. at Cornell in 2014 (advisor: Jon Kleinberg)
- h-index 17
- Interests: social network analysis, algorithms on graphs, clustering
- Awards: Young Investigator Award, best paper awards (WebSci, WSDM, AAAI)
Compositional Tokenization for Knowledge Graphs
This is a guest post by Michael Galkin about their new paper on reducing the memory footprint of existing KG embedding approaches.
Pretty much all KG embedding algorithms are, in fact, shallow embedding algorithms. This means that each node is mapped to a unique vector, and as a basis for all downstream tasks you need to store the whole embedding matrix in memory. Already at the OGB scale (2.5-5M nodes) you'd need 2-10 GB of VRAM for the embeddings alone, not counting forward passes and backprop. The more nodes you have, the bigger the matrix and the more expensive the GPU you need.
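(As a quick sanity check on these numbers: a 5M-node graph with 100-dimensional float32 embeddings needs 5·10⁶ × 100 × 4 bytes ≈ 2 GB for the embedding matrix alone, and at a few hundred dimensions this grows into the 10 GB range.)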
Looking back to 2015, this resembles word2vec and GloVe a lot: huge shallow word vocabularies of 0.5-3M words, where every other word is OOV (out of vocabulary). Then subword units arrived (Byte-Pair Encoding, WordPiece) and dramatically reduced vocab sizes, allowing infinite combinations to be built from a rather small token vocabulary (30-50K in BERT and GPT-2/3). The saved parameters are now better invested into a flurry of Transformer encoders.
If we treat nodes in a graph like "words", what would be their "subword" units? Can we have a similar approach that would let us bootstrap representations of both seen and unseen nodes from the same vocabulary? We tackle those questions in our new work, where we design NodePiece (pun intended), a compositional tokenization scheme for KGs in which tokens are anchor nodes and relation types. Going from shallow to compositional encoding, we reduce embedding matrices 10-1000x and still observe competitive performance. Interestingly, sometimes you don't even need trainable node embeddings to perform well on node classification and relation prediction, i.e., relations around the node are enough!
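Here is a minimal, self-contained sketch of the compositional idea (my own illustration, heavily simplified: the paper tokenizes nodes with nearest anchors, their distances, and the node's relational context, while this toy just hashes each node to k random anchors):

import torch
import torch.nn as nn

num_nodes, num_anchors, k, dim = 10_000, 100, 4, 64

# The only stored table: anchor embeddings, |anchors| << |nodes|.
anchor_emb = nn.Embedding(num_anchors, dim)

# Hypothetical precomputed tokenization: k anchor ids per node
# (NodePiece derives these from the graph; here they are random).
node_to_anchors = torch.randint(0, num_anchors, (num_nodes, k))

# A small encoder composes anchor tokens into a node representation.
encoder = nn.Sequential(
    nn.Linear(k * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
)

def embed(node_ids):
    tokens = anchor_emb(node_to_anchors[node_ids])  # (batch, k, dim)
    return encoder(tokens.flatten(start_dim=1))     # (batch, dim)

print(embed(torch.tensor([0, 42])).shape)  # torch.Size([2, 64])

Instead of 10,000 node vectors, only 100 anchor vectors plus a small encoder are trained, and an unseen node can be embedded as soon as it is tokenized against the same anchors.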
We encourage you to find even more details in the pre-print and the Medium blog, and to try out the code in the GitHub repo.
Graph Neural Networks as Neural Diffusion PDEs
A new post by Michael Bronstein about the connection between GNNs and the differential equations that govern diffusion on graphs. This gives a new mathematical framework for studying different graph architectures, as well as a blueprint for developing new ones.
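The basic identity behind this view, sketched in the simplest linear, fixed-diffusivity case: heat diffusion on a graph with Laplacian L, and its explicit-Euler discretization, which is a fixed linear graph convolution applied repeatedly.

% Diffusion on a graph and its Euler discretization (linear case only):
\frac{\partial x(t)}{\partial t} = -L\,x(t)
\qquad\Longrightarrow\qquad
x^{(k+1)} = x^{(k)} - \tau L\,x^{(k)} = (I - \tau L)\,x^{(k)}
% Making the diffusivity learnable turns each Euler step into a trainable
% message-passing layer, which is the blueprint the post develops.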
Fresh picks from ArXiv
This week on ArXiv: life science package of DGL, efficient models for knowledge graphs, and explanation insights from tabular data 🤓
If I forgot to mention your paper, please shoot me a message and I will update the post.
Software
* DGL-LifeSci: An Open-Source Toolkit for Deep Learning on Graphs in Life Science
Embeddings
* Simple Truncated SVD based Model for Node Classification on Heterophilic Graphs
* Exploring the Representational Power of Graph Autoencoder
* NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs with Mikhail Galkin and William L. Hamilton
* A Deep Latent Space Model for Graph Representation Learning
Explanation
* Towards Automated Evaluation of Explanations in Graph Neural Networks
* Reimagining GNN Explanations with ideas from Tabular Data
Survey
* Graph and hypergraph colouring via nibble methods: A survey