The Easiest Unsolved Problem in Graph Theory
Our new blog post about the reconstruction conjecture, a well-known graph theory problem with 80 years of results but still no proof. I have already written several posts in this channel about it, and to me it is one of the grand challenges in graph theory (along with the graph isomorphism problem). There seems to be steady progress, so I hope to see it resolved during my lifetime. In the meantime, we considered graph families for which the reconstruction conjecture is known to be true and tried to come up with the easiest family that is still unresolved and has very few vertices. The resulting family is a type of bidegreed graphs (close to regular) on 20 vertices, which could probably be verified by computer (though it would take a year or so).
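For a quick sense of the objects involved: the deck of a graph is the multiset of its vertex-deleted subgraphs, and the conjecture states that every graph on at least three vertices is determined, up to isomorphism, by its deck. Below is a small networkx sketch (my own illustration, not from the post) that builds a deck and compares two decks by brute force:

```python
import networkx as nx

def deck(G):
    # One "card" per vertex: the graph with that vertex (and its edges) removed.
    return [G.subgraph(set(G) - {v}).copy() for v in G]

def same_deck(G, H):
    # Check whether two graphs have isomorphic decks (brute force, tiny graphs only).
    cards_H = deck(H)
    for card in deck(G):
        match = next((c for c in cards_H if nx.is_isomorphic(card, c)), None)
        if match is None:
            return False
        cards_H.remove(match)
    return True

# Toy check: the path P4 and the star K1,3 have different decks.
print(same_deck(nx.path_graph(4), nx.star_graph(3)))  # False
```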
Medium
The Easiest Unsolved Problem in Graph Theory
Graph theory has a long history of problems being solved by amateur mathematicians. Do you want to try to become one of them?
The Transformer Network for the Traveling Salesman Problem
(video and slides) Another great tutorial from Xavier Bresson on the traveling salesman problem (TSP) and recent ML approaches to solving it. It gives a nice overview of current solvers such as Concorde and Gurobi and their computational complexity.
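For intuition on why exact solvers matter, here is a tiny sketch (mine, not from the tutorial) comparing a greedy nearest-neighbour tour with the brute-force optimum on a hypothetical toy instance:

```python
import itertools, math, random

# Hypothetical toy instance: random points in the unit square.
random.seed(0)
points = [(random.random(), random.random()) for _ in range(9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    return sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(start=0):
    # Greedy heuristic: always visit the closest unvisited city next.
    unvisited, tour = set(range(len(points))) - {start}, [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Exact optimum by brute force -- O(n!) and only feasible for tiny n,
# which is exactly why solvers like Concorde (and learned heuristics) exist.
best = min(itertools.permutations(range(1, len(points))),
           key=lambda p: tour_length((0,) + p))

print("greedy tour length: ", tour_length(nearest_neighbour()))
print("optimal tour length:", tour_length((0,) + best))
```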
GML Newsletter: Homophily, Heterophily, and Oversmoothing for GNNs
Apparently, the Cora and OGB datasets are mostly assortative, i.e. nodes with the same labels tend to be connected. In many real-world applications this is not the case: nodes from different groups are connected, while connections within groups are sparse. Such datasets are called disassortative graphs.
What has been realized in 2020 and now in 2021 is that typical GNNs like GCN do not work well on disassortative graphs, so several GNN architectures were proposed to achieve good performance on these datasets. Not only do these new GNNs work well on both assortative and disassortative graphs, they also mitigate the problem of oversmoothing, i.e. they make it possible to stack many GNN layers effectively.
In my new email newsletter I discuss this shift from assortative to disassortative GNNs and its relation to oversmoothing. What is interesting is that existing approaches still do not rely explicitly on the labels, but rather learn parameters to account for heterophily. In the future, I think there will be more tricks for integrating target labels directly into the GNN algorithm.
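A common way to quantify where a dataset sits on this spectrum (a standard measure, not something specific to the newsletter) is the edge homophily ratio, i.e. the fraction of edges whose endpoints share a label:

```python
import torch

def edge_homophily(edge_index: torch.Tensor, labels: torch.Tensor) -> float:
    # Fraction of edges whose two endpoints have the same label:
    # close to 1 means assortative (homophilous), close to 0 disassortative.
    src, dst = edge_index  # shape [2, num_edges]
    return (labels[src] == labels[dst]).float().mean().item()

# Hypothetical 4-node graph with two classes.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
labels = torch.tensor([0, 0, 1, 1])
print(edge_homophily(edge_index, labels))  # 0.5
```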
Graph Machine Learning
GML Newsletter: Homophily, Heterophily, and Oversmoothing for GNNs
"Birds of a feather flock together"
Fresh picks from ArXiv
This week on ArXiv: 2 surveys on self-supervised graph learning, fair embeddings, and combined structural and positional node embeddings 🎭
If I forgot to mention your paper, please shoot me a message and I will update the post.
Survey
* Graph Self-Supervised Learning: A Survey with Philip S. Yu
* Graph-based Semi-supervised Learning: A Comprehensive Review
* Meta-Learning with Graph Neural Networks: Methods and Applications
* Benchmarking Graph Neural Networks on Link Prediction
* A Survey of RDF Stores & SPARQL Engines for Querying Knowledge Graphs
Embeddings
* Towards a Unified Framework for Fair and Stable Graph Representation Learning with Marinka Zitnik
* Node Proximity Is All You Need: Unified Structural and Positional Node and Graph Embedding with Danai Koutra
Video and slides: GNN User Group meeting 2
In the second meeting of the GNN user group, there are discussions of the new release of DGL, graph analytics on GPUs, and new approaches for training GNNs, including those for disassortative graphs. Slides can be found in the Slack channel.
YouTube
Graph Neural Networks User Group Meeting on Feb 25, 2021
Logistics and an announcement of the newest release, DGL 0.6.
Our featured speakers are:
4:00 - 4:25 (PST): Gunrock: Graph Analytics on GPU (Dr. John Owens, University of California, Davis)
4:25 - 4:50 (PST): NVIDIA CuGraph - An Open-Source Package for…
Theoretical Foundations of Graph Neural Networks
A video presentation by Petar Veličković, who covers the design, history, and applications of GNNs. Many interesting concepts, such as permutation invariance and equivariance, are discussed. Slides can be found here.
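As a quick refresher on those two concepts (my own sketch, not from the talk): a sum readout is permutation-invariant, while a node-wise transformation is permutation-equivariant.

```python
import torch

x = torch.randn(5, 8)          # 5 nodes, 8 features each
perm = torch.randperm(5)       # a random reordering of the nodes

# Invariance: the sum readout ignores node order entirely.
print(torch.allclose(x.sum(dim=0), x[perm].sum(dim=0)))        # True

# Equivariance: a node-wise map commutes with the reordering.
linear = torch.nn.Linear(8, 4)
print(torch.allclose(linear(x)[perm], linear(x[perm])))        # True
```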
YouTube
Theoretical Foundations of Graph Neural Networks
Deriving graph neural networks (GNNs) from first principles, motivating their use, and explaining how they have emerged along several related research lines.
Computer Laboratory Wednesday Seminar, 17 February 2021
Slide deck: https://petar-v.com/talks/GNN…
Trapped in a Desert Maze: AlphaZero learns to deal with graphs
A fun video and blog post about AlphaZero playing a simple game of node covering on graphs. I wish there were a human interface to play against these creatures.
YouTube
Battle of A.I. wits: 2 bots trapped in a 3D desert maze
This channel documents our research into the behavior of AI agents, or creatures, living on graphs. In this episode, we describe the development of the "Ringo" test scenario. Then we show two sample "games", set in a 3D environment. The environment is a 3D…
Graph Machine Learning research groups: Tyler Derr
I do a series of posts on research groups in graph ML; the previous post is here. The 24th is Tyler Derr, a young professor of graph ML, who proposed signed GNNs for graphs with negative links.
Tyler Derr (~1992)
- Affiliation: Vanderbilt University
- Education: Ph.D. at Michigan State University in 2020 (advisor: Jiliang Tang)
- h-index 10
- Awards: best papers at SDM
- Interests: adversarial attacks, graph neural networks
Telegram
Graph Machine Learning
Graph Machine Learning research groups: Austin R. Benson
I do a series of posts on the groups in graph research, previous post is here. The 23rd is Austin R. Benson, a professor at Cornell, who together with his students recently shook the graph community…
Paper Notes in Notion
Vitaly Kurin found a great format for tracking notes on the papers he reads. These are short and clean digests of papers at the intersection of GNNs and RL, and I would definitely recommend looking them up if you are studying the same papers. You can create the same format in Notion by adding a new page (database -> list) and then clicking the New button and selecting the properties you need.
yobispace on Notion
Paper Notes by Vitaly Kurin | Notion
Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
Fresh picks from ArXiv
This week on ArXiv: unsupervised community detection, reducing variance with sampling, and GNNs for quantum calculations 🧮
If I forgot to mention your paper, please shoot me a message and I will update the post.
Algorithms
* Graph Force Learning
* Recurrent Graph Neural Network Algorithm for Unsupervised Network Community Detection
* ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations
* Autobahn: Automorphism-based Graph Neural Nets
* Graph Convolutional Embeddings for Recommender Systems SIGIR 2021
* On the Importance of Sampling in Learning Graph Convolutional Networks
* Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework
Software
* Implementing graph neural networks with TensorFlow-Keras
* Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture
Survey
* Deep Graph Structure Learning for Robust Representations: A Survey
MLSys 2021 Conference
MLSys is one of the main conferences on real-world applications of ML. Accepted papers for MLSys 2021 are available here. It will also feature a GNN workshop and keynote speakers from NVIDIA, PyTorch, and others. Dates are April 5-9, 2021. Registration is $25 for students.
mlsys.org
MLSys 2021 Schedule
MLSys Website
Top-10 Research Papers in AI
A new blog post about the top-10 most cited papers in AI during the last 5 years. I looked at major AI conferences and journals (excluding CV and NLP conferences).
It was quite refreshing to realize that much of what we use by default today was discovered only within the last few years: things like Adam, Batch Norm, GCNs, etc.
Medium
Top-10 Research Papers in AI
Each year scientists from around the world publish thousands of research papers in AI but only a few of them reach wide audiences and make a global impact in the world. Below are the top-10 most…
Deep Learning and Combinatorial Optimization IPAM Workshop
A great workshop on the intersection of ML, RL, GNNs, and combinatorial optimization. Videos are available. Topics include applications of ML to chip design, TSP, physics, integer programming and more.
IPAM
Deep Learning and Combinatorial Optimization - IPAM
If We Draw Graphs Like This, We Can Change Computers Forever
The title is catchy, but the article is "only" about an improvement for the dynamic planarity testing problem. Planarity testing is a well-studied problem of deciding whether a graph can be drawn without crossing edges, and O(n) algorithms are known. This article, on the other hand, studies the case where edges may be added and removed, and the question is how to keep deciding, after each update, whether the graph can still be drawn without crossings. The results were published at STOC'20.
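For reference, the static version of the problem is already available off the shelf; a minimal networkx sketch (the dynamic setting in the article is about keeping such an answer up to date as edges change):

```python
import networkx as nx

G = nx.complete_graph(4)                       # K4 is planar
is_planar, embedding = nx.check_planarity(G)   # standard planarity test
print(is_planar)                               # True

G.add_edges_from((i, 4) for i in range(4))     # grow it into K5, which is not planar
print(nx.check_planarity(G)[0])                # False
```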
Popular Mechanics
If We Draw Graphs Like This, We Can Change Computers Forever
Engineers could use this breakthrough in graph theory to design wildly efficient quantum computer chips.
A Complete Beginner's Guide to G-Invariant Neural Networks
A tutorial by S. Chandra Mouli and Bruno Ribeiro about G-invariant neural networks, eigenvectors, invariant subspaces, transformation groups, the Reynolds operator, and more. Soon there should be more tutorials on the topic of invariance and linear algebra.
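To make the core idea concrete, here is a tiny sketch (my own illustration, not from the tutorial): the Reynolds operator averages a function over the group, and the result is G-invariant by construction.

```python
import itertools
import torch

def reynolds(f, x):
    # Average f over all permutations of the rows of x (the symmetric group S_n).
    perms = list(itertools.permutations(range(x.shape[0])))
    return sum(f(x[list(p)]) for p in perms) / len(perms)

# A hypothetical non-invariant function: a weighted sum that depends on row order.
weights = torch.arange(1.0, 5.0).unsqueeze(1)   # shape [4, 1]
f = lambda x: (weights * x).sum()

x = torch.randn(4, 3)
perm = torch.randperm(4)
print(torch.allclose(f(x), f(x[perm])))                        # almost always False
print(torch.allclose(reynolds(f, x), reynolds(f, x[perm])))    # True
```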
www.invariances.org
Foundations of Invariant and Equivariant Representation Learning
Graph Transformer: A Generalization of Transformers to Graphs
A blog post by Vijay Prakash Dwivedi discussing their paper A Generalization of Transformer Networks to Graphs with Xavier Bresson at the 2021 AAAI Workshop (DLG-AAAI’21). It looks like a generalization of the GAT network with batch norm and positional encodings; it still aggregates over local neighborhoods, though.
My feeling after studying heterophily is that we will see more works that go beyond local neighborhoods and perhaps define neighborhoods not as something given by the graph topology but as something we have to learn. For example, we could define attention from each node to all other nodes in the graph and treat graph distances as additional features. This could be difficult to scale, so sampling methods would probably be needed, but letting the network decide which nodes are important for aggregation seems like a better way to go.
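A minimal sketch of what I mean (hypothetical, not the architecture from the paper): every node attends to every other node, with attention logits biased by a learned embedding of the shortest-path distance.

```python
import torch
import torch.nn as nn
import networkx as nx

class DistanceBiasedAttention(nn.Module):
    def __init__(self, dim, max_dist=8):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.dist_bias = nn.Embedding(max_dist + 1, 1)   # one learned scalar per distance

    def forward(self, x, dist):              # x: [n, dim], dist: [n, n] integer distances
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        scores = scores + self.dist_bias(dist).squeeze(-1)
        return torch.softmax(scores, dim=-1) @ self.v(x)

# Toy usage: shortest-path distances from networkx, clipped to max_dist.
G = nx.cycle_graph(6)
n = G.number_of_nodes()
dist = torch.tensor([[nx.shortest_path_length(G, i, j) for j in range(n)]
                     for i in range(n)]).clamp(max=8)
out = DistanceBiasedAttention(dim=16)(torch.randn(n, 16), dist)
print(out.shape)  # torch.Size([6, 16])
```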
Medium
Graph Transformer: A Generalization of Transformers to Graphs
We generalize Transformers to arbitrary graphs by extending key design aspects of attention and positional encodings from NLP to graphs.
PyTorch Geometric Temporal
PyG-Temporal is an extension of PyG for temporal graphs. It now includes more than 10 GNN models and several datasets. With the world being dynamic, I see more and more applications where a standard GNN wouldn't work and one needs to resort to dynamic GNNs.
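For a sense of what such models do under the hood, here is a minimal sketch of the recurrent-GNN pattern (the general idea only, written with plain PyG, not PyG-Temporal's actual API): encode each snapshot with a GNN and carry node states through time with a recurrent cell.

```python
import torch
from torch_geometric.nn import GCNConv

class SimpleDynamicGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden_dim)
        self.gru = torch.nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, snapshots):             # list of (x, edge_index), one per time step
        h = None
        for x, edge_index in snapshots:
            z = torch.relu(self.conv(x, edge_index))    # spatial message passing
            h = self.gru(z, h)                          # temporal state update per node
        return h                                        # final node embeddings

# Toy usage: 3 snapshots of a 4-node graph (hypothetical features and edges).
snaps = [(torch.randn(4, 8), torch.tensor([[0, 1, 2], [1, 2, 3]])) for _ in range(3)]
print(SimpleDynamicGNN(8, 16)(snaps).shape)   # torch.Size([4, 16])
```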
Fresh picks from ArXiv
This week on ArXiv: comparison of using feature vs edge information, SOTA for heterogeneous graphs, surveys on efficient training of GNNs 🏃
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNNs
* Spatio-temporal Modeling for Large-scale Vehicular Networks Using Graph Convolutional Networks
* R-GSN: The Relation-based Graph Similar Network for Heterogeneous Graph
* On the Equivalence Between Temporal and Static Graph Representations for Observational Predictions with Bruno Ribeiro
* Deep Graph Matching under Quadratic Constraint
* Should Graph Neural Networks Use Features, Edges, Or Both?
* Deep graph convolution neural network with non-negative matrix factorization for community discovery
* Graph Neural Networks Inspired by Classical Iterative Algorithms
* Size-Invariant Graph Representations for Graph Classification Extrapolations
Survey
* Sampling methods for efficient training of graph convolutional networks: A survey
* Graph Metrics for Internet Robustness -- A Survey
* A Taxonomy for Classification and Comparison of Dataflows for GNN Accelerators
Large-scale graph machine learning challenge (OGB-LSC) at KDD Cup 2021
OGB-LSC is a collection of three graph datasets—PCQM4M-LSC, WikiKG90M-LSC, and MAG240M-LSC—that are orders of magnitude larger than existing ones. The three datasets correspond to link prediction, graph regression, and node classification tasks, respectively. The goal of OGB-LSC is to empower the community to discover innovative solutions for large-scale graph ML.
The competition will be from March 15th, 2021 until June 8th, 2021 and the winners will be notified by mid-June 2021. The winners will be honored at the KDD 2021 opening ceremony and will present their solutions at the KDD Cup workshop during the conference.
The graphs are indeed big, with the largest being 168 GB, and it will be interesting to see what approaches can be used to solve these problems.
Open Graph Benchmark
OGB-LSC @ KDD Cup 2021
Why a Large-Scale Graph ML Competition? Machine Learning (ML) on graphs has attracted immense attention in recent years because of the prevalence of graph-structured data in real-world applications. Modern application domains include web-scale social networks…
A Tale of Three Implicit Planners and the XLVIN agent
A video presentation by Petar Veličković about implicit planners, which can be seen as a middle ground between model-based and model-free approaches to RL planning problems. The talk covers three popular implicit planners: VIN, ATreeC, and XLVIN. All three focus on the recently popularised idea of algorithmically aligning to a planning algorithm, but with different realisations.
YouTube
Petar Veličković - A Tale of Three Implicit Planners and the XLVIN agent
Deep reinforcement learning (RL) is one of the most active topics in today's machine learning landscape. While it offers a remarkably powerful and general blueprint for solving generic problems, it is usually data-hungry, often requiring observations of millions…