Graph Machine Learning
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out to @gimmeblues or @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
Scene Graph Generation by Iterative Message Passing

There are numerous applications of graphs in computer vision, which usually boil down to reducing an image to a sparse graph. This has found applications in image captioning, human action recognition, trajectory prediction, and transferring knowledge to new datasets, among others (see the Applications section of this survey). One interesting recent advancement is the CVPR 2017 paper from Fei-Fei Li's group, Scene Graph Generation by Iterative Message Passing, where the authors propose an image encoder that produces, essentially, a knowledge graph of the objects in the image and their relationships, which can then be used for downstream tasks.
Applications of graphs in NLP

In addition to computer vision, there are many applications of graphs in NLP. For example, knowledge graphs have been used extensively in research to bring factual knowledge to dialogue and question-answering systems, as well as in approaches to commonsense reasoning and named entity recognition (see these conference reviews of the NLP+KG domain by Michael Galkin). And sometimes you even get more technical papers in this domain, like the ACL '19 paper Relation Embedding with Dihedral Group in Knowledge Graph (see this video), where the authors model the composition of relations with group theory.
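For intuition on the group-theoretic part, here is a minimal sketch (my own illustration, not the paper's code): elements of the dihedral group D_4 can be represented as 2x2 rotation and reflection matrices, so composing two relations corresponds to multiplying their matrices. As far as I understand, the paper builds relation embeddings from such block elements, so that relation composition in the knowledge graph corresponds to the group operation.

```python
# Minimal sketch: relations as elements of the dihedral group D_4,
# represented by 2x2 matrices. Composing relations then corresponds
# to matrix multiplication. (Illustration only, not the paper's code.)
import numpy as np

def rotation(k):
    """Rotation by k * 90 degrees, an element of D_4."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(k):
    """Reflection across a line at angle k * 45 degrees, an element of D_4."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Composition: rotating by 90 degrees twice equals rotating by 180 degrees.
r90, r180 = rotation(1), rotation(2)
print(np.allclose(r90 @ r90, r180))   # True

# Reflections are self-inverse, which is handy for symmetric relations.
s = reflection(1)
print(np.allclose(s @ s, np.eye(2)))  # True
```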
The forbidden sidetrip by László Babai.

László Babai, a top mathematician in the field of graph isomorphism, turns out to have a zingy autobiographical story called "The forbidden sidetrip" about his trip to Minsk, Moscow, and Leningrad in 1978, when he was 28. It's just fun on its own, but it also contains lots of names of Soviet mathematicians and glimpses of the places where, and the ways in which, mathematics was done in the late USSR. To my surprise, he knows (knew?) Russian, though he is originally Hungarian, well enough to follow lectures at the universities.

There are stories about the anti-semitism of another talented Russian mathematician, Pontryagin, which Babai despises; about how he would join Soviet queues before knowing what was being sold; about how you could give a "chervonets sverhu" (i.e., grease a palm) to get a ticket when they were sold out; and lots of other Soviet-era stuff that I really love asking the older generation about. But above all, it's just interesting to look at the young, not-yet-famous guy who would later climb to the Olympus of mathematics. I wish there were a movie about it.
How powerful are graph neural networks? Part II

This is the second insight I recently got from the paper How Powerful are Graph Neural Networks?. It's about the initial node features. In the paper they write (Lemma 5):

"Assume \mathcal{X} is countable. There exists a function f : \mathcal{X} \to \mathbb{R}^n so that h(X) = \sum_{x \in X} f(x) is unique for each multiset X \subset \mathcal{X} of bounded size."

The first \mathcal{X} here is the set of all input features. What this lemma says is that your input features need to be countable. Countable means, for example, integer, rational, or categorical, but not arbitrary real numbers. So this condition "prohibits" "continuous" features such as real-valued node attributes. If your set of features is uncountable, the sum is no longer guaranteed to be injective, and hence the GNN becomes less powerful than the 1-WL test. That's why one-hot encoding your features makes a GNN theoretically more powerful.
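Here is a tiny sketch of why this matters (my illustration, not the paper's code): with real-valued scalar features, two different neighbor multisets can produce the same sum, while sums of one-hot vectors form a histogram that identifies the multiset uniquely.

```python
# Sum aggregation over raw real-valued features can collide, while sums
# of one-hot (categorical) features identify the multiset uniquely.
import numpy as np

# Two different neighbor multisets with real-valued scalar features:
a = [1.0, 3.0]
b = [2.0, 2.0]
print(sum(a) == sum(b))  # True -> the sum is not injective here

# The same multisets with categorical features, one-hot encoded over {0,...,3}:
def one_hot(i, n=4):
    v = np.zeros(n)
    v[i] = 1.0
    return v

ha = one_hot(1) + one_hot(3)   # multiset {1, 3}
hb = one_hot(2) + one_hot(2)   # multiset {2, 2}
print(np.array_equal(ha, hb))  # False -> the category histogram distinguishes them
```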
GraphChallenge

Challenges such as YOHO, MNIST, HPC Challenge, ImageNet, and VAST have played important roles in driving progress in fields as diverse as machine learning, high performance computing, and visual analytics.

GraphChallenge encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field.
Deep Weisfeiler Leman

Surprisingly, this paper is not about applying deep neural networks to the Weisfeiler-Leman algorithm. It's quite technical but well written, and I could understand the main results and ideas of the paper.

They propose an extension of the k-WL algorithm where, instead of adding all k-tuples of the input graph at each iteration, only a subset of k-tuples is added. Because this reduces the size of the graph compared to the original k-WL algorithm, the resulting DeepWL algorithm is more efficient, and they show that it can distinguish in polynomial time some graphs that the original k-WL algorithm cannot (in particular, CFI graphs).
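As background, here is a minimal sketch of the basic 1-WL color refinement that the whole WL hierarchy builds on (my own illustration, not the DeepWL algorithm): iteratively relabel each node by its current color together with the multiset of its neighbors' colors, until the coloring stabilizes.

```python
# Minimal sketch of 1-WL color refinement (background for the WL hierarchy,
# not DeepWL itself). Graphs with different stable color histograms are
# certainly non-isomorphic; equal histograms are inconclusive.
def wl_colors(adj, num_iters=None):
    """adj: dict mapping node -> list of neighbor nodes."""
    colors = {v: 0 for v in adj}                 # uniform initial coloring
    num_iters = num_iters or len(adj)
    for _ in range(num_iters):
        # Signature = own color + sorted multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel signatures with fresh integer colors.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:                 # coloring is stable
            break
        colors = new_colors
    return colors

# Triangle vs. path on 3 nodes: different color histograms -> not isomorphic.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(sorted(wl_colors(triangle).values()))  # [0, 0, 0]
print(sorted(wl_colors(path).values()))      # [0, 0, 1] -> differs
```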
Limitations of GNN

A compilation of the insights I recently gained from studies on the theoretical properties of GNNs.
Graph Machine Learning research groups: Le Song

I'm doing a series of posts on research groups in graph machine learning. The second is about Le Song.
His group had 7 accepted papers at ICLR 2020, the second-best result after Sergey Levine.

Le Song (~1981)
- Affiliation: Georgia Institute of Technology;
- Education: Ph.D. at U. of Sydney in 2008 (supervised by Alex Smola);
- h-index: 59;
- Awards: best papers at ICML, NeurIPS, AISTATS;
- Interests: generative and adversarial graph models, social network analysis, diffusion models.
Chess is the Drosophila of Artificial Intelligence

While writing another post on the life of Andrei Leman, I found that John McCarthy, the creator of Lisp, coiner of the term Artificial Intelligence, and one of the first and most influential programmers, made an interesting remark on the importance of long-term research. Apparently the saying "Chess is the Drosophila of Artificial Intelligence" is attributed to a legendary Soviet scientist, Alexander Kronrod, who arguably founded the first AI lab in the USSR. Back in the '70s there were huge ICL machines (analogues of IBM machines), people used punch cards to provide input, and computations lasted for days, with no way to run programs in parallel. Yet Kronrod's lab, which included Weisfeiler, Leman, and many other famous computer scientists and engineers, worked among other things on chess programs; in 1974 their program, Kaissa, won the first international chess tournament among machines.
Fresh picks from ArXiv
This week's picks range from applications of graphs in social studies to astrophysics to robotics 🤖 Also check out the latest survey on deep learning's impact on scientific discovery by Eric Schmidt, ex-Google CEO 🎓 And lots of other cool graph papers from last week.

Applications
• Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers — combinatorial optimization
• A Heterogeneous Dynamical Graph Neural Networks Approach to Quantify Scientific Impact — scientific impact
• Investigating Software Usage in the Social Sciences: A Knowledge Graph Approach — social sciences
• Graph Neural Networks for Particle Reconstruction in High Energy Physics detectors — physics
• Representing Multi-Robot Structure through Multimodal Graph Embedding for the Selection of Robot Teams — robotics
• Identification of Patterns in Cosmic-Ray Arrival Directions using Dynamic Graph Convolutional Neural Networks — astrophysics

Survey
• word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings of Structured Data by Martin Grohe
• A Survey of Deep Learning for Scientific Discovery by Eric Schmidt (ex-Google CEO)
• COVID-19 and Computer Audition: An Overview on What Speech & Sound Analysis Could Contribute in the SARS-CoV-2 Corona Crisis

CVPR
• Distillating Knowledge from Graph Convolutional Networks

GNN
• Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks

Graph Theory
• The 1-2-3 Conjecture holds for graphs with large enough minimum degree
• Bounds for the rainbow disconnection number of graphs
CS 520: Knowledge Graphs

Today a Stanford course on knowledge graphs begins. It's available to everyone for free via Zoom (although there is no guarantee that the videos will be available afterwards).
CS 520: Knowledge Graphs

There is no promise that materials will be posted afterwards, but here is a page where presentations will be stored (so far there is a single presentation, from Michael Galkin).
Deep Graph Library v0.4.3

In version 0.4.3, the popular GNN library DGL adds support for TensorFlow (originally it supported only PyTorch), spin-off packages for knowledge graphs and life sciences (DGL-KE and DGL-LifeSci), and more flexible distributed training.

For me, it's really interesting to see the big speedup in training time compared to the PyTorch-BigGraph library; this looks great.
Knowledge Graph course 2019

Michael Galkin taught a course on knowledge graphs in 2019, and the course presentations are available (though mostly in Russian) if you are interested.
Network Epidemiology Online Workshop Series

In April, the University of Maryland is organizing seminars, tutorials, and discussion groups on how network science can help fight the spread of the virus. It's free, although the number of participants is limited (~150). There are two tracks: just listening to the tutorials, or additionally participating in the discussion groups. Lectures are via Zoom. Application deadline: tomorrow, April 7.
CS 520 Knowledge Graphs Video Materials

It seems that they will make the lectures available on YouTube 👍
Fresh picks from ArXiv
This week is full of CVPR papers with graphs, surveys on recommendation, and a historical note on the discovery of a TSP algorithm in the USSR 👨‍🚀

CVPR
• HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation
• Graph Structured Network for Image-Text Matching
• Spatio-Temporal Graph for Video Captioning with Knowledge Distillation
• Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text
• Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition
• Deep Homography Estimation for Dynamic Scenes
• Iterative Context-Aware Graph Inference for Visual Dialog
• Geometrically Principled Connections in Graph Neural Networks — with Michael Bronstein
• LiDAR-based Online 3D Video Object Detection with Graph-based Message Passing and Spatiotemporal Transformer Attention

WebConf
• Graph Enhanced Representation Learning for News Recommendation

Survey
• Deep Learning on Knowledge Graph for Recommender System: A Survey
• Heterogeneous Network Representation Learning: Survey, Benchmark, Evaluation, and Beyond — essentially a survey on KG
• A historical note on the 3/2-approximation algorithm for the metric traveling salesman problem — shows that the Christofides algorithm for the TSP was independently discovered in the USSR around the same time
• A Survey on Conversational Recommender Systems

Graph Theory
• Longest paths in random hypergraphs
• A Fast Algorithm for the Product Structure of Planar Graphs
• Counterexamples to "A Conjecture on Induced Subgraphs of Cayley Graphs"
RL for TSP

There is already a rich literature on applying RL methods to the Traveling Salesman Problem (TSP). In one line of research, the solution is built incrementally, one node at a time; this is the approach of, for example, the popular paper Attention, Learn to Solve Routing Problems!. Recently another approach has appeared: start with some solution to the TSP and improve it by swapping nodes. For example, Learning 2-opt Heuristics for the Traveling Salesman Problem via Deep Reinforcement Learning uses a policy gradient method and shows that it significantly outperforms incremental approaches.
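To make the 2-opt move concrete, here is a minimal sketch of plain (non-learned) 2-opt local search (my own illustration of the move these RL methods learn to apply, not the paper's method): remove two edges of the tour and reconnect it by reversing the segment in between, keeping the move whenever it shortens the tour.

```python
# Minimal sketch of 2-opt local search for TSP (illustration only).
import itertools, math, random

def tour_length(tour, pts):
    """Total length of the closed tour over points pts."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly apply improving 2-opt moves until none remain."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            if j - i < 2:            # adjacent edges: nothing to reverse
                continue
            # The 2-opt move: reverse the segment tour[i+1 .. j].
            candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, pts) < tour_length(tour, pts):
                tour, improved = candidate, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20)]
tour = list(range(len(pts)))
print(tour_length(tour, pts), tour_length(two_opt(tour, pts), pts))
```

The RL papers differ in who picks the move: here an exhaustive scan chooses the swap, whereas a learned policy selects which pair of edges to reconnect at each step.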