Graph Machine Learning
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out @gimmeblues, @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
How random are peer reviews?

A new paper on the quality of reviews at peer-review conferences analyzed submissions to ICLR's OpenReview over the last 4 years. Here is what I found most interesting.

* If an accepted paper were reviewed anew, would it be accepted a second time?

This is called reproducibility of reviews. In 2020 it is 66%, which means that 1 out of 3 times you would get a reject even if your paper deserves acceptance. What's more, even if you increase the number of reviewers, reproducibility stays around the same, ~70%.
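To build intuition for what a 66-70% number means, here is a toy Monte Carlo sketch (my own illustration, not the paper's methodology): every paper gets a latent quality, every review adds independent noise of comparable magnitude, and the top 25% of submissions by mean score are accepted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: latent paper quality plus i.i.d. review noise,
# accept the top 25% of submissions by mean review score.
n_papers, n_reviews, accept_rate = 5000, 3, 0.25
quality = rng.normal(size=n_papers)

def accept():
    scores = quality[:, None] + rng.normal(size=(n_papers, n_reviews))
    mean = scores.mean(axis=1)
    return mean >= np.quantile(mean, 1 - accept_rate)

first, second = accept(), accept()
# Of the papers accepted the first time, how many survive a fresh review?
print("reproducibility:", (first & second).sum() / first.sum())
```

With review noise comparable to the quality signal, this lands around 0.65-0.7, which is the regime the paper reports.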

* Does the final paper score correlate with how many citations the paper gets?

Yes, higher-ranked papers get more citations. What's more interesting is how many more citations a paper gets just from being presented at the conference: the correlation doubles because of the exposure at the venue.

* Is there a bias from affiliation, author reputation, or ArXiv availability in reviewers' scores?

Yes, but it is very small. For example, papers from Cornell get a 0.58 boost in score (out of 10). For Google and DeepMind there is no difference between their scores and acceptance rates compared to other papers. The same can be said about a paper's ArXiv availability and the authors' h-index.
NeurIPS 2020. Comprehensive analysis of authors, organizations, and countries.

This post analyzes which authors and organizations are publishing at NeurIPS 2020 this December, similar to the analysis I did for ICML 2020.

In addition to general insights (which I also found interesting), there are two collaboration graphs that I created: one between affiliations and one between authors. What's exciting is that these two networks are very different from each other; the graph of authors is actually quite disconnected, with lots of small groups of people (of size ~50) and a large diameter (25 hops). It could be interesting in the future to understand why this is the case and what these small groups of people are.
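If you want to run the same measurements on your own collaboration data, here is a minimal networkx sketch; the `coauthors` edge list is a hypothetical stand-in for the real author pairs.

```python
import networkx as nx

# Hypothetical edge list of coauthor pairs; replace with real data.
coauthors = [("A", "B"), ("B", "C"), ("D", "E")]
G = nx.Graph(coauthors)

components = sorted(nx.connected_components(G), key=len, reverse=True)
print("components:", len(components))
print("largest component size:", len(components[0]))

# Diameter is only defined for a connected graph, so measure it
# on the largest connected component.
print("diameter:", nx.diameter(G.subgraph(components[0])))
```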
GML YouTube Videos

I was pleasantly surprised to see that there is a YouTube playlist by Zak Jost that covers some aspects of GNNs, including an interview with DeepMind authors on using GNNs for physics.
CIKM 2020 stats

Dates: Oct 19-23
Where: Online
Price: €70

Graph papers can be found at Paper Digest.

• 970/397 full/short submissions (vs 1030/470 in 2019)
• 193/103 accepted (vs 202/107 in 2019)
• 20%/26% acceptance rate (vs 19%/21% in 2019)
• ~97 total graph papers (20% of total)
Open Catalyst Project

Facebook and CMU launched the Open Catalyst Project, which contains the largest dataset for quantum chemistry predictions. The goal is to predict atomic interactions faster than quantum mechanical simulations (DFT), which can be framed as a graph regression task.
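To make the graph regression framing concrete, here is a hedged PyTorch Geometric sketch (my illustration, not the official Open Catalyst baseline): atoms become nodes with feature vectors, nearby atoms are connected by edges, and the network regresses one scalar (e.g. an energy) per graph.

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class EnergyGNN(torch.nn.Module):
    """Two GCN layers, mean-pool over atoms, one scalar per graph."""
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.out(global_mean_pool(h, batch))  # [num_graphs, 1]

# Toy input: 5 atoms, 4 edges, a single graph (batch of zeros).
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
pred = EnergyGNN()(x, edge_index, torch.zeros(5, dtype=torch.long))
print(pred.shape)  # torch.Size([1, 1])
```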
Graph Machine Learning research groups: Jiliang Tang

I do a series of posts on the groups in graph research; the previous post is here. The 17th is Jiliang Tang, coauthor of the book "Deep Learning on Graphs".

Jiliang Tang (~1974)
- Affiliation: Michigan State University
- Education: Ph.D. at Arizona State University in 2015 (advisor: Huan Liu)
- h-index 56
- Awards: best paper awards at KDD and WSDM; Yahoo! awards; Distinguished Withrow Research Award; NSF CAREER Award
- Interests: graph neural networks, network analysis, anomaly detection on graphs
Genesis Therapeutics — a startup working on GNNs for drug discovery

Launched in November 2019 out of Stanford's Pande Lab, Genesis Therapeutics is researching GNNs and graph generative models for drug discovery. They recently announced a partnership with Genentech, a large biotech company, to test their ML platform for pharma.
Fresh picks from ArXiv
Today at ArXiv: building graphs from pretrained language models, graph information bottleneck, and quantum entanglement ⚛️

If I forgot to mention your paper, please shoot me a message and I will update the post.

Conferences
- XLVIN: eXecuted Latent Value Iteration Nets NeurIPS-DeepRL 2020, with Petar Veličković
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks NeurIPS 2020
- Graph Information Bottleneck NeurIPS 2020, with Jure Leskovec
- Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs NeurIPS 2020, with Jure Leskovec
- Graph Geometry Interaction Learning NeurIPS 2020
- Rethinking pooling in graph neural networks NeurIPS 2020
- Heterogeneous Hypergraph Embedding for Graph Classification WSDM 2021
- Contextual Heterogeneous Graph Network for Human-Object Interaction Detection ECCV-2020

Graphs
- A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing with Ivan Titov
- Graph and graphon neural network stability with Alejandro Ribeiro
- Language Models are Open Knowledge Graphs
- Can entanglement hide behind triangle-free graphs?


Survey
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization
Python Bindings of JGraphT

There are new Python bindings for the popular Java library JGraphT, which is exciting news for those who want efficiency when working with graphs (adding to other recent news such as Nvidia's GPU-accelerated package).

JGraphT is a Java library that contains very efficient and generic graph data structures along with a large collection of state-of-the-art algorithms. What's great is that the Python bindings offer an easy interface on all operating systems (including Windows) without installing a JVM. JGraphT is known for its efficiency, reliability, and large collection of graph algorithms, including PageRank, flows, cuts, vertex covers, colorings, isomorphism checking, and more.
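A minimal usage sketch based on my reading of the jgrapht package docs (treat the exact module paths and signatures as assumptions and double-check the official documentation):

```python
import jgrapht
import jgrapht.algorithms.scoring as scoring

# Undirected, unweighted graph; vertices are created by the
# backend and returned as integer ids.
g = jgrapht.create_graph(directed=False, weighted=False)
v = [g.add_vertex() for _ in range(4)]

# A 4-cycle.
g.add_edge(v[0], v[1])
g.add_edge(v[1], v[2])
g.add_edge(v[2], v[3])
g.add_edge(v[3], v[0])

# PageRank is one of the bundled scoring algorithms; by symmetry
# every vertex of the cycle should score 0.25.
ranks = scoring.pagerank(g)
for u in v:
    print(u, round(ranks[u], 3))
```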