Graph Machine Learning
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out @gimmeblues, @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
RAPIDS cuGraph adds NetworkX and DiGraph Compatibility

A very exciting update for running graph algorithms on GPU: huge speedups for standard algorithms (PageRank, SCC, etc.) and new algorithms (Louvain, Leiden, etc.) for graphs with thousands of vertices. The migration from NetworkX looks very smooth, so it's worth giving it a shot; see the sketch below.
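Here is a minimal sketch of what the switch looks like, assuming a CUDA-capable GPU with RAPIDS cuGraph installed (exact return types can vary between releases):

```python
# Minimal sketch, assuming RAPIDS cuGraph on a CUDA-capable GPU.
import networkx as nx
import cugraph

# Build a graph with NetworkX as usual.
G = nx.karate_club_graph()

# With the compatibility layer, cuGraph algorithms accept NetworkX
# graphs directly, so migration is often a one-line change:
#   nx.pagerank(G)  ->  cugraph.pagerank(G)
pagerank = cugraph.pagerank(G)

# cuGraph also adds community detection that runs on the GPU,
# e.g. Louvain, which returns a partition and its modularity score.
partition, modularity = cugraph.louvain(G)
```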
DataStart Conference 2020

There is a Russian-speaking event, DataStart (20 Oct), that includes presentations from leading experts in industry and academia in Russia. The speakers include Anton Tsitsulin, who will talk about unsupervised graph embeddings, and Valentin Malykh, who will describe how you can use knowledge graphs for visualization in NLP.
GML Newsletter Issue #3

The third issue of the GML newsletter is available! Blog posts, videos, past and upcoming events.
Graph Machine Learning research groups: Tina Eliassi-Rad

I do a series of posts on research groups in graph machine learning; the previous post is here. The 16th is Tina Eliassi-Rad, co-author of the Cora dataset, which is still widely used in node classification benchmarks.


Tina Eliassi-Rad (~1974)
- Affiliation: Northeastern University
- Education: Ph.D. at University of Wisconsin-Madison in 2001 (advisor: Jude Shavlik)
- h-index: 32
- Awards: best paper awards at ICDM and CIKM; ISI Fellow
- Interests: graph mining, anomaly detection, graph algorithms
NeurIPS 2020 Graph Papers

I counted 123 graph papers (attached) at NeurIPS 2020, which is 6.5% of all accepted papers. This repo provides a good categorization of graph papers into topics such as oversmoothing, adversarial attacks, expressive power, etc.

The plot also shows the number of accepted papers per "graph" author, i.e. authors who have at least one graph paper at NeurIPS 2020.
How random are peer reviews?

A new paper on the quality of reviews at peer-reviewed conferences came out; it analyzed submissions to ICLR's OpenReview over the last 4 years. Here is what I found most interesting.

* If an accepted paper were reviewed anew, would it be accepted a second time?

This is called reproducibility of reviews. In 2020 it is 66%, which means that 1 out of 3 times you would get a reject even if your paper deserved acceptance. Moreover, even if you increase the number of reviewers, reproducibility stays around the same, ~70%. A toy simulation after this list illustrates why.

* Does the final paper score correlate with how many citations the paper gets?

Yes, higher-ranked papers get more citations. What's more interesting is the effect of mere exposure: the score-citation correlation doubles just because a paper is presented at the venue.

* Is there a bias of affiliation, author reputation, or arXiv availability in reviewers' scores?

Yes, but it is very small. For example, papers from Cornell get a 0.58 boost in score (out of 10). For Google and DeepMind, there is no difference in scores or acceptance rate compared to other papers. The same can be said about arXiv availability of a paper and the h-index of the authors.
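To build intuition for why reproducibility saturates, here is a toy simulation (my own illustration with made-up parameters, not from the paper): each review score is modeled as latent paper quality plus reviewer noise, and we check how often two independent review rounds accept the same papers.

```python
# Toy model: score = latent quality + reviewer noise (all made up).
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_reviewers = 5000, 3
quality = rng.normal(size=n_papers)        # latent paper quality

def accepted(quality, rng):
    # Each paper gets n_reviewers noisy scores; accept the top 25%.
    scores = quality[:, None] + rng.normal(size=(n_papers, n_reviewers))
    mean = scores.mean(axis=1)
    return mean >= np.quantile(mean, 0.75)

a1 = accepted(quality, rng)
a2 = accepted(quality, rng)                # independent second round
# Among papers accepted in round 1, how many survive round 2?
print("reproducibility:", (a1 & a2).sum() / a1.sum())
```

Since averaging n reviewers shrinks the noise only as 1/sqrt(n), adding a reviewer or two barely moves this number, which matches the paper's observation.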
NeurIPS 2020. Comprehensive analysis of authors, organizations, and countries.

This post analyzes which authors and organizations publish at NeurIPS 2020 this December, similar to the analysis I did for ICML 2020.

In addition to general insights (which I also found interesting), there are two collaboration graphs that I created: one between affiliations and one between authors. What's exciting is that these two networks are very different from each other; the graph of authors is actually quite disconnected, with lots of small groups of people (of size ~50) and a large diameter (25 hops). It could be interesting in the future to understand why that is the case and who these small groups of people are. A short sketch of how to inspect such a graph follows.
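For anyone who wants to poke at a collaboration graph like this, here is a short NetworkX sketch (the edge-list file name is a hypothetical placeholder; the real data comes from the NeurIPS 2020 author lists):

```python
# Sketch: inspect component structure and diameter of a collaboration
# graph. "coauthors.edgelist" is a hypothetical placeholder file.
import networkx as nx

G = nx.read_edgelist("coauthors.edgelist")

components = sorted(nx.connected_components(G), key=len, reverse=True)
print("number of components:", len(components))
print("largest component size:", len(components[0]))

# Exact diameter is expensive on huge graphs but fine at this scale.
giant = G.subgraph(components[0])
print("diameter of largest component:", nx.diameter(giant))
```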
GML YouTube Videos

I was pleasantly surprised to see that there is a YouTube playlist by Zak Jost that covers some aspects of GNNs, including an interview with the DeepMind authors on using GNNs for physics.