Graph Machine Learning
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out to @gimmeblues or @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
Graph Machine Learning research groups: Alejandro Ribeiro

I am doing a series of posts on research groups working on graphs; the previous post is here. The 15th is Alejandro Ribeiro, head of Alelab at UPenn and the lead author of the ongoing GNN course.


Alejandro Ribeiro (1975)
- Affiliation: University of Pennsylvania
- Education: Ph.D. from the University of Minnesota in 2006 (advisor: Georgios B. Giannakis)
- h-index 51
- Awards: Hugo Schuck Best Paper Award; paper awards at CDC, ACC, ICASSP; Lindback Award; NSF award
- Interests: wireless autonomous networks, machine learning on network data, distributed collaborative learning
NeurIPS 2020 stats

Dates: Dec 6 - 12
Where: Online
Price: $25/$100 (students/non-students)

• 9454 submissions (vs 6743 in 2019)
• 1900 accepted (vs 1428 in 2019)
• 20.1% acceptance rate (vs 21% in 2019)
• 123 graph papers (6.5% of total)
SE(3)-Transformers

A blog post about a recent paper (NeurIPS 2020) that brings group theory to functions on sets. It seems to perform on par with state-of-the-art methods for classification and regression, but, at the very least, it is provably equivariant.
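For context, "equivariant" here means (in my rough paraphrase, not the paper's exact notation) that transforming the input transforms the output in the same way:

f(T_g x) = T_g f(x) for every rototranslation g in SE(3),

where T_g denotes the action of g on the corresponding space; in general the group may act differently on the input and output features.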
The next big thing: the use of graph neural networks to discover particles

It's great to see GNNs being useful for fundamental applications such as the discovery of new particles. In another post by Fermilab, a US-based physics lab, researchers discuss how they are able to move GNNs into production for the Large Hadron Collider (LHC) at CERN. The goal is to process millions of images and select those that could be relevant to the discovery of new particles. They expect to see results in the LHC's Run 3 in 2021. The arXiv preprint is available online.
ICLR 2021 Graph Papers

Last Friday, submissions to ICLR 2021 became available for reading. There are 3013 submissions, of which about 210 are graph papers (7% of the total). Roughly every third submission appears to be a resubmitted NeurIPS reject (estimated from the overlap of paper submissions), which surprised me not just by its sheer volume, but also because I am puzzled about where the remaining ~6000 rejected papers get resubmitted.

I extracted the graph papers, which are attached, and categorized them loosely into 4 topics: models, theory, applications, and surveys. Most of the papers (171) are about new models (general GNNs, graph models for new problems, improvements over existing models). 22 papers are novel applications in physics, chemistry, biology, etc.; 13 are theoretical papers; and 4 are surveys/evaluation benchmarks.
RAPIDS cuGraph adds NetworkX and DiGraph Compatibility

A very exciting update for running graph algorithms on GPUs: huge speedups for standard algorithms (PageRank, SCC, etc.) and new algorithms (Louvain, Leiden, etc.) for graphs with thousands of vertices. Migration from NetworkX looks very smooth, so it is worth giving it a shot.
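For the curious, a minimal sketch of what the migration might look like (a hypothetical example based on the announcement, not tested against a specific RAPIDS release; the return format for NetworkX inputs may vary):

import networkx as nx
import cugraph  # RAPIDS cuGraph

# Build the graph with NetworkX as usual.
G = nx.karate_club_graph()

# CPU baseline: classic NetworkX PageRank.
pr_cpu = nx.pagerank(G, alpha=0.85)

# With the compatibility layer, cuGraph algorithms accept the NetworkX
# graph directly and run on the GPU. According to the announcement,
# results for NetworkX inputs come back in a NetworkX-like format
# (a dict keyed by vertex for PageRank); check the docs for your version.
pr_gpu = cugraph.pagerank(G, alpha=0.85)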
DataStart Conference 2020

There is a Russian-speaking event, DataStart (20 Oct), that includes presentations from leading experts in industry and academia in Russia. The speakers include Anton Tsitsulin, who will talk about unsupervised graph embeddings, and Valentin Malykh, who will describe how knowledge graphs can be used for visualization in NLP.
GML Newsletter Issue #3

The third issue of the GML newsletter is available! Blog posts, videos, and past and future events.
Graph Machine Learning research groups: Tina Eliassi-Rad

I am doing a series of posts on research groups working on graphs; the previous post is here. The 16th is Tina Eliassi-Rad, co-author of the Cora dataset, which is still widely used in node classification benchmarks.


Tina Eliassi-Rad (~1974)
- Affiliation: Northeastern University
- Education: Ph.D. from the University of Wisconsin-Madison in 2001 (advisor: Jude Shavlik)
- h-index 32
- Awards: best paper awards at ICDM and CIKM; ISI fellow
- Interests: graph mining, anomaly detection, graph algorithms
NeurIPS 2020 Graph Papers

I counted 123 graph papers (attached) at NeurIPS 2020, which is 6.5% of all accepted papers. This repo provides a good categorization of graph papers into topics such as oversmoothing, adversarial attacks, expressive power, etc.

The plot also shows the number of accepted papers per "graph" author, i.e. authors with at least one graph paper at NeurIPS 2020.
How random are peer reviews?

A new paper about the quality of reviews at peer-reviewed conferences analyzes ICLR submissions on OpenReview over the last 4 years. Here is what I found most interesting.

* If an accepted paper were reviewed anew, would it be accepted a second time?

This is called reproducibility of reviews. In 2020 it is 66%, meaning that roughly 1 out of 3 times you would get a reject even if your paper deserves acceptance. Moreover, even if you increase the number of reviewers, reproducibility stays around the same, ~70%.

* Does the final paper score correlate with how many citations the paper gets?

Yes, higher-ranked papers get more citations. What is more interesting is the effect of exposure itself: the correlation doubles just because the paper was exposed at the venue.

* Is there a bias from affiliation, author reputation, or arXiv availability in reviewers' scores?

Yes, but it is very small. For example, papers from Cornell get a 0.58 boost in score (out of 10). For Google and DeepMind there is no correlation between their scores and acceptance rates compared to other papers. The same can be said about arXiv availability of a paper or the h-index of the authors.