Graph Machine Learning
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out to @gimmeblues or @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
The paper proposes new embeddings based on (almost) anonymous walks... a few years after the original paper. Can I resubmit my own papers and get them accepted?
The paper proposes a GNN for knowledge graph reasoning. But what's really interesting is that the AC single-handedly saved this paper, turning 3 rejects into an accept.
NetLSD: Hearing the Shape of a Graph
Proposes a distance between graphs, essentially an L2 distance between heat-trace signatures computed from the graph spectrum.
https://arxiv.org/abs/1805.10712
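The core computation is compact; here is a minimal sketch of the idea, assuming the heat-trace variant of the signature: summarize each graph by h(t) = sum_i exp(-t * lambda_i) over a grid of time scales t, where lambda_i are normalized Laplacian eigenvalues, and compare graphs by the L2 distance between signatures (the actual paper also normalizes the traces for size-invariance, which this sketch omits).

```python
import numpy as np
import networkx as nx

def heat_trace(G, times):
    # Eigenvalues of the normalized Laplacian of G
    lam = np.linalg.eigvalsh(nx.normalized_laplacian_matrix(G).toarray())
    # h(t) = sum_i exp(-t * lambda_i), one value per time scale t
    return np.array([np.exp(-t * lam).sum() for t in times])

times = np.logspace(-2, 2, 250)                 # log-spaced time scales
G1 = nx.erdos_renyi_graph(60, 0.1, seed=0)
G2 = nx.barabasi_albert_graph(60, 3, seed=0)
dist = np.linalg.norm(heat_trace(G1, times) - heat_trace(G2, times))
print(f"NetLSD-style distance: {dist:.3f}")
```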
Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data (ICLR 2020)
The paper proposes a DNN architecture where each layer is an ensemble of gradient-boosted decision trees (GBDT), with the outputs of the previous layer passed forward to the next one. A quite interesting contribution is how to make those tree layers differentiable for end-to-end training.
https://arxiv.org/pdf/1909.06312.pdf
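As a toy illustration of the differentiability trick (not the paper's actual architecture, which uses entmax-based oblivious trees), here is a soft decision tree layer in PyTorch where the hard feature choices and threshold comparisons are relaxed into softmax weights and sigmoids, so gradients flow through the whole stack:

```python
import torch
import torch.nn.functional as F

class SoftObliviousTree(torch.nn.Module):
    # Depth-d oblivious tree: one (soft) feature/threshold pair per level,
    # so every input is routed by the same d comparisons.
    def __init__(self, in_features, depth=3):
        super().__init__()
        self.feature_logits = torch.nn.Parameter(torch.zeros(depth, in_features))
        self.thresholds = torch.nn.Parameter(torch.zeros(depth))
        self.leaf_values = torch.nn.Parameter(torch.randn(2 ** depth))

    def forward(self, x):                                      # x: (batch, d_in)
        # Soft feature selection instead of a hard argmax pick
        feat = x @ F.softmax(self.feature_logits, dim=-1).T    # (batch, depth)
        gate = torch.sigmoid(feat - self.thresholds)           # soft "go right"
        # Probability of reaching each of the 2^depth leaves
        probs = torch.ones(x.shape[0], 1, device=x.device)
        for level in range(gate.shape[1]):
            g = gate[:, level:level + 1]
            probs = torch.cat([probs * (1 - g), probs * g], dim=1)
        return probs @ self.leaf_values                        # (batch,)
```

Stacking several such layers and feeding each layer's output forward (concatenated with the raw input) gives the tree-layer DNN structure described above.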
Computed some stats about graph papers in ICLR 2020. There are a few interesting things.

(1) Every third graph paper is accepted, a clear indication that GML is becoming popular;
(2) On average, scores around [6,6,8] are needed to get accepted; [6,6,6] would be borderline;
(3) An AC can sometimes save a paper even if it got low scores. This is rather good: reviewers are not the only ones who decide;
(4) Likewise, an AC can reject a paper even if the reviewers unanimously accept it. I think that happens mostly when the paper does not present enough experimental comparison to SOTA.

https://medium.com/@sergei.ivanov_24894/iclr-2020-graph-papers-9bc2e90e56b0
Recent papers on graph matching (a toy optimal-transport matching sketch follows the list).

Scalable Gromov-Wasserstein Learning for Graph Partitioning and Matching (NeurIPS 2019) https://nips.cc/Conferences/2019/Schedule?showEvent=13486

KerGM: Kernelized Graph Matching (NeurIPS 2019) https://nips.cc/Conferences/2019/Schedule?showEvent=14512

(Nearly) Efficient Algorithms for the Graph Matching Problem on Correlated Random Graphs (NeurIPS 2019) https://nips.cc/Conferences/2019/Schedule?showEvent=13959

Gromov-Wasserstein Learning for Graph Matching and Node Embedding (ICML 2019) https://icml.cc/Conferences/2019/Schedule?showEvent=3845

Graph Matching Networks for Learning the Similarity of Graph Structured Objects (ICML 2019) https://deepmind.com/research/publications/Graph-matching-networks-for-learning-the-similarity-of-graph-structured-objects

Learning deep graph matching with channel-independent embedding and Hungarian attention (ICLR 2020) https://openreview.net/forum?id=rJgBd2NYPH

Deep Graph Matching Consensus (ICLR 2020) https://openreview.net/forum?id=HyeJf1HKvS

Spectral Graph Matching and Regularized Quadratic Relaxations II: Erdős-Rényi Graphs and Universality (ICML 2020) https://arxiv.org/abs/1907.08883

Graph Optimal Transport for Cross-Domain Alignment (ICML 2020) https://arxiv.org/abs/2006.14744
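To get a feel for the optimal-transport flavor of matching used in several of the papers above, here is a toy sketch with the POT library; ot.gromov.gromov_wasserstein is a real POT function, but its exact signature may differ across versions:

```python
import numpy as np
import networkx as nx
import ot  # POT: pip install pot

# Represent each graph by its all-pairs shortest-path matrix
G1, G2 = nx.cycle_graph(10), nx.path_graph(10)
C1 = np.asarray(nx.floyd_warshall_numpy(G1), dtype=float)
C2 = np.asarray(nx.floyd_warshall_numpy(G2), dtype=float)
p, q = ot.unif(len(G1)), ot.unif(len(G2))       # uniform node weights

# Coupling T minimizing sum |C1[i,j] - C2[k,l]|^2 * T[i,k] * T[j,l]
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, 'square_loss')
print(T.argmax(axis=1))  # read off a hard node correspondence
```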
Our resubmission of the paper from ICLR to IJCAI. It taught me how to strip a paper down from 21 pages to 6. Also, there are 9K submissions, and one author of each submission must agree to review three other papers, so I expect a lot of noise, but I still hope for the best.
How Uber Eats uses GNNs to power recommendations.
https://eng.uber.com/uber-eats-graph-learning/
There is a recent trend in machine learning papers to do ablation studies showing that SOTA results are not that great compared to old baselines. The RecSys 2019 best paper was about this (https://arxiv.org/abs/1907.06902). I think I saw similar works in NLP and CV, and now it's time for GML. Two papers, one on knowledge graph link prediction and another on graph classification:
https://openreview.net/forum?id=BkxSmlBFvr
https://openreview.net/forum?id=HygDF6NFPB
6 papers at ICLR by the group of Jure Leskovec (3 accepts + 3 rejects)

1. Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings (https://openreview.net/forum?id=BJgr4kSFDS)

2. Strategies for Pre-training Graph Neural Networks (https://openreview.net/forum?id=HJlWWJSFDH)

3. Redundancy-Free Computation Graphs for Graph Neural Networks (https://openreview.net/forum?id=H1eF3kStPS)

4. Unifying Graph Convolutional Neural Networks and Label Propagation (https://openreview.net/forum?id=rkgdYhVtvH)

5. Selection via Proxy: Efficient Data Selection for Deep Learning (https://openreview.net/forum?id=HJg2b0VYDr)

6. Coresets for Accelerating Incremental Gradient Methods (https://openreview.net/forum?id=SygRikHtvS)
Continuing this, the group of Le Song has 7 papers at ICLR, all accepted. This is the second-best result among all authors, with Sergey Levine in first place with 13 accepts.

1. HOPPITY: Learning Graph Transformations to Detect and Fix Bugs in Programs (https://openreview.net/forum?id=SJeqs6EFvB)

2. GLAD: Learning Sparse Graph Recovery (https://openreview.net/forum?id=BkxpMTEtPB)

3. Efficient Probabilistic Logic Reasoning with Graph Neural Networks (https://openreview.net/forum?id=rJg76kStwH)

4. Double Neural Counterfactual Regret Minimization (https://openreview.net/forum?id=ByedzkrKvH)

5. RNA Secondary Structure Prediction By Learning Unrolled Algorithms (https://openreview.net/forum?id=S1eALyrYDH)

6. Learn to Explain Efficiently via Neural Logic Inductive Learning (https://openreview.net/forum?id=SJlh8CEYDB)

7. Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees (https://openreview.net/forum?id=rJgJDAVKvB)
There are quite a few tools to monitor new papers on ArXiv:
* arxiv-sanity.com
* arxivist.com
But you can also configure an RSS feed on keywords you like using https://siftrss.com/
For example, if you want papers only on graphs, you can use the following links:
CS track: https://siftrss.com/f/x70NM5NWmLn
Stat track: https://siftrss.com/f/3meBo55VMyA
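If you prefer to pull those feeds from a script, a minimal sketch with the feedparser package (the URLs are the same two siftrss feeds from above):

```python
import feedparser  # pip install feedparser

FEEDS = {
    "cs": "https://siftrss.com/f/x70NM5NWmLn",
    "stat": "https://siftrss.com/f/3meBo55VMyA",
}

# Print the five most recent graph papers from each track
for track, url in FEEDS.items():
    for entry in feedparser.parse(url).entries[:5]:
        print(f"[{track}] {entry.title}\n    {entry.link}")
```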
One of the trends I outlined in the post above was the growing rate of papers on knowledge graphs. This is quite interesting: many recommendation tasks, for example in conversational AI systems, can be modeled well with knowledge graphs instead of, say, pure deep learning methods. Here is a very fresh survey on this topic.

A Survey on Knowledge Graphs: Representation, Acquisition and Applications.

https://arxiv.org/abs/2002.00388
Our new submission to the ICLR workshop on AI+Neuroscience: https://baicsworkshop.github.io/

Here is what I think.
1️⃣ Even for simple ideas, it still takes 5-7 days to implement and write 4 pages.

2️⃣ More importantly, this work is about predicting IQ from EEG brain measurements. Essentially, here is (X, y): train whatever model you want and report the result. The problem is that on real data sets simple baselines work better than your machine learning. For example, taking the most common y from the training set and predicting it for all test examples gives results very close to the ML models.
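That baseline is one line with scikit-learn's DummyClassifier; here is a sketch on synthetic stand-in data (the real EEG features and IQ labels are different, of course):

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (X, y): 200 subjects, 64 EEG-like features, 3 IQ buckets
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

baseline = DummyClassifier(strategy="most_frequent")  # always predict most common y
model = LogisticRegression(max_iter=1000)
print("baseline accuracy:", cross_val_score(baseline, X, y).mean())
print("model accuracy:   ", cross_val_score(model, X, y).mean())
```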

It would be cool to have a hint from an oracle that would say: "Don't bother, these data are doomed, you can't do better with ML." If you know some theory like that, please ping me in private messages 🙁
AAAI 2020 kicks off tomorrow in NYC.

AAAI 2020 stats
7,737 submissions
1,591 accepted
21% acceptance rate
142 accepted graph papers (9% of accepted papers)

ICLR 2020 stats
2,213 submissions
687 accepted
31% acceptance rate
49 accepted graph papers (7% of accepted papers)