DataStart Conference 2020
There is a Russian-language event, DataStart (20 Oct), that includes presentations from leading experts in industry and academia in Russia. The speakers include Anton Tsitsulin, who will talk about unsupervised graph embeddings, and Valentin Malykh, who will describe how you can use knowledge graphs for visualization in NLP.
datastart.ru
Free autumn online Data Science 2020 conference
Educational Data Science conferences in Moscow and Saint Petersburg. The program covers current topics in Big Data, Machine Learning, and AI. Hands-on sessions help attendees better absorb the knowledge gained at the event.
GML Newsletter Issue #3
The third issue of the GML newsletter is available: blog posts, videos, and past and future events.
Graph Machine Learning research groups: Tina Eliassi-Rad
I'm doing a series of posts on research groups in graph machine learning; the previous post is here. The 16th is Tina Eliassi-Rad, co-author of the Cora dataset, which is still widely used in node classification benchmarks.
Tina Eliassi-Rad (~1974)
- Affiliation: Northeastern University
- Education: Ph.D. at University of Wisconsin-Madison in 2001 (advisor: Jude Shavlik)
- h-index 32
- Awards: best paper awards at ICDM and CIKM; ISI Fellow
- Interests: graph mining, anomaly detection, graph algorithms
Graph Machine Learning research groups: Alejandro Ribeiro
I'm doing a series of posts on research groups in graph machine learning; the previous post is here. The 15th is Alejandro Ribeiro, head of Alelab at UPenn and the lead author of the ongoing GNN course.
NeurIPS 2020 Graph Papers
I counted 123 graph papers (attached) at NeurIPS 2020, which is 6.5% of all accepted papers. This repo provides a good categorization of graph papers into topics such as oversmoothing, adversarial attacks, and expressive power.
The plot also shows the number of accepted papers per "graph" author, i.e. authors who have at least one graph paper at NeurIPS 2020; a sketch of how such counts can be produced follows below.
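For the curious, here is a minimal sketch of how such counts can be produced. It assumes a hypothetical papers.csv with "title" and "authors" columns (authors separated by ";"), and the keyword filter is a crude heuristic; this is not the actual script behind the plot.

```python
# Count "graph" papers and tally accepted papers per "graph" author.
# Assumes a hypothetical papers.csv with "title" and "authors" columns.
import csv
from collections import Counter

graph_papers = []
with open("papers.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if "graph" in row["title"].lower():   # crude keyword heuristic
            graph_papers.append(row)

print(f"{len(graph_papers)} graph papers")

# Accepted papers per author, counting only authors of graph papers.
per_author = Counter()
for row in graph_papers:
    for author in row["authors"].split(";"):
        per_author[author.strip()] += 1

for author, n in per_author.most_common(10):
    print(author, n)
```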
Fresh picks from ArXiv
Today at ArXiv: learning logic and simulations with GNNs, and a new practical guide to GNNs.
If I forgot to mention your paper, please shoot me a message and I will update the post.
Conferences
- Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation EMNLP 2020
- Learning to Represent Image and Text with Denotation Graph EMNLP 2020
- TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion EMNLP 2020 with William L. Hamilton
- Embedding Words in Non-Vector Space with Unsupervised Graph Learning EMNLP 2020
- Unsupervised Joint k-node Graph Representations with Compositional Energy-Based Models with Bruno Ribeiro, NeurIPS 2020
- Dirichlet Graph Variational Autoencoder NeurIPS 2020
- RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion COLING 2020
GNN
- Graph Convolutional Value Decomposition in Multi-Agent Reinforcement Learning
- High-Order Relation Construction and Mining for Graph Matching
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs with Yoshua Bengio and Jian Tang
- Learning Mesh-Based Simulation with Graph Networks with Peter W. Battaglia
- Directional Graph Networks with William L. Hamilton
- Simplicial Neural Networks
Survey
- A Practical Guide to Graph Neural Networks
arXiv.org
Simplicial Neural Networks: simplicial neural networks (SNNs) generalize graph neural networks to data that live on a class of topological spaces called simplicial complexes.
How random are peer reviews?
A new paper about the quality of reviews at peer-reviewed conferences came out; it analyzes ICLR submissions on OpenReview over the last 4 years. Here is what I found most interesting.
* If an accepted paper were reviewed anew, would it be accepted a second time?
This is called reproducibility of reviews. In 2020 it was 66%, which means that roughly 1 out of 3 times you would get a reject even if your paper deserves acceptance. Moreover, even if you increase the number of reviewers, reproducibility stays around ~70% (see the toy simulation after this post).
* Does the final paper score correlate with how many citations it gets?
Yes, higher-scored papers get more citations. What's more interesting is how many more citations a paper gets just due to exposure at the conference: the correlation doubles simply because of exposure at the venue.
* Is there a bias of affiliation, author reputation, or arXiv availability in reviewers' scores?
Yes, but it is very small. For example, papers from Cornell get a 0.58 boost in score (out of 10). Papers from Google and DeepMind show no advantage in score or acceptance rate compared to other papers, and the same holds for arXiv availability of a paper and the authors' h-index.
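To build intuition for the ~70% ceiling, here is a toy Monte Carlo simulation: each paper has a latent quality, each reviewer sees quality plus independent noise, and the top 25% of mean scores are accepted. The distributions, noise level, and acceptance rate are my illustrative assumptions, not the paper's model.

```python
# Toy model of review reproducibility: latent quality + reviewer noise,
# accept the top 25% by mean score, then ask how often the same papers
# are accepted in an independent second round. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_papers, accept_rate, noise = 10_000, 0.25, 1.0
quality = rng.normal(0.0, 1.0, n_papers)

def accept(n_reviewers):
    scores = quality[:, None] + rng.normal(0.0, noise, (n_papers, n_reviewers))
    mean = scores.mean(axis=1)
    return mean >= np.quantile(mean, 1.0 - accept_rate)

for k in (3, 5, 9):
    first, second = accept(k), accept(k)   # two independent review rounds
    repro = (first & second).sum() / first.sum()
    print(f"{k} reviewers: reproducibility ≈ {repro:.0%}")
```

Even with 9 reviewers the noise around latent quality keeps reproducibility well below 100%, consistent with the paper's observation that adding reviewers barely helps.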
OpenReview
An Open Review of OpenReview: A Critical Analysis of the Machine…
Mainstream machine learning conferences have seen a dramatic increase in the number of participants, along with a growing range of perspectives, in recent years.
NeurIPS 2020. Comprehensive analysis of authors, organizations, and countries.
This post analyzes which authors and organizations will publish at NeurIPS 2020 this December, similar to the analysis I did for ICML 2020.
In addition to general insights (which I also found interesting), there are two collaboration graphs that I created: one between affiliations and one between authors. What's exciting is that these two networks are very different from each other; the author graph is actually quite disconnected, with lots of small groups of people (of size ~50) and a large diameter (25 hops). It could be interesting in the future to understand why that is and who these small groups are; a sketch for checking such claims follows below.
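Here is a networkx sketch of how one could check those structural claims on a co-authorship graph. The edge-list file name is hypothetical, and exact diameter computation can be slow on large graphs; this is not the notebook behind the post.

```python
# Inspect connectivity of a co-authorship graph with networkx.
# "coauthors.edgelist" is a hypothetical file with one whitespace-separated
# pair of author IDs per line.
import networkx as nx

G = nx.read_edgelist("coauthors.edgelist")

components = sorted(nx.connected_components(G), key=len, reverse=True)
print(f"{len(components)} connected components")
print("largest component sizes:", [len(c) for c in components[:5]])

# Diameter is defined only for a connected graph, so measure it on
# the largest component (O(V*E), slow for big graphs).
giant = G.subgraph(components[0])
print("diameter of the largest component:", nx.diameter(giant))
```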
Medium
NeurIPS 2020. Comprehensive analysis of authors, organizations, and countries.
What will happen at NeurIPS 2020 this December? Top authors, affiliations, and countries at the biggest AI conference of this year, analyzed.
GML YouTube Videos
I was pleasantly surprised to see that there is a YouTube playlist by Zak Jost that covers some aspects of GNNs, including an interview with DeepMind authors on using GNNs for physics.
YouTube
Graph Neural Networks
A series on the basics of graph neural networks.
Fresh picks from ArXiv
Today at ArXiv: generalization of GNNs, faster graphlet kernels, and sunshine graphs ☀️
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNN
- Discriminability of Single-Layer Graph Neural Networks with Alejandro Ribeiro
- On Size Generalization in Graph Neural Networks with Haggai Maron
- Bi-GCN: Binary Graph Convolutional Network
- Fast Graph Kernel with Optical Random Features
- Scalable Graph Networks for Particle Simulations
- Disentangled Dynamic Graph Deep Generation
Math
- Supercards, Sunshines and Caterpillar Graphs
- An Extension of the Birkhoff-von Neumann Theorem to Non-Bipartite Graphs with Vijay V. Vazirani
Conferences
- Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs EMNLP 2020
- Supertagging Combinatory Categorial Grammar with Attentive Graph Convolutional Networks EMNLP 2020
- A Graph Representation of Semi-structured Data for Web Question Answering COLING 2020
- STP-UDGAT: Spatial-Temporal-Preference User Dimensional Graph Attention Network for Next POI Recommendation CIKM 2020
- Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks COLING 2020
CIKM 2020 stats
Dates: Oct 19-23
Where: Online
Price: €70
Graph papers can be found at paper digest.
• 970/397 full/short submissions (vs 1030/470 in 2019)
• 193/103 accepted (vs 202/107 in 2019)
• 20%/26% acceptance rate (vs 19%/21% in 2019)
• ~97 total graph papers (20% of total)
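A quick arithmetic check of the 2020 acceptance rates from the counts above:

```python
# Acceptance rates from the submission/acceptance counts above.
print(f"full: {193 / 970:.1%}")   # ≈ 19.9%, rounded to 20%
print(f"short: {103 / 397:.1%}")  # ≈ 25.9%, rounded to 26%
```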
Open Catalyst Project
Facebook and CMU have launched the Open Catalyst Project, which contains the largest dataset for quantum chemistry predictions. The goal is to predict atomic interactions faster than quantum mechanical simulations (DFT) can, which can be framed as a graph regression task.
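As a rough illustration of what a graph regression task looks like here (atoms as nodes, interactions as edges, one scalar target per structure), a minimal message-passing model in plain PyTorch; all shapes and names are illustrative assumptions, not the Open Catalyst baselines.

```python
# Minimal graph-level regression via one round of message passing.
# Purely illustrative; not an Open Catalyst Project baseline.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.msg = nn.Linear(in_dim, hidden)
        self.upd = nn.Linear(in_dim + hidden, hidden)
        self.out = nn.Linear(hidden, 1)   # one scalar per graph, e.g. energy

    def forward(self, x, edge_index):
        src, dst = edge_index                     # edge list of shape (2, E)
        messages = self.msg(x)[src]               # one message per edge
        agg = torch.zeros(x.size(0), messages.size(1))
        agg.index_add_(0, dst, messages)          # sum incoming messages per node
        h = torch.relu(self.upd(torch.cat([x, agg], dim=1)))
        return self.out(h).mean()                 # mean-pool nodes -> graph value

x = torch.randn(5, 8)                             # 5 atoms, 8 features each
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
model = TinyGNN(in_dim=8)
print(model(x, edge_index))                       # predicted scalar target
```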
Facebook
Facebook and Carnegie Mellon launch the Open Catalyst Project to find new ways to store renewable energy
Facebook AI and the Carnegie Mellon University (CMU) Department of Chemical Engineering are announcing the Open Catalyst Project.