Fresh picks from ArXiv
Ever wondered what color of sofa to choose so that it is compatible with the rest of your furniture? Today you will find an answer in one of the papers, and of course with the help of a GNN. Besides, there is a survey on 6G (!) technologies, a new theoretical result on graph isomorphism, and many applications of graphs.
Applications
The Quantum Approximate Optimization Algorithm Needs to See the Whole Graph: A Typical Case – quantum computation
Recommendation system using a deep learning and graph analysis approach – recommendation
Learning Furniture Compatibility with Graph Neural Networks – interior design
Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs – combinatorial optimization
Knowledge graphs
DGL-KE: Training Knowledge Graph Embeddings at Scale
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning
Layered Graph Embedding for Entity Recommendation using Wikipedia in the Yahoo! Knowledge Graph
Survey
A Survey of 6G Wireless Communications: Emerging Technologies
Duplication Detection in Knowledge Graphs: Literature and Tools
Graph theory
Isomorphism Testing for Graphs Excluding Small Minors – on graph isomorphism
Hitting forbidden induced subgraphs on bounded treewidth graphs – on treewidth
Low-stretch spanning trees of graphs with bounded width
Steiner Trees for Hereditary Graph Classes: a Treewidth Perspective
A forgotten story of Soviet AI
I found out about the Weisfeiler-Leman algorithm about 5 years ago, and some time later I realized that both authors were from the USSR. That was quite unexpected. I started looking up information about the authors and found quite a good biography of Boris Weisfeiler, written by his sister, but not so much about Andrey Leman. For about a year I was tracking down, one by one, the people who knew him (most of them now quite senior and not using any fancy messengers) to find out more about his life. Finally, I gathered enough to write a post on his life, from his interest in math olympiads, to the development of the first AI chess player, to working in Silicon Valley.
His life is emblematic of a generation of mathematicians of his time: strong performance in math olympiads, the competitive Moscow State University, work at the Institute of Theoretical and Experimental Physics, and then emigration to the West when the Iron Curtain fell. I like hearing these stories because they are reminiscent of the stories of my parents and their engineer friends. It is the voice of a time that is now inevitably gone. Much like Babai's trip to the USSR, these stories uncover the foundations of the graph theory, computer science, and artificial intelligence that we study today and let us connect the dots between old and new approaches.
Geometric and Relational Deep Learning
A workshop on GML will be streamed tomorrow on YouTube. It will start at 9:20 and continue until 17:00. The list of speakers is great:
Peter Battaglia, DeepMind
Natalia Neverova, Facebook AI Research (FAIR)
Stephan Günnemann, TU Munich
Yaron Lipman, Weizmann Institute of Science
Miltos Allamanis, Microsoft Research
Qi Liu, University of Oxford
Pim de Haan, University of Amsterdam & Qualcomm AI Research
Noemi Montobbio, Italian Institute of Technology
Geometric and Relational Deep Learning Pt. 2
Apparently this workshop will also have poster sessions that are available only to registered participants. The list of papers can be found below (thanks to the people who attended).
• A geometric deep learning model to filter out anatomically non plausible fibers from tractograms [video]
• Patient-Specific Pathological Gait Modelling with Conditional-NRI [video]
• GRASP: Graph Alignment through Spectral Signatures
• Isomorphism Leakage in Multi-Interaction Datasets [video]
• Integrating Spectral and Spatial Domain Graph Neural Networks
• Unshackling Bisimulation with Graph Neural Networks
• State2vec: Learning Off-Policy State Representations [video]
• Are Graph Convolutional Networks Fully Exploiting Graph Structure? [video]
• Principal Neighbourhood Aggregation Networks [video]
• Attentive Group Equivariant Convolutional Networks [video]
• SMP: An Equivariant Message Passing Scheme for Learning Graph Structural Information [video]
• Evaluation of Molecular Fingerprints for Similarity-based Virtual Screening generated through Graph Convolution Networks [video]
• Network alignment with GNN [video]
• Learning Generative Models across Incomparable Spaces
• Learning Set Operations for Deformable Shapes [video]
• Instant recovery of shape from spectrum via latent space connections [video]
• SIGN: Scalable Inception Graph Neural Networks [video]
• Universal Invariant and Equivariant Graph Neural Networks [video]
• Graph Convolutional Gaussian Processes for Link Prediction [video]
• Deep Graph Mapper: Seeing Graphs through the Neural Lens [video]
• Geoopt: Riemannian Optimization in PyTorch [video]
• HyperLearn: A Distributed Approach for Representation Learning in Datasets With Many Modalities
• Multi-relational Poincaré Graph Embeddings [video]
• On Understanding Knowledge Graph Representation [video]
• Learning Object-Object Relations in Video [video]
Graph Machine Learning research groups: Philip S. Yu
I am doing a series of posts on groups in graph research. The fourth is on Philip S. Yu. Fun fact: in 2019 he had 136 papers on Google Scholar (approximately one paper every 2.5 days).
Philip S. Yu (~1952)
- Affiliation: University of Illinois at Chicago;
- Education: Ph.D. at Stanford in 1978; MBA at NYU in 1982;
- h-index: 161;
- Awards: awards at KDD, IEEE CS, ICDM;
- Interests: data mining, anomaly detection on graphs, graph surveys, GNN.
ICLR 2020
The ICLR conference starts this Sunday with workshops, followed by 4 days of the main conference. For those who registered, you can enter the ICLR portal, where the whole conference will take place (videos, chats, keynotes, etc.). They projected the papers with t-SNE so that you can quickly find relevant ones.
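As an aside, building such a 2D map of papers is easy to sketch. Below is a minimal example, assuming you have paper abstracts as a list of strings; it uses TF-IDF features and scikit-learn's t-SNE, whereas the actual portal almost certainly uses different text embeddings, and the titles and abstracts here are placeholders.

```python
# Minimal sketch: project paper abstracts to 2D with t-SNE.
# `titles` and `abstracts` are placeholders; the real portal likely uses
# different (learned) text embeddings.
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

titles = ["GNN theory paper", "Molecular graph generation", "KG embeddings at scale"]
abstracts = [
    "We study the expressive power of graph neural networks for node classification ...",
    "A flow-based autoregressive model for molecular graph generation ...",
    "We train knowledge graph embeddings on large graphs with many relations ...",
]

# TF-IDF gives a sparse, high-dimensional vector per abstract.
features = TfidfVectorizer(stop_words="english").fit_transform(abstracts).toarray()

# t-SNE maps the vectors to 2D; perplexity must be smaller than the number of samples.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(features)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), title in zip(coords, titles):
    plt.annotate(title, (x, y))
plt.show()
```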
Finally, if you are interested in graph machine learning, revisit the post I wrote earlier this year on Top Trends of Graph Machine Learning in 2020, which is based on 150 graph papers submitted to or accepted at ICLR.
ICLR 2020, Day 1
Here is a list of interesting papers from Day 1 (with their links to the ICLR portal).
1. On Universal Equivariant Set Networks portal link
2. Hoppity: Learning Graph Transformations to Detect and Fix Bugs in Programs portal link
3. GraphSAINT: Graph Sampling Based Inductive Learning Method portal link
4. Measuring and Improving the Use of Graph Information in Graph Neural Networks portal link
5. Deep Double Descent: Where Bigger Models and More Data Hurt portal link
6. GraphAF: a Flow-based Autoregressive Model for Molecular Graph Generation portal link
7. Dynamically Pruned Message Passing Networks for Large-scale Knowledge Graph Reasoning portal link
8. You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings portal link
9. Deep Graph Matching Consensus portal link
10. PairNorm: Tackling Oversmoothing in GNNs portal link
11. Making Efficient Use of Demonstrations to Solve Hard Exploration Problems portal link
12. LambdaNet: Probabilistic Type Inference using Graph Neural Networks portal link
13. StructPool: Structured Graph Pooling via Conditional Random Fields portal link
14. Implementation Matters in Deep RL: A Case Study on PPO and TRPO portal link
Knowledge graphs at ICLR 2020
Interested in knowledge graphs? Check out this fresh post on knowledge graphs at ICLR from Michael Galkin.
ICLR 2020, Day 2
Day 1 was great: each paper has a prerecorded 5-minute video and two slots during which you can ask questions over Zoom. Very convenient.
Here is a list of interesting papers from Day 2.
1. Abstract Diagrammatic Reasoning with Multiplex Graph Networks portal link
2. Probability Calibration for Knowledge Graph Embedding Models portal link
3. Learning to Guide Random Search portal link
4. Directional Message Passing for Molecular Graphs portal link
5. Locally Constant Networks portal link
6. Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data portal link
7. On the Weaknesses of Reinforcement Learning for Neural Machine Translation portal link
8. Scale-Equivariant Steerable Networks portal link
9. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification portal link
10. Learning Heuristics for Quantified Boolean Formulas through Reinforcement Learning portal link
11. Memory-Based Graph Networks portal link
12. Are Transformers universal approximators of sequence-to-sequence functions? portal link
13. GLAD: Learning Sparse Graph Recovery portal link
14. Hyper-SAGNN: a self-attention based graph neural network for hypergraphs portal link
15. The Curious Case of Neural Text Degeneration portal link
16. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering portal link
17. Global Relational Models of Source Code portal link
Fresh picks from ArXiv
This week has lots of applications of GNNs to recommender systems, a new way to scale GNNs, and a survey on deepfakes.
Recommendation
• Contextualized Graph Attention Network for Recommendation with Item Knowledge Graph
• Learning Hierarchical Review Graph Representation for Recommendation
• Graph Learning Approaches to Recommender Systems: A Review with Philip S. Yu
ACL
• GCAN: Graph-aware Co-Attention Networks for Explainable Fake News Detection on Social Media
• Relational Graph Attention Network for Aspect-based Sentiment Analysis
GNN
• SIGN: Scalable Inception Graph Neural Networks with Michael Bronstein
• Perturb More, Trap More: Understanding Behaviors of Graph Neural Networks
Graph Theory
• Coloring Problems on Bipartite Graphs of Small Diameter
• Graph isomorphism: Physical resources, optimization models, and algebraic characterizations
• On the spectral gap and the diameter of Cayley graphs
• Counterexamples to a conjecture by Gross, Mansour and Tucker on partial-dual genus polynomials of ribbon graphs
Survey
• The Creation and Detection of Deepfakes: A Survey
ICML 2020 Workshops
I don't know why it is so hard to find the workshops for ICML, especially given that the submission deadlines are at the end of May, but here is the full list.
There are two graph workshops in particular: Graph Representation Learning and Beyond (GRL+) and Bridge Between Perception and Reasoning: Graph Neural Networks & Beyond. The first is more about graph representations, the latter more about reasoning with graph models, but they seem to overlap quite a lot.
ICLR 2020, Day 3
Day 3 has posters for Reformer, theory of GNNs, deep learning for mathematics, and much more. Check out these papers.
1. Reformer: The Efficient Transformer portal link
2. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification portal link
3. Neural Execution of Graph Algorithms portal link
4. Mathematical Reasoning in Latent Space portal link
5. Deep Learning For Symbolic Mathematics portal link
6. Graph Convolutional Reinforcement Learning portal link
7. Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation portal link
8. Query2box: Reasoning over Knowledge Graphs in Vector Space Using Box Embeddings portal link
9. A Fair Comparison of Graph Neural Networks for Graph Classification portal link
10. Inductive representation learning on temporal graphs portal link
11. Inductive and Unsupervised Representation Learning on Graph Structured Objects portal link
List of open, simple, computational problems
There is a cool recent thread on MathOverflow about open problems in computer science that anyone can comprehend (thanks to Alex). This is an intriguing topic for me, as I think many math problems of the 20th century can be solved with smart computation in the 21st century.
There are quite a few problems on graphs, such as finding a missing Moore graph or certain regular graphs. Besides this thread, there is an older, similar thread on MathOverflow where a number of graph theory problems were also posed. Finally, the Open Problem Garden collects all sorts of graph theory conjectures that I believe could be much advanced by graph machine learning.
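As a small illustration of what such computations look like, here is a minimal sketch, assuming networkx is available, that computes the Moore bound for a given degree and diameter and checks whether a concrete graph attains it (the famous open case asks whether a 57-regular Moore graph of diameter 2 exists).

```python
# Minimal sketch, assuming networkx is installed: check whether a graph
# attains the Moore bound for its degree and diameter, i.e. is a Moore graph.
import networkx as nx

def moore_bound(degree: int, diameter: int) -> int:
    """Maximum number of vertices for a graph with the given degree and diameter."""
    return 1 + degree * sum((degree - 1) ** i for i in range(diameter))

def is_moore_graph(G: nx.Graph) -> bool:
    degrees = {d for _, d in G.degree()}
    if len(degrees) != 1:          # Moore graphs are regular
        return False
    d = degrees.pop()
    return G.number_of_nodes() == moore_bound(d, nx.diameter(G))

# The Petersen graph (3-regular, diameter 2, 10 = 3^2 + 1 vertices) is a Moore graph.
print(is_moore_graph(nx.petersen_graph()))  # True
print(is_moore_graph(nx.cycle_graph(6)))    # C6 has 6 < 7 vertices for its parameters -> False
```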
ICLR 2020, Day 4
The final day of ICLR 2020. I promise. You can unmute this channel now.
1. What graph neural networks cannot learn: depth vs width portal link
2. The Logical Expressiveness of Graph Neural Networks portal link
3. Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs portal link
4. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations portal link
5. Contrastive Learning of Structured World Models portal link
6. GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding portal link
7. An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality portal link
8. Learning deep graph matching with channel-independent embedding and Hungarian attention portal link
9. On the Equivalence between Positional Node Embeddings and Structural Graph Representations portal link
The final day of ICLR 2020. I promise. You can unmute this channel now.
1. What graph neural networks cannot learn: depth vs width portal link
2. The Logical Expressiveness of Graph Neural Networks portal link
3. Reinforced Genetic Algorithm Learning for Optimizing Computation Graphs portal link
4. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations portal link
5. Contrastive Learning of Structured World Models portal link
6. GraphZoom: A Multi-level Spectral Approach for Accurate and Scalable Graph Embedding portal link
7. An Inductive Bias for Distances: Neural Nets that Respect the Triangle Inequality portal link
8. Learning deep graph matching with channel-independent embedding and Hungarian attention portal link
9. On the Equivalence between Positional Node Embeddings and Structural Graph Representations portal link
iclr.cc
Login
ICLR Website
Thoughts from the first virtual conference
I had a nice experience at the virtual ICLR 2020. Most of the poster sessions were empty, which allowed me to bother the authors with questions. Each paper had two slots during the day, so I could definitely attend it. The chat made it quite easy to find attendees, something I have had difficulty with at real conferences. So, in terms of the insights I gained, it was much more valuable than a real conference. But I didn't present, and I can understand that other people didn't get what they wanted.
By the way, the organizers promised to make the portal available to everyone soon.
Now, here are some insights I gained from the papers.
1) The topic of theoretical explanations of GNNs is hot. We now know some problems that can be approximated with GNNs, functions that GNNs can compute, and limitations of GNNs. [paper 1, paper 2, paper 3, paper 4]
2) One emerging topic is teaching GNNs to learn algorithms instead of performing classification tasks (see the sketch after this list). Here be dragons. [paper 1, paper 2]
3) GNNs are used to represent programs and equations, so potentially you can prove theorems with them. [paper 1, paper 2, paper 3, paper 4, paper 5]
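To make item 2 a bit more concrete, here is a minimal, hypothetical sketch of a single message-passing step whose aggregation mimics the Bellman-Ford relaxation d[v] = min(d[v], d[u] + w(u, v)), the kind of algorithmic step that work on neural algorithm execution trains GNNs to imitate. The module, names, and shapes are my own assumptions, not code from any of the papers.

```python
# Minimal sketch (assumed, not from the papers): one message-passing step
# with min-aggregation, mirroring a Bellman-Ford relaxation over edges.
# Requires a recent PyTorch (scatter_reduce).
import torch
import torch.nn as nn

class BellmanFordStep(nn.Module):
    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        # a message combines the sender's state with the edge weight
        self.message = nn.Sequential(nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU())
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h, edge_index, edge_weight):
        # h: [num_nodes, hidden_dim]; edge_index: [2, num_edges]; edge_weight: [num_edges]
        src, dst = edge_index
        msgs = self.message(torch.cat([h[src], edge_weight.unsqueeze(-1)], dim=-1))
        # min-aggregation per destination node, mirroring d[v] = min(d[v], d[u] + w(u, v))
        agg = torch.full_like(h, float("inf"))
        agg = agg.scatter_reduce(0, dst.unsqueeze(-1).expand_as(msgs), msgs,
                                 reduce="amin", include_self=True)
        agg = torch.where(torch.isinf(agg), torch.zeros_like(agg), agg)  # nodes with no incoming edges
        return self.update(torch.cat([h, agg], dim=-1))

# toy usage: 3 nodes, 2 directed edges (0 -> 1 and 1 -> 2)
h = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1], [1, 2]])
edge_weight = torch.tensor([1.0, 2.0])
out = BellmanFordStep()(h, edge_index, edge_weight)
print(out.shape)  # torch.Size([3, 16])
```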
Videos from Geometric and Relational Deep Learning Workshop
Videos from the workshop are available at https://geometric-relational-dl.github.io. Two of my favorites are:
* Peter Battaglia: Learning Physics with Graph Neural Networks [video]
* Yaron Lipman: Deep Learning of Irregular and Geometric Data [video]
Fresh picks from ArXiv
This week presents the new OGB graph datasets, accepted papers at ACL and SIGIR, and a survey on the Winograd Schema Challenge.
GNN
• Open Graph Benchmark: Datasets for Machine Learning on Graphs with Jure Leskovec
• Low-Dimensional Hyperbolic Knowledge Graph Embeddings from the group of Christopher Ré
• Graph Homomorphism Convolution
Graph Theory
• Independent Set on Pk-Free Graphs in Quasi-Polynomial Time
• Tree-depth and the Formula Complexity of Subgraph Isomorphism
Conferences
• Alleviating the Inconsistency Problem of Applying Graph Neural Network to Fraud Detection SIGIR 20
• Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward ACL 20
• Bipartite Flat-Graph Network for Nested Named Entity Recognition ACL 20
• LogicalFactChecker: Leveraging Logical Operations for Fact Checking with Graph Module Network ACL 20
• A Review of Winograd Schema Challenge Datasets and Approaches IJCAI 20
KDD 2020: Workshop on Deep Learning on Graphs
If you missed the ICML deadlines, there is another good GML workshop at KDD.
Deadline: 15 June
5 pages, double-blind