Workshops at NeurIPS 2020
There are more than 60 workshops at NeurIPS this year. Some relevant ones (with accepted papers available) are Learning Meets Combinatorial Algorithms (LMCA), on ML + NP-hard problems, and Differential Geometry meets Deep Learning (DiffGeo4DL), on geometry and manifolds.
Fresh picks from ArXiv
Today on ArXiv: a new large-scale dataset for graph classification, fast clustering, and reconstructing ancient documents with GNNs ⚰️
If I forgot to mention your paper, please shoot me a message and I will update the post.
Graphs
- A Large-Scale Database for Graph Representation Learning
- Using Graph Neural Networks to Reconstruct Ancient Documents
- Learning to Drop: Robust Graph Neural Network via Topological Denoising
- Towards Better Approximation of Graph Crossing Number
- Distill2Vec: Dynamic Graph Representation Learning with Knowledge Distillation
- Node Attribute Completion in Knowledge Graphs with Multi-Relational Propagation
Conferences
- Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion NeurIPS 2020
- Higher-Order Spectral Clustering of Directed Graphs NeurIPS 2020
- Molecular Mechanics-Driven Graph Neural Network with Multiplex Graph for Molecular Structures Workshop NeurIPS 2020
- IGSQL: Database Schema Interaction Graph Based Neural Model for Context-Dependent Text-to-SQL Generation EMNLP 2020
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks
This paper by Cornell and Facebook made a lot of noise on Twitter recently. In short, it shows that GNNs can be outperformed by simpler models such as MLP + Label Propagation (LP) on several large datasets.
They use LP (actually twice) to propagate labels from training nodes to test nodes. LP has been used successfully for two decades (NIPS 2004, as well as this survey); it just was not directly compared to GNNs. Unfortunately, LP does not use node features, so the authors propose to first apply an MLP to the node features and then run LP on the MLP's predictions and on the labels.
This work applies only to transductive node classification, not to inductive node classification (applying a trained model to new graphs), link prediction, or graph classification. But for transductive node classification it shows very good results in terms of both speed and quality.
Another detail is that LP usually works on homophilous graphs, i.e. graphs where nodes with the same labels have a higher chance of being connected. While this assumption is reasonable, not all graphs have this type of connectivity: for example, mail that goes from a sender to a post office to an aggregator to the recipient may connect nodes of different classes. Petar Veličković talks about this in more detail.
I must add that it's not the first time we see simple models outperform GNNs on existing graph datasets. A year ago there were many works showing that an MLP works better than a GNN on many graph classification datasets (e.g. this paper). MLPs alone don't work that well on OGB datasets, but MLP + LP does. Hopefully this will lead to more graph datasets and subsequently to more insights about which tools are best for graph prediction problems.
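To make the idea more concrete, here is a minimal sketch of combining MLP predictions with label propagation. This is not the authors' exact Correct & Smooth procedure (which propagates twice, once over residual errors and once over predictions); the normalization, clamping, and hyperparameters here are illustrative assumptions.
```python
import numpy as np
import scipy.sparse as sp

def label_propagation(adj, signal, train_idx, train_values, alpha=0.9, n_iters=50):
    """Propagate a per-node signal over the graph with symmetric normalization,
    clamping the known values on training nodes after every step."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    out = signal.copy()
    for _ in range(n_iters):
        out = alpha * np.asarray(norm_adj @ out) + (1.0 - alpha) * signal
        out[train_idx] = train_values  # keep training nodes fixed
    return out

# Usage sketch: `adj` is a scipy sparse adjacency matrix, `mlp_probs` are the
# [n_nodes, n_classes] class probabilities of an MLP trained on node features,
# and `y_onehot` holds one-hot labels for the training nodes.
# preds = label_propagation(adj, mlp_probs, train_idx, y_onehot[train_idx])
# final_labels = preds.argmax(axis=1)
```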
Twitter
Austin Benson
(1/N) Since KDD 2019 in Alaska, I have spent a lot of time trying to understand Graph Neural Networks (GNNs) and the graph mining / ML community’s infatuation with them. This has resulted in two papers on "node prediction", i.e., (semi-supervised) graph…
Video: Recent Developments of Graph Network Architectures
I already did a post with the slides from a great lecture by Xavier Bresson on the latest GNNs. Here is also the video of the presentation.
YouTube
Xavier Bresson "Recent Developments of Graph Network Architectures"
Lecture slides on "Recent Developments of Graph Network Architectures" from my deep learning course. It is a review of some exciting works on graph neural networks published in 2019-2020.
Slides: https://rb.gy/quo3n6
Xavier Bresson, NTU, Singapore
Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings
This is a guest post by Christopher Morris about their recent work accepted to NeurIPS 2020 that deals with higher-order WL algorithms.
Motivation: Since the power of GNNs is upper-bounded by the 1-dimensional Weisfeiler-Leman algorithm (WL) (Xu et al. 2019, Morris et al. 2019), it is natural to design GNNs based on insights from the k-dimensional WL (k-WL), which is a strictly more powerful heuristic for the graph isomorphism problem. Instead of computing colors or features for single vertices, the k-WL gets more powerful by computing colors for k-tuples, defined over the vertex set, and defines a suitable adjacency notion between them to do a message-passing-style update. Hence, it accounts for higher-order interactions between vertices. However, it does not scale and may suffer from overfitting when used in a machine learning setting. Hence, it remains an important open problem to design WL-based graph learning methods that are simultaneously expressive, scalable, and not prone to overfitting.
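(Editor's aside: to make the 1-WL baseline mentioned above concrete, here is a minimal color-refinement sketch; the adjacency-list format and relabeling scheme are illustrative choices, not code from the paper.)
```python
def wl_colors(adj, n_iters=3):
    """1-dimensional Weisfeiler-Leman (color refinement): repeatedly relabel each
    node by its own color together with the multiset of its neighbors' colors."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(n_iters):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj
        }
        # compress signatures into fresh integer colors
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# Two graphs can be isomorphic only if their color histograms match; GNN
# expressiveness is upper-bounded by this test (Xu et al. 2019, Morris et al. 2019).
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(sorted(wl_colors(triangle).values()), sorted(wl_colors(path).values()))
```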
Methodological Contribution: In our paper, we propose local variants of the k-WL and corresponding neural architectures, which consider only a subset of the original neighborhood, making them more scalable and less prone to overfitting. Surprisingly, the expressive power of (one of) our algorithms is strictly higher than that of the original algorithm in terms of the ability to distinguish non-isomorphic graphs. We then lift our results to the neural setting and connect our findings to recent learning-theoretic results for GNNs (Garg et al., 2020), showing that our architectures offer better generalization errors.
Empirical results: Our experimental study confirms that the local algorithms, both the kernel and the neural architectures, lead to vastly reduced computation times and prevent overfitting. The kernel version establishes a new state of the art for graph classification on a wide range of benchmark datasets, while the neural version shows promising performance on large-scale molecular regression tasks.
Future Challenges: While our new sparse architecture leads to a boost in expressive power over standard GNNs and is less prone to overfitting than dense architectures, it still does not scale to truly large graphs. The main reason for this is the exponential dependence on k, i.e., the algorithm still considers all n^k tuples. Hence, designing scalable (higher-order) GNNs that can provably capture graph structure is an important future goal.
In general, we believe that moving away from the restrictive graph isomorphism objective and deriving a deeper understanding of our architecture, when optimized with stochastic gradient descent, are important future goals.
Knowledge Graphs in NLP @ EMNLP 2020
A new digest from Michael Galkin on the applications of knowledge graphs in NLP at the latest EMNLP conference: much bigger models (6.5B parameters), more languages (entity linking across 100 languages), more complex tasks (data-to-text generation).
Medium
Knowledge Graphs in NLP @ EMNLP 2020
Your guide to the KG-related research in NLP, November edition.
Network Repository
A cool interactive repository of about a thousand different graphs. Could be useful if you need graphs with specific properties for specific tasks.
Network Data Repository
Network Data Repository | The First Interactive Network Data Repository
The first interactive network dataset repository with interactive graph visualization and analytics
Fresh picks from ArXiv
Today on ArXiv: smoothing for link prediction, the design space of GNNs, and scalable networks for heterogeneous graphs ⛰
If I forgot to mention your paper, please shoot me a message and I will update the post.
Conferences
- Node Similarity Preserving Graph Convolutional Networks WSDM 2021
- Design Space for Graph Neural Networks with Jure Leskovec, NeurIPS 2020
Graphs
- VLG-Net: Video-Language Graph Matching Network for Video Grounding
- Scalable Graph Neural Networks for Heterogeneous Graphs
- Graph embeddings via matrix factorization for link prediction: smoothing or truncating negatives?
- Reinforcement Learning of Graph Neural Networks for Service Function Chaining
- Quantum algorithms for learning graphs and beyond
Survey
- Survey and Open Problems in Privacy Preserving Knowledge Graph: Merging, Query, Representation, Completion and Applications
- Subpath Queries on Compressed Graphs: a Survey
How do node features affect the performance of GNNs?
This is an open question that I have been thinking about a bit recently. In particular, what surprised me are the results from a recent paper on Label Propagation on a particular dataset, Rice31 (table below).
You can see that some models achieve 80% accuracy, while others only 10% (a random guess). In the paper they say that the node features are heterogeneous features such as gender or major, but after speaking with the authors it seems they use spectral embeddings instead.
I have tried this dataset with GNNs and my results are close to a random guess (10%). I tried several variations of GNNs as well as node features, but didn't get much higher than 15%. Then I tried GBDT with spectral embeddings and it gave me about 50% accuracy. I haven't tried LP yet on this dataset, but it would be remarkable to see LP with spectral embeddings make such a drastic difference compared to GNNs.
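For reference, here is a minimal sketch of the kind of spectral node embeddings I mean, i.e. eigenvectors of the normalized Laplacian fed into a downstream classifier; the dimensionality and classifier are illustrative assumptions, not the exact setup from the paper.
```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_embeddings(adj, dim=32):
    """Node embeddings from the top eigenvectors of the symmetrically normalized
    adjacency D^{-1/2} A D^{-1/2} (equivalently, the smallest eigenvectors of the
    normalized Laplacian). Assumes an undirected graph, i.e. a symmetric `adj`."""
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    _, vecs = eigsh(norm_adj, k=dim, which="LA")  # largest algebraic eigenvalues
    return vecs

# These embeddings can then go into any tabular model (GBDT, MLP, ...):
# emb = spectral_embeddings(adj)
# model.fit(emb[train_idx], y[train_idx])
```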
This and other experiments led me to think that the message-passing paradigm carries a very strong inductive bias, i.e. aggregating information simultaneously over your neighbors may not be a good idea in general. The inductive bias of such a model could be wrong for a particular graph dataset. GNNs work on graph datasets where the way node labels depend on the graph structure closely matches how message passing works. In other words, if you were to create a dataset where a node's label equals the average label of its neighbors, then a GNN with average aggregation would easily learn such a dependency. But if your node labels depend on the structure in some counter-intuitive way (for example, by picking a neighbor at random and assigning its label), then a GNN with average aggregation would fail. More generally, GNN models don't have to follow the message-passing paradigm; they can have very different design principles, and that's something I think we will see in the coming years.
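A toy experiment along these lines (purely illustrative, with made-up label rules, and no GNN training, just the aggregation step) makes the point: mean aggregation recovers labels defined as a neighborhood average, but not labels copied from a single random neighbor.
```python
import numpy as np

rng = np.random.default_rng(0)
n, n_classes = 2000, 4
base_class = rng.integers(n_classes, size=n)  # a "hidden" class per node
neighbors = [rng.choice(n, size=10, replace=False) for _ in range(n)]  # random neighbor lists

# Rule A: node label = majority class among its neighbors.
y_a = np.array([np.bincount(base_class[nb], minlength=n_classes).argmax() for nb in neighbors])
# Rule B: node label = class of one randomly picked neighbor.
y_b = np.array([base_class[rng.choice(nb)] for nb in neighbors])

# Mean aggregation over the neighborhood: average the neighbors' one-hot classes, take argmax.
onehot = np.eye(n_classes)[base_class]
agg_pred = np.array([onehot[nb].mean(axis=0).argmax() for nb in neighbors])

print("rule A accuracy:", (agg_pred == y_a).mean())  # 1.0: the target is exactly the aggregated signal
print("rule B accuracy:", (agg_pred == y_b).mean())  # much lower: the aggregation discards what matters
```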
Jraph - a library for graph neural networks in JAX
Jraph is a new library by DeepMind for constructing GNNs in JAX (autograd computation) and Haiku (for writing neural network layers). Could be useful if you cannot use PyTorch.
GitHub
GitHub - google-deepmind/jraph: A Graph Neural Network Library in Jax
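As a quick feel for the API, here is a minimal sketch of building a toy graph and running one message-passing step with jraph, based on my reading of the repo's README at the time; treat the exact signatures as assumptions and check the docs.
```python
import jax.numpy as jnp
import jraph

# A toy 3-node graph with 2 directed edges, in jraph's GraphsTuple format.
graph = jraph.GraphsTuple(
    nodes=jnp.array([[1.0], [2.0], [3.0]]),  # node features [n_node, d]
    edges=jnp.array([[1.0], [1.0]]),          # edge features [n_edge, d]
    senders=jnp.array([0, 1]),
    receivers=jnp.array([1, 2]),
    n_node=jnp.array([3]),
    n_edge=jnp.array([2]),
    globals=jnp.array([[0.0]]),
)

# One GraphNetwork block with simple (non-learned) update functions;
# in practice these would be Haiku MLPs.
net = jraph.GraphNetwork(
    update_edge_fn=lambda e, s, r, g: e + s,     # edge <- edge + sender node features
    update_node_fn=lambda n, se, re, g: n + re,  # node <- node + aggregated incoming edges
    update_global_fn=lambda n, e, g: g,          # leave the globals unchanged
)

out = net(graph)
print(out.nodes)
```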
Graph Machine Learning research groups: Jimeng Sun
I do a series of posts on the groups in graph research; the previous post is here. The 19th is Jimeng Sun, the head of SunLab at UIUC, who teaches the courses Big Data Analytics and Healthcare as well as Computing and Society.
Jimeng Sun (~1981)
- Affiliation: University of Illinois Urbana-Champaign
- Education: Ph.D. at CMU in 2002 (advisor: Christos Faloutsos)
- h-index 66
- Awards: KDD, ICDM, SDM best paper awards
- Interests: drug discovery, GNNs, graph mining
Telegram
Graph Machine Learning
Graph Machine Learning research groups: Albert-László Barabási
I do a series of posts on the groups in graph research, previous post is here. The 18th is Albert-László Barabási, who introduced scale-free networks and in particular Barabási–Albert model and…
Planarity game
If you need some time to procrastinate and want to do it with graphs, here is a fun game to play, called Tronix2. You just need to make the drawn graphs planar. There are several clones of this game (here and here), which even explain how to generate planar graphs. And here is a Numberphile video about planar graphs.
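If you'd rather check planarity in code than by dragging vertices around, networkx has a built-in planarity test. A tiny sketch (the random-graph parameters are arbitrary):
```python
import networkx as nx

# A random graph: sparse ones are often planar, dense ones almost never are.
G = nx.gnm_random_graph(n=10, m=15, seed=42)

is_planar, embedding = nx.check_planarity(G)
print("planar:", is_planar)
if is_planar:
    # a combinatorial planar embedding: cyclic order of neighbors around each node
    print(embedding.get_data())
```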
Golden Knowledge Graph
Golden is a Silicon Valley startup building a knowledge database (similar to Wikipedia). It is a good example of how knowledge graphs can be commercialized.
Fresh picks from ArXiv
Today on ArXiv: fast algorithms for dynamic graphs, new adversarial attacks, and GNNs for generating dances 💃
If I forgot to mention your paper, please shoot me a message and I will update the post.
GNN
- Curvature Regularization to Prevent Distortion in Graph Embedding NeurIPS 2020
- Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules with Stephan Günnemann, Workshop NeurIPS 2020
- A Targeted Universal Attack on Graph Convolutional Network
- Fast and Accurate Anomaly Detection in Dynamic Graphs with a Two-Pronged Approach with Christos Faloutsos
- Near-Optimal Algorithms for Reachability, Strongly-Connected Components and Shortest Paths in Partially Dynamic Digraphs
Applications
- Video Self-Stitching Graph Network for Temporal Action Localization
- Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio
- RetroGNN: Approximating Retrosynthesis by Graph Neural Networks for De Novo Drug Design Workshop NeurIPS 2020
- Quantifying Explainers of Graph Neural Networks in Computational Pathology
Undergraduate Math Student Pushes Frontier of Graph Theory
A new article at Quanta Magazine about a 21-year-old who improved the results of Erdős and Szekeres on the upper bound for two-color Ramsey numbers. Informally, Ramsey numbers can be explained as "how big graphs can get before patterns inevitably emerge". This is in addition to the recent proof for lower bounds, also covered in Quanta.
Quanta Magazine
Undergraduate Math Student Pushes Frontier of Graph Theory
At 21, Ashwin Sah has produced a body of work that senior mathematicians say is nearly unprecedented for a college student.
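To make the informal definition concrete, here is a tiny brute-force check of the classic fact R(3,3) = 6 (an illustration of the definition only, unrelated to the new bounds in the article): every 2-coloring of the edges of K6 contains a monochromatic triangle, while K5 admits a coloring with none.
```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """`coloring` maps each edge (i, j) with i < j to color 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_has_mono_triangle(n):
    """Check all 2-colorings of the complete graph K_n."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

print(every_coloring_has_mono_triangle(5))  # False: K5 can avoid monochromatic triangles
print(every_coloring_has_mono_triangle(6))  # True: together these give R(3,3) = 6
```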
ICLR 2021 Graph Papers
Here is a list of graph papers with their final scores. This is in addition to the list of all the papers. Overall, 74 papers (out of 208 graph papers) increased their scores and 18 decreased.
Twitter
Sergey Ivanov
#ICLR2021 reviews are finalized and below is a final table with scores. Here is a teaser of how rebuttal helps the papers. 🧵1/n https://t.co/GISD8ukXWs
Geometric Deep Learning: Successes, Challenges, Next Steps
A good video overview of the field by Michael Bronstein: deriving convolution from first principles, the first GNN models, the expressivity of GNNs, and future applications of the field.
YouTube
DVL Guest Lectures - Michael Bronstein, Imperial College London
Guest lecture by Michael Bronstein (Imperial College London, USI, Twitter)
Abstract: Geometric deep learning has recently become one of the hottest topics in machine learning, with its particular instance, graph neural networks, being used in a broad spectrum…
Open Access Theses and Dissertations
If you are seeking inspiration for your dissertation, or want to check out the latest monolithic works in the graph community, take a look at the OATD portal. Here is, for example, a search for all dissertations that have graph in their title, resulting in ~400 PhD and ~100 MSc theses just in the 2016-2020 period.
Don't crack under pressure
I remember that when I was interning at HKUST during my master's years, I had a chance to see a motivational presentation for freshmen given by one of the tenured professors there. One of the things he emphasized is that a PhD is a stressful experience with lots of uncertainty, and you should stay focused, be brave, and not crack under the pressure. Here is a funny story that demonstrates it: a guy realized he had a bug 2 weeks before submitting his PhD thesis and asked the MathOverflow community to help him fix it; luckily, it worked out well.
MathOverflow
Issue UPDATE: in graph theory, different definitions of edge crossing numbers - impact on applications?
QUICK FINAL UPDATE: Just wanted to thank you MO users for all your support. Special thanks for the fast answers, I've accepted first one, appreciated the clarity it gave me. I've updated my torus
Graph Mining & Learning Workshop NeurIPS 2020
NeurIPS 2020 has just started and there is a good workshop by Google Research on graph mining (by Bryan Perozzi and others). To see the videos you need to register for the expo, but it's free; you will then have links to the videos. There is also a 312-page slide deck on many interesting topics.
Update: it seems there is a bug in the registration panel, so you can access the schedule and videos on this page.
Graph Mining & Learning
Graph Mining and Learning @ NeurIPS
at the 2020 Neural Information Processing Systems Conference