Fresh picks from ArXiv
This week highlights a new knowledge graph about COVID-19, applications to program similarity and drug discovery, as well as a batch of papers accepted to ECCV 20.
GNN
• COVID-19 Knowledge Graph: Accelerating Information Retrieval and Discovery for Scientific Literature
• funcGNN: A Graph Neural Network Approach to Program Similarity
• The expressive power of kth-order invariant graph networks
• Fast Graphlet Transform of Sparse Graphs
• Visualizing Deep Graph Generative Models for Drug Discovery
• Second-Order Pooling for Graph Neural Networks
• Graph-PCNN: Two Stage Human Pose Estimation with Graph Pose Refinement
• Hierarchical Protein Function Prediction with Tails-GNNs with Petar Veličković
Conferences
• Multi-view adaptive graph convolutions for graph classification ECCV 20
• Comprehensive Image Captioning via Scene Graph Decomposition ECCV 20
• Differentiable Hierarchical Graph Grouping for Multi-Person Pose Estimation ECCV 20
• Grale: Designing Networks for Graph Learning with Bryan Perozzi, KDD 20
• Edge-aware Graph Representation Learning and Reasoning for Face Parsing ECCV 20
Surveys
• A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges
• A Survey on Graph Neural Networks for Knowledge Graph Completion
Discovering Symbolic Models in Physical Systems Using Deep Learning
Today (July 29, at 12:00 EDT) there will be a Zoom lecture on applying GNNs to cosmology by Shirley Ho at the Physics ∩ ML seminar.
Abstract: We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example—a detailed dark matter simulation—and discover a new analytic formula that can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn.
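For intuition, here is a minimal sketch of the first stage of such a pipeline: a tiny message-passing network trained with an L1 penalty on its messages, so that only a few message components stay active and can later be handed to a symbolic-regression tool. The network sizes, the spring-like toy target, and the penalty weight are all illustrative assumptions, not the authors' exact setup.

```python
# Sketch of stage one: train a small message-passing network with an L1 penalty
# on the messages, so that only a few message components stay active.
# All sizes, the toy data, and the penalty weight are illustrative assumptions.
import torch
import torch.nn as nn

class EdgeModel(nn.Module):
    """Computes a message vector for each ordered pair of particles."""
    def __init__(self, n_feat=4, msg_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * n_feat, 64), nn.ReLU(),
                                 nn.Linear(64, msg_dim))
    def forward(self, xi, xj):
        return self.net(torch.cat([xi, xj], dim=-1))

class NodeModel(nn.Module):
    """Predicts per-particle acceleration from summed incoming messages."""
    def __init__(self, n_feat=4, msg_dim=16, out_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_feat + msg_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))
    def forward(self, x, agg_msg):
        return self.net(torch.cat([x, agg_msg], dim=-1))

def forward_pass(edge_model, node_model, x):
    # x: [n_nodes, n_feat] (2D position + 2D velocity); fully connected graph.
    n = x.shape[0]
    idx = torch.arange(n)
    src, dst = idx.repeat_interleave(n), idx.repeat(n)   # all ordered pairs
    mask = src != dst
    messages = edge_model(x[src[mask]], x[dst[mask]])    # [n*(n-1), msg_dim]
    agg = torch.zeros(n, messages.shape[-1]).index_add_(0, src[mask], messages)
    return node_model(x, agg), messages

edge_model, node_model = EdgeModel(), NodeModel()
opt = torch.optim.Adam(list(edge_model.parameters()) + list(node_model.parameters()), lr=1e-3)

for step in range(200):
    x = torch.randn(8, 4)                          # toy particles
    target = -(x[:, :2] - x[:, :2].mean(0))        # toy spring-like acceleration
    pred, messages = forward_pass(edge_model, node_model, x)
    loss = ((pred - target) ** 2).mean() + 1e-2 * messages.abs().mean()  # L1 on messages
    opt.zero_grad(); loss.backward(); opt.step()

# Stage two (not shown): fit symbolic expressions to the most active message
# components as functions of the pairwise inputs, e.g. with a symbolic-regression
# library, and compare the recovered formula with the known force law.
```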
Podcast with Michael Bronstein
There is a podcast called This Week in Machine Learning & AI (TWIML) about aspects of AI. Michael Bronstein, head of graph machine learning at Twitter, recently gave a lengthy interview about the evolution of the field over the last two years. He describes current challenges (e.g. scalability), differences between industrial and academic settings for graphs, his recent work, as well as predictions of where the area of GML is heading.
Main theme from GRL+ workshop
I already mentioned it, but let me add a few more notes on trends in GML (credits to Petar Veličković).
The biggest theme of the GRL+ workshop was the explicit consideration of structure, which has so far been largely ignored in GNNs (i.e. one would just assume a given graph without thinking about how it got there or whether it could be specialized for the task at hand).
In the accepted papers, we have many works which tackle latent structure inference (e.g. Differentiable Graph Module, set2graph, Relate-and-Predict, GFSA, and our PGN are all examples thereof) and also works which attempt to explicitly exploit structure in the data for prediction (e.g. the recent subgraph isomorphism counting paper).
This direction was echoed a lot in our invited talks as well.
Thomas Kipf talked about relational structure discovery (NRI, CompILE, and his recent slot attention work).
Kyle Cranmer talked about how critical structure discovery and inductive biases are in physics-based applications, highlighting in particular his set2graph work as well as their recent work on discovering symbolic representations.
Danai Koutra talked about how graphs can be appropriately summarized and how to design GNN layers to deal with heterophily.
Tina Eliassi-Rad gave an amazing lecture-style talk on how topology and structure can be leveraged in machine learning more generally. During our Q&A session, she was asked to comment on the explosive usage of datasets like Cora (as she is one of the authors of the paper that originally proposed Cora, Citeseer, etc.). She made a very important 'wake-up call' to GRL folks: we shouldn't think our graphs fall from the sky, and when using heavy-duty GNN methods, hyperbolic embeddings, etc. in the real world, we should always ask the question: 'do we really expect our graphs to be coming from a distribution like this?'.
Videos of all the talks should be available in the coming weeks.
Graph Machine Learning research groups: Kristian Kersting
I am doing a series of posts on research groups in graph machine learning; the previous post is here. The 11th is Kristian Kersting, co-author of the TU datasets and several graph kernels.
Kristian Kersting (1973)
- Affiliation: TU Darmstadt
- Education: Ph.D. at the University of Freiburg, Germany in 2014 (supervised by Luc De Raedt);
- h-index: 49;
- Awards: best paper at ECML, AAAI; Inaugural German AI Award;
- Interests: graph kernels, graph data sets
Controlling Fake News using Graphs and Statistics
This is a guest post by Siddharth Bhatia about their recent work with Christos Faloutsos on anomaly detection in streaming data.
MIDAS, a Microcluster-Based Detector of Anomalies in Edge Streams (AAAI 2020), uses unsupervised learning to detect anomalies in a streaming manner in real time. It was designed with the way recent sophisticated attacks occur in mind. MIDAS can be used to detect intrusions, Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, financial fraud, and fake ratings.
MIDAS combines a chi-squared goodness-of-fit test with Count-Min Sketch (CMS) streaming data structures to get an anomaly score for each edge (a toy sketch of the scoring idea is given below). It then incorporates temporal and spatial relations to achieve better performance. MIDAS provides theoretical guarantees on the false positives and is three orders of magnitude faster than existing state-of-the-art solutions.
Paper: https://arxiv.org/abs/1911.04464
Code: https://github.com/Stream-AD/MIDAS
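To make the scoring idea concrete, here is a toy sketch (not the official implementation linked above): two Count-Min Sketches track total and current-tick edge counts, and each arriving edge is scored with a two-bin chi-squared statistic. The sketch sizes, hash scheme, and example stream are illustrative assumptions.

```python
# Toy sketch: Count-Min Sketches for total and current-tick edge counts,
# scored with a two-bin chi-squared statistic (current tick vs. all ticks).
# Sizes, hashing, and the stream are illustrative assumptions.
import numpy as np

class CountMinSketch:
    def __init__(self, depth=4, width=512, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.salts = rng.integers(1, 2**31 - 1, size=depth)
        self.width = width

    def _cols(self, key):
        return [hash((int(salt), key)) % self.width for salt in self.salts]

    def add(self, key):
        for row, col in enumerate(self._cols(key)):
            self.table[row, col] += 1

    def query(self, key):  # upper bound on the true count
        return min(self.table[row, col] for row, col in enumerate(self._cols(key)))

    def clear(self):
        self.table[:] = 0

def chi_squared_score(a, s, t):
    """Two-bin chi-squared: current-tick count a vs. total count s over t ticks."""
    if s == 0 or t <= 1:
        return 0.0
    return (a - s / t) ** 2 * t ** 2 / (s * (t - 1))

total, current = CountMinSketch(seed=1), CountMinSketch(seed=2)
stream = [(1, 2, 1), (2, 3, 1), (1, 2, 2), (1, 2, 3), (1, 2, 3), (1, 2, 3), (4, 5, 3)]
last_tick = None
for u, v, t in stream:
    if t != last_tick:          # new tick: reset the current-tick sketch
        current.clear()
        last_tick = t
    total.add((u, v)); current.add((u, v))
    score = chi_squared_score(current.query((u, v)), total.query((u, v)), t)
    print(f"edge ({u},{v}) at tick {t}: anomaly score {score:.2f}")
```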
Graph Machine Learning Newsletter
I finally had time to compose the first issue of my newsletter on graph machine learning. It will be out soon!
Please subscribe and share it with your friends: https://newsletter.ivanovml.com/ (or, in case, it gives a warning here is a backup link: https://www.getrevue.co/profile/graphML).
My hope is that it will be similar to Ruder's newsletter on NLP, highlighting recent developments, current trends, and upcoming events in GML. I plan to send 1-2 issues per month, so it will be less frequent, but with longer reads about our field.
In case you saw recent blog posts, interviews, conference highlights, industry updates, or anything else worth sharing with the community, don't hesitate to write to me.
Fresh picks from ArXiv
This week covers architecture search and low-latency inference in GNNs, as well as a review of graph signal processing 📶
GNN
• Neural Architecture Search in Graph Neural Networks
• FC-GAGA: Fully Connected Gated Graph Architecture for Spatio-Temporal Traffic Forecasting
• Pooling Regularized Graph Neural Network for fMRI Biomarker Analysis
• GRIP: A Graph Neural Network Accelerator Architecture with Christopher Re
• PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings
Math
• A polynomial-time algorithm to determine (almost) Hamiltonicity of dense regular graphs
Surveys
• Graph signal processing for machine learning: A review and new perspectives with Michael Bronstein
Videos for ICML 2020 workshops and tutorials
Available at SlidesLive.
Two related to GML are:
• GRL+ workshop
• Bridge Between Perception and Reasoning: Graph Neural Networks & Beyond workshop
Probabilistic Learning on Graphs via Contextual Architectures
This is a guest post by Federico Errica ([email protected]) about their new JMLR work called “Probabilistic Learning on Graphs via Contextual Architectures”.
Intro/TL;DR:
We propose a probabilistic methodology for representation learning on graph-structured data, in which a stack of Bayesian networks learns different distributions of a vertex's neighbourhood. The main characteristics of our approach are: (i) it is unsupervised, as it models the generation of node attributes; (ii) layer-wise training; (iii) an incremental construction policy; (iv) maximum likelihood estimation with Expectation-Maximization. The model, called Contextual Graph Markov Model (CGMM), can be regarded as a probabilistic version of Deep Graph Networks (DGNs).
Each layer of the model implements a probabilistic version of neighbourhood aggregation. The hidden representation of each node is modelled as a categorical distribution. When aggregating neighbours, the incoming messages are the *frozen* posterior probabilities computed when training the previous layers. When discrete edge types are available, we can weight the contribution of nodes in different ways using the Switching Parent approximation. Moreover, each neighbour aggregation can be conditioned on an arbitrary subset of the previous layers.
By design, this incremental construction policy avoids the exploding/vanishing gradient effect. As a result, each layer exploits different sets of statistics when trying to maximize the likelihood of the nodes in each graph. We test the model on node and graph classification tasks. First, we generate unsupervised node/graph representations; then, we apply a standard ML classifier to output the right class. In turn, this leads to a critical analysis of some benchmarks used in the literature. Finally, we show that the performance of the model increases as we add more layers (up to 20).
Paper: https://www.jmlr.org/papers/v21/19-470.html
Code: https://github.com/diningphil/CGMM
Related reads: (i) https://doi.org/10.1016/j.neunet.2020.06.006 (ii) https://proceedings.mlr.press/v80/bacciu18a.html
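Below is a structural sketch of the layer-wise, incremental idea described above, not the authors' model: each layer fits a probabilistic model on node features concatenated with the frozen, neighbour-aggregated posteriors of the previous layer, and only the new posteriors are passed upwards. A Gaussian mixture stands in for CGMM's categorical emission model purely to keep the sketch short; the real training uses EM with categorical distributions and switching-parent weighting (see the paper and code above).

```python
# Structural sketch only: layer-wise, incremental stacking of probabilistic
# "layers" on frozen posteriors from below. A GaussianMixture is a stand-in for
# CGMM's categorical emission model; all sizes are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def neighbour_mean(posteriors, adj):
    """Average the previous layer's (frozen) posteriors over each node's neighbours."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return adj @ posteriors / deg

def fit_stack(x, adj, n_layers=3, n_states=5, seed=0):
    posteriors = np.full((x.shape[0], n_states), 1.0 / n_states)   # uninformative start
    all_posteriors = []
    for layer in range(n_layers):
        context = neighbour_mean(posteriors, adj)                  # frozen messages from below
        gm = GaussianMixture(n_components=n_states, covariance_type="diag",
                             random_state=seed).fit(np.hstack([x, context]))
        posteriors = gm.predict_proba(np.hstack([x, context]))     # frozen for the next layer
        all_posteriors.append(posteriors)
    # Concatenated per-layer posteriors serve as an unsupervised node embedding,
    # to be fed to a downstream classifier (as in the node/graph tasks above).
    return np.hstack(all_posteriors)

# Toy usage on a random graph with random node features.
rng = np.random.default_rng(0)
adj = (rng.random((30, 30)) < 0.1).astype(float); adj = np.maximum(adj, adj.T)
x = rng.normal(size=(30, 4))
emb = fit_stack(x, adj)
print(emb.shape)   # (30, n_layers * n_states)
```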
Issue #1: Introduction, PAC Isometry, Over-Smoothing, and Evolution of the Field
Finally, the first issue of the newsletter is out, and I hope there will be many more in the future. The most difficult part is finding good stories for the email: it's somewhat different from posting on Telegram and Twitter, as you need to pack more insight into a single story. So if you find something that could be relevant to the community, definitely send me a message.
Simple scalable graph neural networks
Michael Bronstein continues a marathon of great blog posts on GML. In a new post he describes their recent work on scaling GNNs to large networks. There is a good introduction to sampling-based methods (e.g. SAGE, GraphSAINT, ClusterGCN), which sample a subgraph of a large graph and then train the GNN only on that subgraph.
Then, he explains why it can be beneficial to simply precompute the r-hop matrices A^r X and train an MLP on these features: this way you still use the topology of your graph, while training reduces to ordinary mini-batch training of an MLP (a sketch is given below).
What's cool is that the algorithm is already available in PyTorch Geometric as a transform, implemented via SparseTensor matrix multiplication.
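Here is a minimal sketch of that precomputation idea under assumed choices (GCN-style normalization, a random toy graph, an sklearn MLP); it is not the exact recipe from the post, but it shows why training becomes plain mini-batch learning on fixed features.

```python
# Sketch: precompute [X, AX, A^2X, ..., A^rX] once, then train a plain MLP.
# Normalization choice, toy graph, and classifier are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
from sklearn.neural_network import MLPClassifier

def normalized_adjacency(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 (GCN-style, with self-loops)."""
    adj = adj + sp.eye(adj.shape[0])
    d_inv_sqrt = np.asarray(adj.sum(axis=1)).ravel() ** -0.5
    d = sp.diags(d_inv_sqrt)
    return d @ adj @ d

def precompute_features(adj, x, r=3):
    a_norm = normalized_adjacency(adj)
    feats, cur = [x], x
    for _ in range(r):
        cur = a_norm @ cur            # one more hop of diffusion, done once, offline
        feats.append(cur)
    return np.hstack(feats)           # [X, AX, ..., A^rX]

# Toy usage: random sparse graph, random features, random labels.
rng = np.random.default_rng(0)
n = 200
adj = sp.random(n, n, density=0.02, random_state=0, format="csr")
adj = adj + adj.T                     # make it symmetric
x = rng.normal(size=(n, 16))
y = rng.integers(0, 3, size=n)

features = precompute_features(adj, x, r=3)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(features, y)
print(clf.score(features, y))
```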
MLSS Indo 2020: Equivariance
MLSS-Indonesia is part of the MLSS series, one of the top summer schools on ML. Here are a lecture and slides on equivariance and group theory, and how they are used for convolutions.
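As a tiny illustration of the kind of statement covered in the lecture, the snippet below numerically checks that a circular convolution is translation-equivariant: shifting the input and then convolving gives the same result as convolving and then shifting the output.

```python
# Toy numeric check: circular convolution commutes with cyclic shifts.
import numpy as np

def circular_conv(x, w):
    n = len(x)
    return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w))) for i in range(n)])

rng = np.random.default_rng(0)
x, w, shift = rng.normal(size=32), rng.normal(size=5), 7

lhs = circular_conv(np.roll(x, shift), w)   # shift, then convolve
rhs = np.roll(circular_conv(x, w), shift)   # convolve, then shift
print(np.allclose(lhs, rhs))                # True: the convolution is shift-equivariant
```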
Fresh picks from ArXiv
This week discusses ways to compute a distance between graphs, fast triangle counting, and a new i.i.d. test 🧪
GNN
• Adversarial Directed Graph Embedding with Philip S. Yu
• CoCoS: Fast and Accurate Distributed Triangle Counting in Graph Streams with Christos Faloutsos
• Network comparison and the within-ensemble graph distance
Math
• Using Expander Graphs to test whether samples are i.i.d
Surveys
• Graph Signal Processing for Geometric Data and Beyond: Theory and Applications
• Graph Neural Networks: Architectures, Stability and Transferability
• Random Walks: A Review of Algorithms and Applications
• Big Networks: A Survey
Israeli Geometric Deep Learning Workshop
Many cool presentations at the recent iGDL workshop, including by Yaron Lipman, Gal Chechik, and many other experienced people in this field. The video is on YouTube.
The Quantum Graph Recurrent Neural Network
This demonstration by PennyLane investigates the quantum graph recurrent neural network (QGRNN), which is the quantum analogue of a classical graph recurrent neural network and a subclass of the more general quantum graph neural network ansatz. Both the QGNN and QGRNN were introduced in this paper (2019) by Google X.
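For a flavour of what such circuits look like, here is a minimal PennyLane sketch (assumptions: a 3-node toy interaction graph, a Hadamard initial state, and hand-picked couplings) of Trotterized time evolution under a transverse-field Ising Hamiltonian defined on a graph, the building block the QGRNN is based on; it is not the full demo.

```python
# Minimal sketch: Trotterized evolution under a graph-defined Ising Hamiltonian.
# The graph, initial state, and couplings are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]      # toy interaction graph
dev = qml.device("default.qubit", wires=3)

def ising_trotter_step(zz_weights, x_weights, dt):
    # One Trotter step of exp(-i H dt) for H = sum_ij w_ij Z_i Z_j + sum_i w_i X_i.
    for (i, j), w in zip(edges, zz_weights):
        qml.IsingZZ(2 * w * dt, wires=[i, j])
    for i, w in enumerate(x_weights):
        qml.RX(2 * w * dt, wires=i)

@qml.qnode(dev)
def circuit(zz_weights, x_weights, steps=3, dt=0.1):
    for w in range(3):
        qml.Hadamard(wires=w)         # simple initial state (assumption)
    for _ in range(steps):
        ising_trotter_step(zz_weights, x_weights, dt)
    return [qml.expval(qml.PauliZ(w)) for w in range(3)]

print(circuit(np.array([0.5, -0.3, 0.8]), np.array([0.2, 0.2, 0.2])))
```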
Drawing neural networks in LaTeX
There is a repo of good examples by Petar Veličković showing how to draw TikZ figures in LaTeX. Here is an example of a 1-layer GNN by Matthias Fey.
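For a flavour of what such figures look like in source form, here is a minimal, self-contained TikZ example (my own toy snippet, not one of the figures from the repo): a small graph with a highlighted node and the messages flowing into it.

```latex
% Minimal, self-contained TikZ example (illustrative, not from the repo):
% a small graph with one highlighted node and its incoming messages.
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}[every node/.style={circle, draw, minimum size=7mm}]
  \node[fill=blue!20] (a) at (0, 0)   {$v$};
  \node (b) at (2, 1)  {$u_1$};
  \node (c) at (2, -1) {$u_2$};
  \node (d) at (4, 0)  {$u_3$};
  \draw (a) -- (b) -- (d);
  \draw (a) -- (c) -- (d);
  % Messages flowing into the highlighted node v:
  \draw[->, thick, blue] (b) to[bend right=15] (a);
  \draw[->, thick, blue] (c) to[bend left=15]  (a);
\end{tikzpicture}
\end{document}
```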