Graph Nets library
Graph Nets is DeepMind's library for building graph networks in Tensorflow and Sonnet: https://github.com/deepmind/graph_nets
#ArtificialIntelligence #GraphNetworks #Graphs #DeepLearning #NeuralNetworks #TensorFlow
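For orientation, here is a minimal usage sketch adapted from the repo's README; the toy three-node graph, feature sizes, and MLP widths are my own illustrative choices, and it assumes TensorFlow/Sonnet versions compatible with the installed graph_nets (it runs eagerly under TF 2; under TF 1 the outputs are tensors to be evaluated in a session).
```python
# Minimal sketch adapted from the graph_nets README; the toy graph and
# layer sizes are illustrative, not from the original post.
import numpy as np
import sonnet as snt
from graph_nets import modules, utils_tf

# One directed graph: 3 nodes, 2 edges (0 -> 1, 1 -> 2), one global feature.
data_dict = {
    "globals": np.zeros([1], dtype=np.float32),
    "nodes": np.random.randn(3, 4).astype(np.float32),
    "edges": np.random.randn(2, 4).astype(np.float32),
    "senders": np.array([0, 1], dtype=np.int32),
    "receivers": np.array([1, 2], dtype=np.int32),
}
input_graphs = utils_tf.data_dicts_to_graphs_tuple([data_dict])

# A full graph-network block: edge, node, and global update functions are MLPs.
graph_net = modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

output_graphs = graph_net(input_graphs)  # same structure, updated features
```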
Complex-YOLO: Real-time 3D Object Detection on Point Clouds
Simon et al.: https://arxiv.org/abs/1803.06199
#ArtificialIntelligence #ComputerVision #DeepLearning #MachineLearning #PatternRecognition
Variational Autoencoders and Nonlinear ICA: A Unifying Framework
Khemakhem et al.: https://arxiv.org/abs/1907.04809
#MachineLearning #GenerativeModels #VariationalAutoencoders
Geoffrey Hinton: "Does the Brain do Inverse Graphics?"
https://www.youtube.com/watch?v=TFIMqt0yT2I
Graduate Summer School 2012: Deep Learning, Feature Learning
"Does the Brain do Inverse Graphics?"
Geoffrey Hinton, University of Toronto
Institute for Pure and Applied Mathematics, UCLA
July 12, 2012
For more information: https://www.ipam.ucla.edu/programs/summer…
"Does the Brain do Inverse Graphics?"
Geoffrey Hinton, University of Toronto
Institute for Pure and Applied Mathematics, UCLA
July 12, 2012
For more information: https://www.ipam.ucla.edu/programs/summer…
A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
Bengio et al.: https://arxiv.org/abs/1901.10912
#MetaTransfer #CausalMechanisms #ArtificialIntelligence
Learning Neural Causal Models from Unknown Interventions
Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Hugo Larochelle, Chris Pal, Yoshua Bengio : https://arxiv.org/abs/1910.01075
#CausalModels #MachineLearning #ArtificialIntelligence
Handbook of Graphical Models
Marloes Maathuis, Mathias Drton, Steen Lauritzen and Martin Wainwright : https://stat.ethz.ch/~maathuis/papers/Handbook.pdf
#Handbook #GraphicalModels
Causal Inference: What If
Miguel A. Hernán, James M. Robins : https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2019/11/ci_hernanrobins_10nov19.pdf
#CausalInference
Breast Histopathology Images Dataset
Download: https://www.kaggle.com/paultimothymooney/breast-histopathology-images
198,738 IDC(-) image patches; 78,786 IDC(+) image patches
A tutorial to implement state-of-the-art NLP models with Fastai for Sentiment Analysis
Maximilien Roberti : https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
#FastAI #NLP #Transformers
Pre-Debate Material :
Recurrent Independent Mechanisms
Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf : https://arxiv.org/abs/1909.10893
#MachineLearning #Generalization #ArtificialIntelligence
Pre-Debate Material :
Meta transfer learning for factorizing representations and knowledge for AI - Yoshua Bengio : https://youtu.be/CHnJYBpMjNY
#AIDebate #MontrealAI
Speaker: Yoshua Bengio
Title: Meta transfer learning for factorizing representations and knowledge for AI
Abstract:
Whereas machine learning theory has focused on generalization to examples from the same distribution as the training data, better understanding…
A decade ago we weren’t sure neural nets could ever deal with language.
Now the latest AI models crush language benchmarks faster than we can come up with language benchmarks.
Far from having an AI winter, we are having a second AI spring.
The first AI spring, of course, began with ImageNet. Created by Dr. Fei-Fei Li and her team in 2009, it was the first large-scale image-classification benchmark built from photographs rather than handwriting samples or thumbnails.
In 2012, AlexNet using GPUs took 1st place. In 2015, ResNet reached human performance.
In the years that followed neural nets made strong progress on voice recognition and machine translation.
Baidu’s Deep Speech 2 recognized spoken Chinese on par with humans. Google’s neural machine translation system reduced translation errors by roughly 60% compared with the existing phrase-based system.
In language understanding, neural nets did well on individual tasks such as WikiQA, TREC, and SQuAD, but it wasn’t clear they could master a range of tasks the way humans do.
Thus GLUE was created—a suite of nine diverse language-understanding tasks that would hopefully keep researchers busy for years.
It took six years for neural nets to catch up to human performance on ImageNet.
Transformer-based neural nets (BERT, GPT) beat human performance on GLUE in less than a year.
Progress in language understanding was so rapid that the authors of GLUE were forced to create a new version of the benchmark, “SuperGLUE”, in 2019.
SuperGLUE is HARD, far harder than a naive Turing Test. Just look at the example prompts.
Will SuperGLUE stand the test of time? It appears not. Six months in, Google’s T5 model is within 1% of human performance.
Neural nets are beating language benchmarks faster than benchmarks can be created.
Yet first-hand experience contradicts this progress—Alexa, Siri, and Google Assistant still lack basic common sense.
Why? Is it simply a lag between research and deployment, or are open-ended human questions just much harder than benchmarks?
See:
Google's T5 (Text-To-Text Transfer Transformer) language model set a new record and gets very close to human performance on the SuperGLUE benchmark.
https://bit.ly/2XQkKxO
Paper: https://arxiv.org/abs/1910.10683
Code: https://github.com/google…/text-to-text-transfer-transformer
SuperGLUE Benchmark
SuperGLUE is a new benchmark styled after the original GLUE benchmark, with a set of more difficult language understanding tasks, improved resources, and a new public leaderboard.
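To get a feel for T5's text-to-text framing mentioned above, here is a minimal sketch; it uses the Hugging Face transformers port of T5 rather than the linked Google repository, and the checkpoint name and prompt are illustrative assumptions.
```python
# Minimal sketch of T5's text-to-text interface, using the Hugging Face
# `transformers` port rather than the linked Google repo; the "t5-small"
# checkpoint and the prompt below are illustrative assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is phrased as text in, text out; this prefix is one of the
# tasks T5 was pre-trained on.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```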
Pre-Debate Material :
WSAI Americas 2019 - Yoshua Bengio - Moving beyond supervised deep learning : https://youtu.be/0GsZ_LN9B24
#AIDebate #MontrealAI
Watch Yoshua Bengio, Professor of Computer Science and Operations Research at Université de Montréal on stage at World Summit AI Americas 2019. americas.worldsummit.ai
Meta-transfer learning for factorizing representations, causal graphs and knowledge for AI
Discover causal representations
Beyond i.i.d.: independent mechanisms and single-variable interventions
Causal structure and knowledge factorization; the correct causal structure -> faster adaptation & better transfer
Hindrances are not problems, they are features.
Meta-optimizer: online learning errors drive changes in the structural parameters (i.e. the network architecture); see the objective sketched below
Complex models and small data can generalize well under the right causal structure!
The consciousness prior
The future: the brain uses different learning rates for different parts => fast/slow weights, long-/short-term parameters, causal learning from passive observation rather than direct intervention (like a child learning)
Paper:
https://arxiv.org/pdf/1901.10912.pdf
https://slideslive.com/38915855
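As a rough written-out version of the objective referenced in the notes above (a condensed restatement of arXiv:1901.10912, with notation of my own): a structural logit gamma mixes two candidate causal directions, and the meta-objective is the negative log of the mixture of their online likelihoods on post-intervention (transfer) data.
```latex
% Condensed restatement of the meta-transfer objective (notation mine).
% \sigma(\gamma) weights hypothesis A -> B; P(.) are the online likelihoods
% accumulated while each model adapts to the transfer data D_transfer.
R(\gamma) = -\log\Big[\, \sigma(\gamma)\, P_{A \to B}(D_{\mathrm{transfer}})
            + \big(1 - \sigma(\gamma)\big)\, P_{B \to A}(D_{\mathrm{transfer}}) \,\Big]
```
Gradient descent on R then increases sigma(gamma) when the A -> B model adapts faster (assigns higher online likelihood), which is the sense in which online learning errors drive the structural parameters.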
Best Releases and Papers from OpenAI in 2019 So Far
https://opendatascience.com/best-releases-and-papers-from-openai-in-2019-so-far/
OpenAI is one of the leaders in research on artificial general intelligence, here’s our picks of the 9 best releases and papers from OpenAI in 2019 so far.
Top 100 Neuroscience Blogs And Websites For Neuroscientists in 2019
https://blog.feedspot.com/neuroscience_blogs/
@ArtificialIntelligenceArticles
The first video GAN with sparse input, recently released by Facebook
Paper: https://research.fb.com/publications/deepfovea-neural-reconstruction-for-foveated-rendering-and-video-compression-using-learned-statistics-of-natural-videos/
Github: https://github.com/facebookresearch/DeepFovea
DeepFovea can decrease the number of computing resources needed for rendering by as much as 10-14x while any image differences remain imperceptible to the human eye.
An Epidemic of AI Misinformation
Gary Marcus : https://thegradient.pub/an-epidemic-of-ai-misinformation/
#ArtificialIntelligence #DeepLearning #MachineLearning
[ Deep Learning ]
[https://web.stanford.edu/class/cs230/](https://web.stanford.edu/class/cs230/)
[ Natural Language Processing ]
CS 124: From Languages to Information (LINGUIST 180, LINGUIST 280)
[https://web.stanford.edu/class/cs124/](https://web.stanford.edu/class/cs124/)
CS 224N: Natural Language Processing with Deep Learning (LINGUIST 284)
[https://web.stanford.edu/class/cs224n/](https://web.stanford.edu/class/cs224n/)
CS 224U: Natural Language Understanding (LINGUIST 188, LINGUIST 288)
[https://web.stanford.edu/class/cs224u/](https://web.stanford.edu/class/cs224u/)
CS 276: Information Retrieval and Web Search (LINGUIST 286)
[https://web.stanford.edu/class/cs276/](https://web.stanford.edu/class/cs276/)
[ Computer Vision ]
CS 131: Computer Vision: Foundations and Applications
[https://cs131.stanford.edu/](https://cs131.stanford.edu/)
CS 205L: Continuous Mathematical Methods with an Emphasis on Machine Learning
[https://web.stanford.edu/class/cs205l/](https://web.stanford.edu/class/cs205l/)
CS 231N: Convolutional Neural Networks for Visual Recognition
[https://cs231n.stanford.edu/](https://cs231n.stanford.edu/)
CS 348K: Visual Computing Systems
[https://graphics.stanford.edu/courses/cs348v-18-winter/](https://graphics.stanford.edu/courses/cs348v-18-winter/)
[ Others ]
CS 224W: Machine Learning with Graphs
[https://web.stanford.edu/class/cs224w/](https://web.stanford.edu/class/cs224w/)
CS 273B: Deep Learning in Genomics and Biomedicine (BIODS 237, BIOMEDIN 273B, GENE 236)
[https://canvas.stanford.edu/courses/51037](https://canvas.stanford.edu/courses/51037)
CS 236: Deep Generative Models
[https://deepgenerativemodels.github.io/](https://deepgenerativemodels.github.io/)
CS 228: Probabilistic Graphical Models: Principles and Techniques
[https://cs228.stanford.edu/](https://cs228.stanford.edu/)
CS 337: AI-Assisted Care (MED 277)
[https://cs337.stanford.edu/](https://cs337.stanford.edu/)
CS 229: Machine Learning (STATS 229)
[https://cs229.stanford.edu/](https://cs229.stanford.edu/)
CS 229A: Applied Machine Learning
[https://cs229a.stanford.edu](https://cs229a.stanford.edu/)
CS 234: Reinforcement Learning
[https://cs234.stanford.edu/](https://cs234.stanford.edu/)
CS 221: Artificial Intelligence: Principles and Techniques
[https://stanford-cs221.github.io/autumn2019/](https://stanford-cs221.github.io/autumn2019/)