Deep learning from the topological, metric, information, causal, physics, computational, and neuroscience perspectives. A nice essay by Raul Vicente, "The many faces of deep learning": https://arxiv.org/abs/1908.10206
Contrastive Learning of Structured World Models
Kipf et al.: https://arxiv.org/abs/1911.12247
Code: https://github.com/tkipf/c-swm
#MachineLearning #DeepLearning #ArtificialIntelligence
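For intuition, here is a minimal PyTorch sketch of the paper's contrastive hinge loss, assuming an object encoder and a transition model have already produced the latents and the predicted latent change; the function name, variable names, and margin value are illustrative, not the repository's code:

```python
import torch
import torch.nn.functional as F

def cswm_contrastive_loss(z, z_next, z_neg, delta, margin=1.0):
    # z, z_next: latents of consecutive observations; delta: the transition
    # model's predicted latent change for the taken action; z_neg: latents
    # of randomly sampled (corrupted) states used as negatives.
    pos = ((z + delta - z_next) ** 2).sum(dim=1)   # pull predicted next state close
    neg = ((z_neg - z_next) ** 2).sum(dim=1)       # push negatives beyond the margin
    return (pos + F.relu(margin - neg)).mean()
```

The negatives are latents of states sampled from other transitions, which prevents the encoder from collapsing everything to a single point.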
Graph Nets library
Graph Nets is DeepMind's library for building graph networks in Tensorflow and Sonnet: https://github.com/deepmind/graph_nets
#ArtificialIntelligence #GraphNetworks #Graphs #DeepLearning #NeuralNetworks #TensorFlow
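The basic usage pattern, adapted from the repository's README (the MLP sizes here are arbitrary): a GraphNetwork is assembled from edge, node, and global update functions and maps graphs to graphs.

```python
import graph_nets as gn
import sonnet as snt

# A GraphNetwork is assembled from three update functions, applied to
# edges, nodes, and the global attribute respectively.
graph_net_module = gn.modules.GraphNetwork(
    edge_model_fn=lambda: snt.nets.MLP([32, 32]),
    node_model_fn=lambda: snt.nets.MLP([32, 32]),
    global_model_fn=lambda: snt.nets.MLP([32, 32]))

# Inputs are GraphsTuple objects, e.g. built from dicts of node, edge,
# and global features, and the module maps graphs to graphs:
# input_graphs = gn.utils_tf.data_dicts_to_graphs_tuple(data_dicts)
# output_graphs = graph_net_module(input_graphs)
```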
Complex-YOLO: Real-time 3D Object Detection on Point Clouds
Simon et al.: https://arxiv.org/abs/1803.06199
#ArtificialIntelligence #ComputerVision #DeepLearning #MachineLearning #PatternRecognition
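One detail worth sketching: the Euler-Region-Proposal regresses each box's yaw as a complex number (an imaginary and a real part) rather than as a raw angle, which sidesteps the wrap-around singularity at +/- pi. A toy illustration of that encoding (not the paper's code):

```python
import numpy as np

# Regress yaw as (im, re) = (sin(theta), cos(theta)) and recover the
# angle with atan2, avoiding the discontinuity at +/- pi that plagues
# direct angle regression.
def encode_yaw(theta):
    return np.sin(theta), np.cos(theta)

def decode_yaw(im, re):
    return np.arctan2(im, re)

theta = -3.0
im, re = encode_yaw(theta)
assert np.isclose(decode_yaw(im, re), theta)
```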
Variational Autoencoders and Nonlinear ICA: A Unifying Framework
Khemakhem et al.: https://arxiv.org/abs/1907.04809
#MachineLearning #GenerativeModels #VariationalAutoencoders
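The key construction is a conditionally factorial prior p(z|u) whose parameters are produced from an observed auxiliary variable u (a label, a time index, ...), which is what buys identifiability. A minimal sketch of that prior and the corresponding KL term in the ELBO; module names and sizes are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

class ConditionalPrior(nn.Module):
    """Conditionally factorial Gaussian prior p(z|u); the auxiliary
    variable u parameterizes the prior instead of a fixed N(0, I)."""
    def __init__(self, u_dim=10, z_dim=8):
        super().__init__()
        self.net = nn.Linear(u_dim, 2 * z_dim)  # outputs mean and log-variance

    def forward(self, u):
        mu, logvar = self.net(u).chunk(2, dim=-1)
        return mu, logvar

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL( q(z|x,u) || p(z|u) ) between diagonal Gaussians; this replaces
    # the standard-normal KL term of a vanilla VAE in the ELBO.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum(dim=-1)
```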
Geoffrey Hinton: "Does the Brain do Inverse Graphics?"
https://www.youtube.com/watch?v=TFIMqt0yT2I
Graduate Summer School 2012: Deep Learning, Feature Learning
"Does the Brain do Inverse Graphics?"
Geoffrey Hinton, University of Toronto
Institute for Pure and Applied Mathematics, UCLA
July 12, 2012
For more information: https://www.ipam.ucla.edu/programs/summer…
"Does the Brain do Inverse Graphics?"
Geoffrey Hinton, University of Toronto
Institute for Pure and Applied Mathematics, UCLA
July 12, 2012
For more information: https://www.ipam.ucla.edu/programs/summer…
A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
Bengio et al.: https://arxiv.org/abs/1901.10912
#MetaTransfer #CausalMechanisms #ArtificialIntelligence
Learning Neural Causal Models from Unknown Interventions
Nan Rosemary Ke, Olexa Bilaniuk, Anirudh Goyal, Stefan Bauer, Hugo Larochelle, Chris Pal, Yoshua Bengio : https://arxiv.org/abs/1910.01075
#CausalModels #MachineLearning #ArtificialIntelligence
Handbook of Graphical Models
Marloes Maathuis, Mathias Drton, Steen Lauritzen and Martin Wainwright : https://stat.ethz.ch/~maathuis/papers/Handbook.pdf
#Handbook #GraphicalModels
Causal Inference: What If
Miguel A. Hernán, James M. Robins : https://cdn1.sph.harvard.edu/wp-content/uploads/sites/1268/2019/11/ci_hernanrobins_10nov19.pdf
#CausalInference
Breast Histopathology Images Dataset
Download: https://www.kaggle.com/paultimothymooney/breast-histopathology-images
198,738 IDC(-) image patches; 78,786 IDC(+) image patches
A tutorial to implement state-of-the-art NLP models with Fastai for Sentiment Analysis
Maximilien Roberti : https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
#FastAI #NLP #Transformers
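The crux of the integration is a small adapter: fastai's training loop expects a model whose forward pass returns raw logits, while Hugging Face models return a tuple or output object. A minimal sketch of that idea, with the checkpoint name chosen arbitrarily (the tutorial itself wires this into a fastai Learner and DataBunch):

```python
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

class TransformerWrapper(nn.Module):
    """Adapter so fastai's training loop sees plain logits."""
    def __init__(self, name='distilbert-base-uncased', num_labels=2):
        super().__init__()
        self.model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=num_labels)

    def forward(self, input_ids):
        # Hugging Face models return (logits, ...) or an output object;
        # index 0 is the logits in either case.
        return self.model(input_ids)[0]

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
```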
Pre-Debate Material :
Recurrent Independent Mechanisms
Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, Bernhard Schölkopf : https://arxiv.org/abs/1909.10893
#MachineLearning #Generalization #ArtificialIntelligence
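As a rough picture, here is a heavily simplified sketch of one RIM step: independent recurrent cells compete for the input via attention, and only the top-k winners update their state while the rest keep theirs. This omits the paper's key-value input attention and inter-mechanism communication; all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class RIMStep(nn.Module):
    """Toy sketch: independent recurrent cells compete for the input;
    only the top-k most attentive mechanisms update their state."""
    def __init__(self, n_mech=6, k=3, in_dim=32, hid=64):
        super().__init__()
        self.k = k
        self.cells = nn.ModuleList([nn.GRUCell(in_dim, hid) for _ in range(n_mech)])
        self.queries = nn.Parameter(torch.randn(n_mech, in_dim))  # per-mechanism query

    def forward(self, x, h):
        # x: (batch, in_dim); h: (batch, n_mech, hid)
        scores = torch.einsum('bd,md->bm', x, self.queries)  # input attention scores
        winners = scores.topk(self.k, dim=1).indices         # active mechanisms
        new_h = h.clone()
        for m, cell in enumerate(self.cells):
            update = cell(x, h[:, m])                        # candidate next state
            active = (winners == m).any(dim=1, keepdim=True).float()
            new_h[:, m] = active * update + (1 - active) * h[:, m]  # inactive: keep
        return new_h

step = RIMStep()
h = torch.zeros(4, 6, 64)
h = step(torch.randn(4, 32), h)
```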
Pre-Debate Material :
Meta transfer learning for factorizing representations and knowledge for AI - Yoshua Bengio : https://youtu.be/CHnJYBpMjNY
#AIDebate #MontrealAI
Speaker: Yoshua Bengio
Title: Meta transfer learning for factorizing representations and knowledge for AI
Abstract:
Whereas machine learning theory has focused on generalization to examples from the same distribution as the training data, better understanding…
A decade ago we weren’t sure neural nets could ever deal with language.
Now the latest AI models crush language benchmarks faster than we can come up with language benchmarks.
Far from having an AI winter, we are having a second AI spring.
The first AI spring, of course, began with ImageNet. Created by Dr. Fei-Fei Li and team in 2009, it was the first large-scale image classification problem built from photos instead of handwriting and thumbnails.
In 2012, AlexNet using GPUs took 1st place. In 2015, ResNet reached human performance.
In the years that followed, neural nets made strong progress on speech recognition and machine translation.
Baidu’s Deep Speech 2 recognized spoken Chinese on par with humans. Google’s neural machine translation system cut errors by 60% relative to the existing phrase-based system.
In language understanding, neural nets did well on single tasks such as WikiQA, TREC, and SQuAD, but it wasn’t clear they could master a range of tasks the way humans do.
Thus GLUE was created: a set of 9 diverse language tasks that would, it was hoped, keep researchers busy for years.
It took six years for neural nets to catch up to human performance on ImageNet.
Transformer-based neural nets (BERT, GPT) beat human performance on GLUE in less than one year.
Progress in language understanding was so rapid that the authors of GLUE were forced to create a new version of the benchmark, SuperGLUE, in 2019.
SuperGLUE is HARD, far harder than a naive Turing test. Just look at these prompts.
Will SuperGLUE stand the test of time? It appears not. Six months in, and the Google T5 model is within 1% of human performance.
Neural nets are beating language benchmarks faster than benchmarks can be created.
Yet firsthand experience contradicts this progress: Alexa, Siri, and Google Assistant still lack basic common sense.
Why? Is it just a matter of time until deployment catches up, or are diverse human questions much harder?
See:
Google's T5 (Text-To-Text Transfer Transformer) language model set a new record and comes very close to human performance on the SuperGLUE benchmark.
https://bit.ly/2XQkKxO
Paper: https://arxiv.org/abs/1910.10683
Code: https://github.com/google…/text-to-text-transfer-transformer
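T5's central move is to cast every task, the SuperGLUE suite included, as text-to-text: a task prefix plus the input maps to a target string. A minimal sketch using the Hugging Face port of the released checkpoints (the "cola sentence:" prefix follows the paper's task formatting; treat the details as assumptions):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

# Every task becomes "prefix + input text" -> "target text".
prompt = "cola sentence: The books was on the table."  # grammaticality task
ids = tok(prompt, return_tensors='pt').input_ids
out = model.generate(ids, max_length=5)
print(tok.decode(out[0], skip_special_tokens=True))  # e.g. "unacceptable"
```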
SuperGLUE Benchmark
SuperGLUE is a new benchmark styled after the original GLUE benchmark, with a set of more difficult language-understanding tasks, improved resources, and a new public leaderboard.
Pre-Debate Material :
WSAI Americas 2019 - Yoshua Bengio - Moving beyond supervised deep learning : https://youtu.be/0GsZ_LN9B24
#AIDebate #MontrealAI
Watch Yoshua Bengio, Professor of Computer Science and Operations Research at Université de Montréal on stage at World Summit AI Americas 2019. americas.worldsummit.ai
Meta-transfer learning for factorizing representations, causal graphs and knowledge for AI
Discover causal representations
Beyond i.i.d.: independent mechanisms and single-variable interventions
Causal structure and knowledge factorization; the correct causal model -> faster adaptation & better transfer
Hindrances are not problems, they are features.
Meta-optimizer: online learning errors drive changes in the structural parameters (i.e. the network architecture); see the sketch below
Complex models and small data can generalize well under the right causal structure!
The consciousness prior
The future: the brain has different learning rates for different regions => fast/slow weights, long-/short-term parameters; causal learning without direct intervention, from passive observation (like a child learning)
Paper:
https://arxiv.org/pdf/1901.10912.pdf
https://slideslive.com/38915855
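The meta-optimizer note above can be made concrete. In the paper, a structural parameter (a logit over competing causal graphs) is trained by gradient descent on a regret: the negative log of the mixture of the hypotheses' online adaptation likelihoods after an intervention, so the graph that adapts faster wins. A toy sketch with two hypotheses and made-up adaptation log-likelihoods:

```python
import torch

# Made-up online log-likelihoods accumulated while adapting each candidate
# causal model (A -> B vs. B -> A) to post-intervention data; the correct
# hypothesis adapts faster, hence the higher value.
logL_AtoB = torch.tensor(-3.2)
logL_BtoA = torch.tensor(-7.9)

gamma = torch.tensor(0.0, requires_grad=True)  # structural parameter (logit)
opt = torch.optim.SGD([gamma], lr=0.5)

for _ in range(100):
    p = torch.sigmoid(gamma)  # current belief in A -> B
    # Regret: negative log of the mixture of the hypotheses' likelihoods.
    regret = -torch.logsumexp(
        torch.stack([torch.log(p) + logL_AtoB,
                     torch.log(1 - p) + logL_BtoA]), dim=0)
    opt.zero_grad(); regret.backward(); opt.step()

print(torch.sigmoid(gamma).item())  # -> near 1: the A -> B graph is preferred
```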
Best Releases and Papers from OpenAI in 2019 So Far
https://opendatascience.com/best-releases-and-papers-from-openai-in-2019-so-far/
OpenAI is one of the leaders in research on artificial general intelligence, here’s our picks of the 9 best releases and papers from OpenAI in 2019 so far.
Top 100 Neuroscience Blogs And Websites For Neuroscientists in 2019
https://blog.feedspot.com/neuroscience_blogs/
@ArtificialIntelligenceArticles