ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
Deep learning from the topological, metric, information-theoretic, causal, physics, computational, and neuroscience perspectives. A nice essay by Raul Vicente, "The many faces of deep learning": https://arxiv.org/abs/1908.10206
A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms
Bengio et al.: https://arxiv.org/abs/1901.10912
#MetaTransfer #CausalMechanisms #ArtificialIntelligence
Handbook of Graphical Models
Marloes Maathuis, Mathias Drton, Steffen Lauritzen, and Martin Wainwright: https://stat.ethz.ch/~maathuis/papers/Handbook.pdf
#Handbook #GraphicalModels
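For readers new to the area, the core object the handbook studies is a factorization of a joint distribution over a graph. A minimal illustration (plain NumPy, with made-up probability tables not taken from the handbook) builds the chain X -> Y -> Z and marginalizes it:

```python
import numpy as np

# Toy chain X -> Y -> Z over binary variables; the tables below are
# invented for illustration only.
p_x = np.array([0.6, 0.4])                   # P(X)
p_y_given_x = np.array([[0.7, 0.3],          # P(Y | X=0)
                        [0.2, 0.8]])         # P(Y | X=1)
p_z_given_y = np.array([[0.9, 0.1],          # P(Z | Y=0)
                        [0.4, 0.6]])         # P(Z | Y=1)

# Joint distribution from the factorization P(x, y, z) = P(x) P(y|x) P(z|y)
joint = p_x[:, None, None] * p_y_given_x[:, :, None] * p_z_given_y[None, :, :]
assert np.isclose(joint.sum(), 1.0)

# Marginal P(Z), obtained by summing out X and Y
p_z = joint.sum(axis=(0, 1))
print("P(Z):", p_z)
```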
A tutorial to implement state-of-the-art NLP models with Fastai for Sentiment Analysis
Maximilien Roberti: https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
#FastAI #NLP #Transformers
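The tutorial wires Hugging Face transformer models into fastai's training loop; as a rough sketch of the underlying building block (not the tutorial's code), the transformers pipeline API alone already gives a pretrained sentiment classifier:

```python
from transformers import pipeline

# Downloads a default pretrained sentiment model on first use;
# the exact checkpoint is chosen by the library, not by the tutorial.
classifier = pipeline("sentiment-analysis")

print(classifier("The movie was a pleasant surprise."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```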
A decade ago we weren’t sure neural nets could ever deal with language.

Now the latest AI models crush language benchmarks faster than we can come up with new ones.

Far from having an AI winter, we are having a second AI spring.

The first AI spring was of course sparked by ImageNet. Created by Dr. Fei-Fei Li and her team in 2009, it was the first large-scale image-classification benchmark built from photos rather than handwriting or thumbnails.

In 2012, AlexNet, trained on GPUs, took first place. In 2015, ResNet reached human-level performance.

In the years that followed, neural nets made strong progress on speech recognition and machine translation.

Baidu’s Deep Speech 2 recognized spoken Chinese on par with humans. Google’s neural machine translation system reduced translation errors by roughly 60% relative to the previous phrase-based system.

In language understanding, neural nets did well on individual tasks such as WikiQA, TREC, and SQuAD, but it wasn't clear they could master a broad range of tasks the way humans do.

Thus GLUE was created: a set of nine diverse language-understanding tasks that, it was hoped, would keep researchers busy for years.
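For reference, the GLUE tasks can be loaded programmatically; a small sketch using the Hugging Face datasets library (a tooling choice for illustration, not part of the benchmark's original release) pulls one of the nine tasks:

```python
from datasets import load_dataset

# MRPC (paraphrase detection) is one of the nine GLUE tasks.
mrpc = load_dataset("glue", "mrpc")
print(mrpc["train"][0])   # fields: sentence1, sentence2, label, idx
```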
It took six years for neural nets to catch up to human performance on ImageNet.

Transformer-based neural nets (BERT, GPT) beat human performance on GLUE in less than a year.

Progress in language understanding was so rapid that the authors of GLUE were forced to create a harder successor benchmark, SuperGLUE, in 2019.

SuperGLUE is HARD, far harder than a naive Turing test. Just look at its prompts.

Will SuperGLUE stand the test of time? It appears not. Six months in, and Google's T5 model is already within 1% of human performance.

Neural nets are beating language benchmarks faster than benchmarks can be created.

Yet first-hand experience contradicts this progress: Alexa, Siri, and Google Assistant still lack basic common sense.

Why? Is it just a matter of time before these models are deployed, or are open-ended human questions much harder than benchmarks?

See:

Google's T5 (Text-To-Text Transfer Transformer) language model set a new record and comes very close to human performance on the SuperGLUE benchmark.

https://bit.ly/2XQkKxO

Paper: https://arxiv.org/abs/1910.10683
Code: https://github.com/google…/text-to-text-transfer-transformer
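As a rough illustration of the text-to-text framing (a sketch using the Hugging Face port of T5 and a small checkpoint, not the original codebase or the 11B model that set the SuperGLUE record), every task is posed as mapping an input string to an output string:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# "t5-small" keeps the example lightweight; the SuperGLUE result in the
# paper comes from the much larger 11B model.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# In the text-to-text framing, every task is a string-in, string-out prompt.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```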
Meta-transfer learning for factorizing representations, causal graphs, and knowledge for AI
Discover causal representations
Beyond i.i.d. data: independent mechanisms and single-variable interventions
Causal structure and knowledge factorization: the correct causal structure -> faster adaptation & better transfer
Hindrances are not problems; they are features.
Meta-optimizer: online adaptation errors drive changes in the structural parameters (i.e. the network architecture)
Complex models and small data can generalize well under the right causal structure!
The consciousness prior
The future: the brain has different learning rates for different parts => fast/slow weights, long-/short-term parameters, causal learning from passive observation rather than direct intervention (like a child learning)
Paper: https://arxiv.org/pdf/1901.10912.pdf
Talk: https://slideslive.com/38915855
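A loose, simplified sketch of the paper's core signal (toy discrete variables, not the authors' code): fit both candidate factorizations of p(A, B) on pre-intervention data, shift the marginal of the cause A, adapt each model for a few gradient steps, and compare the online log-likelihood; the factorization aligned with the true causal direction typically adapts faster because only p(A) has to change.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N = 10  # number of categories for both A and B (toy setting)

# Ground-truth causal model A -> B, with made-up distributions.
def random_logits(*shape):
    return torch.randn(*shape) * 2

true_logits_A = random_logits(N)
true_logits_B_given_A = random_logits(N, N)

def sample(logits_A, logits_B_given_A, n):
    A = torch.distributions.Categorical(logits=logits_A).sample((n,))
    B = torch.distributions.Categorical(logits=logits_B_given_A[A]).sample()
    return A, B

class Factorization(torch.nn.Module):
    """Models p(first) * p(second | first) over two categorical variables."""
    def __init__(self):
        super().__init__()
        self.marginal = torch.nn.Parameter(torch.zeros(N))
        self.conditional = torch.nn.Parameter(torch.zeros(N, N))
    def log_prob(self, first, second):
        lp_first = F.log_softmax(self.marginal, dim=-1)[first]
        lp_second = F.log_softmax(self.conditional, dim=-1)[first, second]
        return lp_first + lp_second

model_A2B = Factorization()  # p(A) p(B|A): matches the true causal direction
model_B2A = Factorization()  # p(B) p(A|B): anti-causal factorization

def fit(model, first, second, steps=500, lr=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = -model.log_prob(first, second).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# 1) Train both factorizations on data from the original distribution.
A, B = sample(true_logits_A, true_logits_B_given_A, 20000)
fit(model_A2B, A, B)
fit(model_B2A, B, A)

# 2) Intervene on the cause: change p(A) while keeping p(B|A) fixed.
shifted_logits_A = random_logits(N)
A2, B2 = sample(shifted_logits_A, true_logits_B_given_A, 500)

# 3) Adapt each model for a few steps, tracking online log-likelihood.
def adapt(model, first, second, steps=20, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total = 0.0
    for _ in range(steps):
        ll = model.log_prob(first, second).mean()
        total += ll.item()
        (-ll).backward()
        opt.step()
        opt.zero_grad()
    return total / steps

print("causal     (A->B) adaptation log-lik:", adapt(model_A2B, A2, B2))
print("anticausal (B->A) adaptation log-lik:", adapt(model_B2A, B2, A2))
# Typically the causal factorization scores higher: only its marginal over A
# needs to change, while the anti-causal one must relearn both factors.
```

In the paper this adaptation-speed gap is turned into a gradient on a structural meta-parameter that encodes which direction is believed to be causal; the sketch above only shows the underlying signal.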