This is a curated collection of free resources where you can learn about deep learning. If you've always wanted to learn deep learning but didn't know where to start, you might have stumbled upon the right place!
https://mithi.github.io/deep-blueberry/ch0-introduction.html
Deep Learning for Sentiment Analysis: A Survey
https://arxiv.org/ftp/arxiv/papers/1801/1801.07883.pdf
Sentiment analysis benchmark datasets & state-of-the-art papers:
https://github.com/sebastianruder/NLP-progress/blob/master/english/sentiment_analysis.md
  
  Forwarded from School of AI
  
zisserman-self-supervised.pdf (9.1 MB)
  A tutorial on self-supervised learning by Andrew Zisserman from Google DeepMind.
Talks at ICML 2019:
https://www.facebook.com/icml.imls/videos/2030095370631729/
An overview of old and new pretrained NLP models and their use cases (excluding XLNet):
https://www.youtube.com/watch?v=0EtD5ybnh_s
#NLP
  
  Language Learning with BERT - TensorFlow and Deep Learning Singapore
  Speaker: Martin Andrews 
 
Event Page: https://www.meetup.com/TensorFlow-and-Deep-Learning-Singapore/events/256431012/
 
Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
https://arxiv.org/abs/1906.08237#
#NLP
  
  XLNet: Generalized Autoregressive Pretraining for Language Understanding
  A great introduction to Multivariate Gaussian Distributions:
https://www.youtube.com/watch?v=eho8xH3E6mE
#statistics
  
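As a quick companion to the video, here is a minimal sketch of evaluating and sampling a multivariate Gaussian with numpy and scipy. The mean and covariance values are illustrative assumptions, not taken from the video.

```python
import numpy as np
from scipy.stats import multivariate_normal

# A 2-D Gaussian N(mu, Sigma); mu and Sigma below are made-up example values.
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])               # must be symmetric positive definite

dist = multivariate_normal(mean=mu, cov=Sigma)

x = np.array([0.5, 0.5])
print("density at x:", dist.pdf(x))           # exp(-0.5 (x-mu)^T Sigma^-1 (x-mu)) / sqrt((2*pi)^d |Sigma|)

samples = dist.rvs(size=1000, random_state=0)
print("sample mean ~", samples.mean(axis=0))  # should be close to mu
print("sample cov  ~", np.cov(samples.T))     # should be close to Sigma
```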
  A great introduction to Gaussian Mixture Models:
https://www.youtube.com/watch?v=JNlEIEwe-Cg
#statistics
  
  Gaussian Mixture Models - The Math of Intelligence (Week 7)
  We're going to predict customer churn using a clustering technique called the Gaussian Mixture Model: a probability distribution that consists of multiple Gaussian distributions.
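Here is a minimal scikit-learn sketch of the same idea on synthetic 2-D data; the toy data and component count are illustrative assumptions standing in for the churn features used in the video.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data drawn from two Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),
               rng.normal(loc=[3.0, 3.0], scale=0.8, size=(200, 2))])

# Fit a 2-component mixture with EM: alternate soft assignments and parameter updates.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(X)

print(gmm.means_)                  # learned component means, near [0, 0] and [3, 3]
labels = gmm.predict(X)            # hard cluster assignments
probs = gmm.predict_proba(X[:5])   # soft responsibilities for the first few points
```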
  Forwarded from Machine learning books and papers (Ramin Mousa)
Adapters: A Compact and Extensible Transfer Learning Method for NLP
https://medium.com/dair-ai/adapters-a-compact-and-extensible-transfer-learning-method-for-nlp-6d18c2399f62
  
  Adapters obtain comparable results to BERT on several NLP tasks while achieving parameter efficiency.
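For intuition, a minimal PyTorch sketch of the bottleneck-adapter idea described in the article: a small down-project / nonlinearity / up-project block with a residual connection, inserted into a frozen pretrained model so that only the adapter weights (and the task head) are trained. The layer sizes here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection means a freshly initialized adapter is close to identity.
        return x + self.up(self.act(self.down(x)))

# Toy usage: in practice an adapter like this would be dropped in after each
# transformer sublayer, with the pretrained weights kept frozen.
adapter = Adapter()
hidden_states = torch.randn(2, 16, 768)   # (batch, sequence, hidden)
print(adapter(hidden_states).shape)       # torch.Size([2, 16, 768])
```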
  Forwarded from School of AI
Neural Code Search: ML-based code search using natural language queries:
https://ai.facebook.com/blog/neural-code-search-ml-based-code-search-using-natural-language-queries/
  
  We’ve developed an internal tool that applies natural language processing and information retrieval techniques directly to source code text, in order to produce a machine learning-based code search system.
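The core retrieval idea can be illustrated very roughly (this is not Facebook's actual implementation) by treating each code snippet as a document, embedding it with TF-IDF, and ranking by cosine similarity to a natural-language query:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny made-up corpus of code snippets to search over.
snippets = [
    "def read_json(path): return json.load(open(path))",
    "def download_file(url, dest): urllib.request.urlretrieve(url, dest)",
    "def resize_image(img, size): return img.resize(size)",
]
vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]+")
code_vecs = vectorizer.fit_transform(snippets)

query = "how to fetch a file from a url"
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, code_vecs)[0]
print(snippets[scores.argmax()])   # expected: the download_file snippet
```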
  A great tutorial that shows how to implement evolutionary algorithms in Python:
https://github.com/MorvanZhou/Evolutionary-Algorithm
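For a flavour of what the repo covers, here is a minimal, self-contained genetic algorithm in plain numpy; the fitness function and hyperparameters are illustrative assumptions, not taken from the repo.

```python
import numpy as np

# Minimal genetic algorithm: maximize f(x) = sin(10x)*x + cos(2x)*x on [0, 5].
# Individuals are plain floats; selection is fitness-proportional, variation is
# blend crossover plus Gaussian mutation.
rng = np.random.default_rng(0)

def fitness(x):
    return np.sin(10 * x) * x + np.cos(2 * x) * x

pop = rng.uniform(0, 5, size=100)                 # initial population
for generation in range(200):
    fit = fitness(pop)
    probs = fit - fit.min() + 1e-9                # shift so probabilities are positive
    probs /= probs.sum()
    parents = rng.choice(pop, size=100, p=probs)  # fitness-proportional selection
    partners = rng.permutation(parents)
    children = 0.5 * (parents + partners)         # blend crossover
    children += rng.normal(0, 0.1, size=100)      # Gaussian mutation
    pop = np.clip(children, 0, 5)

best = pop[np.argmax(fitness(pop))]
print(f"best x ~ {best:.3f}, fitness ~ {fitness(best):.3f}")
```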
Deep Unsupervised Learning course from UC Berkeley (CS294-158):
https://sites.google.com/view/berkeley-cs294-158-sp19/home
  
  CS294-158-SP19 Deep Unsupervised Learning Spring 2019
  About: This course covers two areas of deep learning in which labeled data is not required: deep generative models and self-supervised learning. Recent advances in generative models have made it possible to realistically model high-dimensional raw data.
  Forwarded from Tensorflow(@CVision) (Vahid Reza Khazaie)
New Google Brain Optimizer Reduces BERT Pre-Training Time From Days to Minutes
Cutting the pre-training time of the BERT language model from three days to 76 minutes with a new optimizer!
Google Brain researchers have proposed LAMB (Layer-wise Adaptive Moments optimizer for Batch training), a new optimizer which reduces training time for its NLP training model BERT (Bidirectional Encoder Representations from Transformers) from three days to just 76 minutes.
Paper: https://arxiv.org/abs/1904.00962
Blog post: https://medium.com/syncedreview/new-google-brain-optimizer-reduces-bert-pre-training-time-from-days-to-minutes-b454e54eda1d
#BERT #language_model #optimizer
  
  Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
  Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large-batch stochastic optimization methods to tackle this.
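For reference, a compact numpy sketch of the LAMB update rule for a single weight tensor, following the description in the paper; the hyperparameter values are illustrative defaults, not the exact BERT pre-training settings.

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One LAMB update for a single weight tensor (sketch).

    LAMB combines Adam-style moment estimates with a layer-wise trust ratio
    ||w|| / ||update||, which rescales the step per layer and is what keeps
    training stable at very large batch sizes.
    """
    m = beta1 * m + (1 - beta1) * g            # first moment
    v = beta2 * v + (1 - beta2) * g * g        # second moment
    m_hat = m / (1 - beta1 ** t)               # bias correction
    v_hat = v / (1 - beta2 ** t)
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    w_norm, u_norm = np.linalg.norm(w), np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = w - lr * trust_ratio * update          # layer-wise scaled step
    return w, m, v
```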