Ever wondered how we translate questions and commands into programs a machine can run? Jonathan Berant gives us an overview of (executable) semantic parsing.
#NLP
https://t.co/Mzvks7f9GR
❇️ @AI_Python_EN
Here is a great explanation of how to combine Transformers and fastai to get great results from your NLP models
https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2
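For reference, a minimal sketch of the transformers side of this recipe (the article wires such a model into a fastai Learner; the model name, label count, and example text below are assumptions, and a recent version of the transformers library is assumed):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Any of the BERT, RoBERTa, XLNet, XLM or DistilBERT checkpoints can be dropped in here.
model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a sentence and get raw class scores (shape: 1 x 2).
inputs = tokenizer("fastai + transformers is a nice combination", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))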
Free 81-page guide on learning #ComputerVision, #DeepLearning, and #OpenCV!
Includes step-by-step instructions on:
- Getting Started
- Face Applications
- Object Detection
- OCR
- Embedded/IoT
- ...and more
https://www.pyimagesearch.com/start-here
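As a taste of the face-applications material, here is a minimal face-detection sketch using OpenCV's bundled Haar cascade (illustrative only, not taken from the guide; the image filename is a placeholder):

import cv2

# Load the frontal-face Haar cascade that ships with opencv-python.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("people.jpg")  # placeholder path; use your own image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a green box around each detected face and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("people_faces.jpg", image)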
It should be really useful: according to this paper
https://arxiv.org/abs/1905.05583, unsupervised fine-tuning, layer-wise learning rates, and one-cycle scheduling are crucial for BERT performance. They manage to beat ULMFiT on IMDB with BERT-Base.
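A rough sketch of what layer-wise learning rates plus a one-cycle schedule can look like for BERT in plain PyTorch (not the paper's code; the base LR, decay factor, and step count are assumed values, and embeddings/pooler are omitted for brevity):

import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

base_lr, decay = 2e-5, 0.95  # assumed values; tune per task
layers = model.bert.encoder.layer
param_groups = []
# Lower transformer layers get smaller learning rates than layers near the classifier.
for i, layer in enumerate(layers):
    lr = base_lr * (decay ** (len(layers) - 1 - i))
    param_groups.append({"params": layer.parameters(), "lr": lr})
param_groups.append({"params": model.classifier.parameters(), "lr": base_lr})

optimizer = torch.optim.AdamW(param_groups, lr=base_lr)
# One-cycle schedule over the whole fine-tuning run (total_steps depends on your data).
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=[g["lr"] for g in param_groups], total_steps=1000
)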
A good introduction to #MachineLearning and its 4 approaches:
https://towardsdatascience.com/machine-learning-an-introduction-23b84d51e6d0?gi=10a5fcd4decd
#BigData #DataScience #AI #Algorithms #ReinforcementLearning
❇️ @AI_Python_EN
Want to see how downstream results are affected by LSTM LM training configurations?
Save time/compute: use 125 pretrained LSTM LMs.
https://zenodo.org/record/3556943
❇️ @AI_Python_EN
Yoshua explains how #DeepLearning has developed in 2019 
https://www.youtube.com/watch?v=eKMA1Tscdag
❇️ @AI_Python_EN
DEBATE: Yoshua Bengio | Gary Marcus. Pre-readings recommended to the audience before the debate:
Yoshua Bengio | Gary Marcus
 
This Is The Debate The #AI World Has Been Waiting For
❇️ @AI_Python_EN
Machine Learning Models
☞ https://morioh.com/p/1dc7518426c2
#TensorFlow #machinelearning
❇️ @AI_Python_EN
nbdev: use Jupyter Notebooks for everything
https://www.fast.ai//2019/12/02/nbdev/
https://github.com/fastai/nbdev/
❇️ @AI_Python_EN
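To give a flavour of the workflow, here is a sketch of an nbdev (v1-era) notebook flattened into one listing; in a real notebook each block below is its own cell, and the special comments are nbdev export directives:

#default_exp core

#export
def say_hello(name):
    "Return a greeting; this cell gets written into the generated `core` module."
    return f"Hello, {name}!"

# Cells without #export stay in the notebook and double as tests.
assert say_hello("Jeremy") == "Hello, Jeremy!"

# Build the library from the notebooks (or run `nbdev_build_lib` from the shell):
# from nbdev.export import notebook2script; notebook2script()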
Jupyter on Steroids: Create Packages, Tests, and Rich Documents https://t.co/w3K6D0Cgp6
"I really do think [nbdev] is a huge step forward for programming environments": Chris Lattner, inventor of Swift, LLVM, and Swift Playgrounds.
Identifying Hate Speech with BERT and CNN
https://link.medium.com/7FaReCD781
A tool that can help us to recognize online abuse and harassment by analyzing text.
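A minimal sketch of the general BERT-plus-CNN idea (an assumed architecture, not the article's exact code): convolve over BERT's token embeddings, max-pool, and classify.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNNClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size               # 768 for bert-base
        self.conv = nn.Conv1d(hidden, 128, kernel_size=3, padding=1)
        self.classifier = nn.Linear(128, num_classes)        # e.g. hate / not hate

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) contextual token embeddings from BERT
        states = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        x = torch.relu(self.conv(states.transpose(1, 2)))    # convolve over the sequence
        x = x.max(dim=-1).values                              # global max pooling
        return self.classifier(x)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["you are wonderful", "some abusive message"],
                  padding=True, return_tensors="pt")
logits = BertCNNClassifier()(batch["input_ids"], batch["attention_mask"])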
  💡 What's the difference between bagging and boosting? 
Bagging and boosting are both ensemble methods, meaning they combine many weak predictors to create a strong predictor.
One key difference is that bagging builds independent models in parallel and "averages" their results at the end, whereas boosting builds models sequentially: each step focuses on reducing the remaining error by fitting more closely to the observations the previous models missed.
❇️ @AI_Python_EN
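A quick illustrative comparison with scikit-learn (the synthetic data and hyperparameters are assumptions, just to show the two styles side by side):

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Bagging: 100 trees trained independently on bootstrap samples; predictions are averaged.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Boosting: 100 shallow trees trained one after another, each correcting its predecessors.
boosting = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())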
Pre-Debate Material
“Yoshua Bengio, Revered Architect of AI, Has Some Ideas About What to Build Next”
The Turing Award winner wants AI systems that can reason, plan, and imagine
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/yoshua-bengio-revered-architect-of-ai-has-some-ideas-about-what-to-build-next
❇️ @AI_Python_EN
Machine Learning in a company is 10% Data Science & 90% other challenges. It's VERY hard. Everything in this guide is ON POINT, and it's stuff you won't learn in an ML book: "Best Practices of ML Engineering". This is a lifesaver.
project:
https://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf
Very interesting use of #AI to tackle bias in written text by automatically substituting words with more neutral wording. However, one must also consider the challenges and ramifications such technology could have for written language: it can not only accidentally change the meaning of what was written, but also alter the tone and expression of the author, neutralizing the point of view and stripping emotion from the language.
#NLP
https://arxiv.org/pdf/1911.09709.pdf
❇️ @AI_Python_EN
Named Entity Recognition Benchmark: spaCy, Flair, m-BERT and camemBERT on anonymizing French commercial legal cases
https://bit.ly/2rq1I5H
#DataScience #MachineLearning #ArtificialIntelligence #NLP
❇️ @AI_Python_EN
Does (model) size matter?
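For orientation, the spaCy side of such a benchmark boils down to something like this (a minimal sketch with a stock French pipeline, not the fine-tuned legal-domain models from the article; the sentence is invented):

# Requires: python -m spacy download fr_core_news_sm
import spacy

nlp = spacy.load("fr_core_news_sm")   # small pretrained French pipeline
doc = nlp("La société Dupont SARL a assigné M. Martin devant le tribunal de commerce de Paris.")

# Entities one would anonymize in a legal case: persons, organizations, locations, ...
for ent in doc.ents:
    print(ent.text, ent.label_)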
  "If the future can be different from the past and you don't have deep understanding, you should not rely on AI." - a rule from Ray Dalio for when to leverage machine learning for decision-making. 
Full conversation: Ray Dalio: Principles, the Economic Machine, AI & the Arc of Life | Lex Fridman Podcast #54 (YouTube)
❇️ @AI_Python_EN