DELTA - a DEep Language Technology plAtform
DELTA is a deep learning based end-to-end natural language and speech processing platform. DELTA aims to provide easy and fast experiences for using, deploying, and developing natural language processing and speech models for both academia and industry use cases. DELTA is mainly implemented using TensorFlow and Python 3.
https://github.com/didi/delta
GitHub
GitHub - Delta-ML/delta: DELTA is a deep learning based natural language and speech processing platform.
Deblending and Classifying Astronomical Sources with Mask R-CNN Deep Learning
https://arxiv.org/abs/1908.02748
arXiv.org
Deblending and Classifying Astronomical Sources with Mask R-CNN Deep Learning
We apply a new deep learning technique to detect, classify, and deblend sources in multi-band astronomical images. We train and evaluate the performance of an artificial neural network built on...
1st edition of "Interpretable Machine Learning"
By Christoph Molnar.
Book: https://christophm.github.io/interpretable-ml-book/
#artificialintelligence #deeplearning #machinelearning
christophm.github.io
Interpretable Machine Learning
Sparse Networks from Scratch: Faster Training without Losing Performance
https://arxiv.org/abs/1907.04840
arXiv.org
Sparse Networks from Scratch: Faster Training without Losing Performance
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance...
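The core idea of dynamic sparse training is to keep a fixed sparsity budget while letting the connectivity pattern change: periodically drop the smallest-magnitude active weights and grow the same number of new connections. The paper's sparse momentum algorithm grows connections guided by momentum magnitude; the sketch below uses random regrowth instead, purely to illustrate the prune/regrow cycle, and all names are illustrative.

```python
import numpy as np

def prune_and_regrow(weights, mask, prune_frac=0.2, rng=None):
    """One prune/regrow step of dynamic sparse training (toy sketch).

    Drops the smallest-magnitude fraction of the currently active
    weights, then regrows the same number of connections at random
    (zero-initialized), so overall sparsity stays constant.
    """
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_prune = int(len(active) * prune_frac)

    # Prune: deactivate the active weights with the smallest magnitude.
    order = np.argsort(np.abs(weights[active]))
    pruned = active[order[:n_prune]]
    mask[pruned] = False
    weights[pruned] = 0.0

    # Regrow: activate the same number of currently inactive weights.
    inactive = np.flatnonzero(~mask)
    regrown = rng.choice(inactive, size=n_prune, replace=False)
    mask[regrown] = True
    return weights, mask

# Usage: keep a toy weight vector at ~90% sparsity across updates.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
mask = rng.random(1000) < 0.1   # ~10% of weights active
w = w * mask
w, mask = prune_and_regrow(w, mask, prune_frac=0.2, rng=rng)
```

In the real algorithm this step is interleaved with normal gradient updates, and the prune fraction is annealed over training.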
A new model for word embeddings that are resilient to misspellings
https://ai.facebook.com/blog/-a-new-model-for-word-embeddings-that-are-resilient-to-misspellings-/
https://github.com/facebookresearch/moe?fbclid=IwAR3pCHx4-8oWTqgYqUnKHxcVWdDzPuOVTL0sTidyDBX9J7UPt2HcWxRG9AA
Facebook
A new model for word embeddings that are resilient to misspellings
Misspelling Oblivious Embeddings (MOE) is a new model for word embeddings that are resilient to misspellings, improving the ability to apply word embeddings to real-world situations, where misspellings are common.
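MOE trains fastText-style subword embeddings jointly with a spell-correction term that pulls a misspelling's embedding toward the embedding of its correct form. The sketch below shows only that auxiliary pull on plain per-token vectors (the fastText skip-gram term and subword averaging are omitted); the function name and `alpha` weighting are illustrative, not the library's API.

```python
import numpy as np

def spell_correction_step(emb, misspelling, correct, lr=0.1, alpha=1.0):
    """One SGD step on alpha * ||v_miss - v_correct||^2 w.r.t. v_miss.

    Only the misspelling's vector moves; the correct word's vector is
    treated as the anchor, mirroring the asymmetry of the MOE loss.
    """
    diff = emb[misspelling] - emb[correct]
    emb[misspelling] -= lr * alpha * 2.0 * diff
    return emb

rng = np.random.default_rng(0)
emb = {"hello": rng.normal(size=8), "helo": rng.normal(size=8)}
for _ in range(50):
    emb = spell_correction_step(emb, "helo", "hello")
# After these steps "helo" sits close to "hello" in embedding space.
```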
The Best of AI: New Articles Published This Month (July 2019)
https://blog.sicara.com/07-2019-best-ai-new-articles-this-month-3e1fa3f6c321
NVIDIA Clocks World’s Fastest BERT Training Time and Largest Transformer Based Model, Paving Path For Advanced Conversational AI
https://devblogs.nvidia.com/training-bert-with-gpus/
NVIDIA Technical Blog
NVIDIA Clocks World’s Fastest BERT Training Time and Largest Transformer Based Model, Paving Path For Advanced Conversational AI
NVIDIA DGX SuperPOD trains BERT-Large in just 47 minutes and trains GPT-2 8B, the largest Transformer network ever, with 8.3Bn parameters. Conversational AI is an essential building block of human…
The Illustrated GPT-2 (Visualizing Transformer Language Models)
https://jalammar.github.io/illustrated-gpt2/
jalammar.github.io
The Illustrated GPT-2 (Visualizing Transformer Language Models)
Discussions:
Hacker News (64 points, 3 comments), Reddit r/MachineLearning (219 points, 18 comments)
Translations: Simplified Chinese, French, Korean, Russian, Turkish
This year, we saw a dazzling application of machine learning. The OpenAI GPT…
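The centerpiece of the post is masked (causal) self-attention: each position in a GPT-2 decoder block may attend only to itself and earlier positions. A minimal single-head NumPy sketch of that mechanism (weight matrices random here, just to show the shapes and the mask):

```python
import numpy as np

def masked_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a sequence x of shape
    (n_tokens, d_model): scores to future positions are masked out
    before the softmax, so position i only sees positions <= i."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    n = scores.shape[0]
    # Causal mask: forbid attention to future positions.
    scores = np.where(np.tril(np.ones((n, n), dtype=bool)), scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))            # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = masked_self_attention(x, Wq, Wk, Wv)
```

Because of the mask, editing the last token cannot change the outputs at earlier positions, which is what lets GPT-2 generate text left to right.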
We have a new collection of articles made by our team. Please forward channel posts to your friends and colleagues to help our channel. Thank you!
Continuous Control for High-Dimensional State Spaces: An Interactive Learning Approach
https://arxiv.org/abs/1908.05256
arXiv.org
Continuous Control for High-Dimensional State Spaces: An Interactive Learning Approach
Deep Reinforcement Learning (DRL) has become a powerful methodology to solve complex decision-making problems. However, DRL has several limitations when used in real-world problems (e.g., robotics...
Efficient Segmentation: Learning Downsampling Near Semantic Boundaries
abstract: https://research.fb.com/publications/efficient-segmentation-learning-downsampling-near-semantic-boundaries
paper: https://research.fb.com/wp-content/uploads/2019/08/Efficient-Segmentation-Learning-Downsampling-Near-Semantic-Boundaries.pdf?
Facebook Research
Efficient Segmentation: Learning Downsampling Near Semantic Boundaries - Facebook Research
Many automated processes such as auto-piloting rely on a good semantic segmentation as a critical component. To speed up performance, it is common to downsample the input frame. However, this comes at the cost of missed small objects and reduced accuracy…
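The paper learns where to place downsampling locations so that resolution is spent near semantic boundaries rather than uniformly. As a hand-crafted 1D stand-in for that idea (the real method learns the sampling and works in 2D), one can turn a boundary-importance signal into sample positions by inverse-transform sampling on its cumulative sum; everything below is illustrative:

```python
import numpy as np

def boundary_aware_positions(labels, n_samples, eps=0.05):
    """Pick n_samples positions along a 1D label row, denser near
    label boundaries. eps keeps some uniform coverage everywhere."""
    labels = np.asarray(labels)
    # Importance: high near a label change, eps elsewhere.
    edges = np.concatenate([[0], labels[1:] != labels[:-1]]).astype(float)
    importance = np.convolve(edges, np.ones(5), mode="same") + eps
    cdf = np.cumsum(importance)
    cdf = cdf / cdf[-1]
    # Inverse-transform sampling: uniform quantiles -> positions.
    quantiles = (np.arange(n_samples) + 0.5) / n_samples
    return np.searchsorted(cdf, quantiles)

labels = np.array([0] * 40 + [1] * 10 + [0] * 50)  # small object at 40..49
pos = boundary_aware_positions(labels, n_samples=16)
# Most of the 16 sample positions cluster near indices 40 and 50,
# so the small object's boundaries survive the downsampling.
```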
New State of the Art AI Optimizer: Rectified Adam (RAdam). Improve your AI accuracy instantly versus Adam, and why it works.
https://medium.com/@lessw/new-state-of-the-art-ai-optimizer-rectified-adam-radam-5d854730807b
Medium
New State of the Art AI Optimizer: Rectified Adam (RAdam). Improve your AI accuracy instantly versus Adam, and why it works.
A new paper by Liu, Jiang, He et al. introduces RAdam, or “Rectified Adam”. It’s a new variation of the classic Adam optimizer that provides…
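RAdam's fix is a rectification of Adam's adaptive learning rate: while the exponential variance estimate is still unreliable (few samples), it falls back to plain momentum SGD, and once enough variance has accumulated (rho_t > 4) it scales the adaptive step by a rectification factor. A minimal sketch of one parameter update following that rule (function name and the toy quadratic are illustrative):

```python
import numpy as np

def radam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One RAdam update at (1-based) step t."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    rho_inf = 2.0 / (1 - b2) - 1.0
    rho_t = rho_inf - 2.0 * t * b2 ** t / (1 - b2 ** t)
    if rho_t > 4.0:
        # Variance is tractable: rectified adaptive step.
        v_hat = np.sqrt(v / (1 - b2 ** t))
        r = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                    / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        w = w - lr * r * m_hat / (v_hat + eps)
    else:
        # Warmup phase: un-adapted SGD with momentum.
        w = w - lr * m_hat
    return w, m, v

# Usage: minimize f(w) = ||w||^2 (gradient 2w) from a random start.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
m = np.zeros(4)
v = np.zeros(4)
for t in range(1, 2001):
    w, m, v = radam_step(w, 2 * w, m, v, t, lr=0.05)
```

With the default betas the rectification only activates after the first few steps, which is exactly the implicit warmup the article credits for RAdam's stability.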
Evaluating and Testing Unintended Memorization in Neural Networks
https://bair.berkeley.edu/blog/2019/08/13/memorization/
The Berkeley Artificial Intelligence Research Blog
Evaluating and Testing Unintended Memorization in Neural Networks
The BAIR Blog
#ai #machinelearning
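The testing methodology the post describes quantifies memorization with an "exposure" metric: insert a random canary into the training data, then rank it by model loss against a large set of random candidate sequences. A minimal sketch, assuming you already have per-sequence losses from the model (values here are made up):

```python
import math

def exposure(canary_loss, candidate_losses):
    """Exposure = log2(candidate-space size) - log2(canary rank),
    where rank 1 means the canary has the lowest loss of all
    candidates. High exposure means the model finds the inserted
    canary unusually likely, i.e., it was memorized."""
    losses = sorted(candidate_losses + [canary_loss])
    rank = losses.index(canary_loss) + 1   # 1-based rank
    total = len(losses)
    return math.log2(total) - math.log2(rank)

# A memorized canary: lower loss than all 1023 random candidates.
cands = [5.0 + 0.01 * i for i in range(1023)]
print(exposure(1.0, cands))   # log2(1024) - log2(1) = 10.0
```

A canary the model treats like any other random sequence lands mid-pack and gets exposure near zero, which is the pass condition such a test checks for.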
• 1146 leaderboards
• 1223 tasks
• 1105 datasets
• 14779 papers with code
https://paperswithcode.com/sota
Transfer learning in natural language processing tutorial
https://docs.google.com/presentation/d/1fIhGikFPnb7G5kr58OvYC3GN4io7MznnM0aAgadvJfc/mobilepresent?slide=id.g58bdd596a1_0_0