The Annotated Transformer
The Transformer – a model that uses attention to boost the speed with which sequence-to-sequence models can be trained.
https://nlp.seas.harvard.edu/2018/04/03/attention.html
The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/
Habr: https://habr.com/ru/post/486358/
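At the heart of the Transformer is scaled dot-product attention. As a quick illustration (a minimal NumPy sketch, not the annotated implementation from the post), the operation is softmax(QKᵀ/√d_k)V:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# toy example: 3 query positions, 4 key/value positions, d_k = 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)              # (3, 8): one output vector per query
print(weights.sum(axis=-1))   # each row of attention weights sums to 1
```

Each output row is a convex combination of the value vectors, weighted by how well the query matches each key.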
TensorFlow Lattice: Flexible, controlled and interpretable ML
The library enables you to inject domain knowledge into the learning process through common-sense or policy-driven shape constraints.
https://blog.tensorflow.org/2020/02/tensorflow-lattice-flexible-controlled-and-interpretable-ML.html
Video: https://www.youtube.com/watch?v=ABBnNjbjv2Q&feature=emb_logo
Github: https://github.com/tensorflow/lattice
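The core shape constraint is monotonicity: the calibrated output is guaranteed to never decrease as the input grows. A minimal NumPy sketch of that idea (illustrative only — the real library exposes this via its `PWLCalibration` layer and trains the heights by gradient descent):

```python
import numpy as np

class MonotonicPWLCalibrator:
    """Piecewise-linear calibration with a monotonicity shape constraint,
    in the spirit of TensorFlow Lattice's PWL calibration (sketch only)."""

    def __init__(self, keypoints, raw_heights):
        self.keypoints = np.asarray(keypoints, dtype=float)
        # softplus keeps every segment increment positive, so the
        # calibrator is non-decreasing by construction
        increments = np.log1p(np.exp(np.asarray(raw_heights, dtype=float)))
        self.values = np.concatenate([[0.0], np.cumsum(increments)])

    def __call__(self, x):
        # linear interpolation between (keypoint, value) pairs
        return np.interp(x, self.keypoints, self.values)

cal = MonotonicPWLCalibrator(keypoints=[0, 1, 2, 3], raw_heights=[-1.0, 0.5, 2.0])
xs = np.linspace(0, 3, 50)
ys = cal(xs)
print(np.all(np.diff(ys) >= 0))  # True: monotone regardless of raw_heights
```

Because monotonicity holds for any parameter values, the constraint survives training — this is the "inject domain knowledge" idea in the description.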
Recurrent Neural Networks (RNN) with Keras
Recurrent neural networks (RNN) are a class of neural networks that are powerful for modeling sequence data such as time series or natural language.
https://www.tensorflow.org/guide/keras/rnn
Source code: https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/rnn.ipynb
Habr : https://habr.com/ru/post/487808/
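The recurrence itself is simple: each timestep's hidden state is a function of the current input and the previous state. A pure-NumPy sketch of the vanilla recurrence (the same computation `tf.keras.layers.SimpleRNN` performs, minus training):

```python
import numpy as np

def simple_rnn_forward(x_seq, Wx, Wh, b):
    """Vanilla RNN layer: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x_t in x_seq:               # iterate over timesteps
        h = np.tanh(x_t @ Wx + h @ Wh + b)
        states.append(h)
    return np.stack(states)         # (timesteps, units)

rng = np.random.default_rng(0)
timesteps, features, units = 5, 3, 4
x = rng.normal(size=(timesteps, features))
Wx = rng.normal(size=(features, units))   # input-to-hidden weights
Wh = rng.normal(size=(units, units))      # hidden-to-hidden weights
b = np.zeros(units)
states = simple_rnn_forward(x, Wx, Wh, b)
print(states.shape)  # (5, 4): one hidden state per timestep
```

Returning the full state sequence corresponds to `return_sequences=True` in Keras; returning only `states[-1]` corresponds to the default.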
Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
Official PyTorch implementation of the pre-print "Unsupervised Discovery of Interpretable Directions in the GAN Latent Space".
Code: https://github.com/anvoynov/GANLatentDiscovery
Paper: https://arxiv.org/abs/2002.03754
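Once a direction is discovered, editing is just latent arithmetic: move the code z along the direction and re-generate. A schematic sketch (the generator G and the learned direction are placeholders here — the repo learns the directions jointly with a reconstructor):

```python
import numpy as np

def edit_latent(z, direction, shift):
    """Move a latent code along a unit-normalized direction:
    the edited image would be G(z + shift * d)."""
    d = direction / np.linalg.norm(direction)
    return z + shift * d

rng = np.random.default_rng(0)
z = rng.normal(size=128)   # latent code for some generator G (not shown)
d = rng.normal(size=128)   # stands in for a discovered interpretable direction
z_edit = edit_latent(z, d, shift=3.0)
print(np.allclose(np.linalg.norm(z_edit - z), 3.0))  # True: moved by |shift|
```

Varying `shift` smoothly varies one attribute (e.g. rotation or background) while leaving others mostly fixed — that disentanglement is what the unsupervised objective optimizes for.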
The popularity of machine learning is so great that people try to apply it wherever they can. Some attempts to replace classical approaches with neural networks turn out to be unsuccessful. Here we consider machine learning in the context of building effective static code analyzers for finding bugs and potential vulnerabilities.
The PVS-Studio team believes that with machine learning, there are many pitfalls lurking in code analysis tasks.
https://bit.ly/2vqmeV7
Learning to See Transparent Objects
ClearGrasp uses three neural networks: one to estimate surface normals, one to detect occlusion boundaries (depth discontinuities), and one to mask transparent objects.
Google research: https://ai.googleblog.com/2020/02/learning-to-see-transparent-objects.html
Code: https://github.com/Shreeyak/cleargrasp
Dataset: https://sites.google.com/view/transparent-objects
3D Shape Estimation of Transparent Objects for Manipulation: https://sites.google.com/view/cleargrasp
fastai—A Layered API for Deep Learning
https://www.fast.ai//2020/02/13/fastai-A-Layered-API-for-Deep-Learning/
Complete documentation and tutorials:
https://docs.fast.ai/
Capsules with Inverted Dot-Product Attention Routing
New routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent’s state and the child’s vote.
Code: https://github.com/apple/ml-capsules-inverted-attention-routing
Paper: https://openreview.net/pdf?id=HJe6uANtwH
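The routing rule described above can be sketched in a few lines: compute each child's vote for each parent, score agreement as a dot product between the parent's current state and the vote, and normalize with a softmax over parents. This is a simplified NumPy illustration (the paper adds layer norm and learned vote transforms):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inverted_dot_product_routing(votes, n_iters=3):
    """Routing by agreement: each child routes its vote to the parent whose
    current state agrees with it. votes: (n_children, n_parents, dim)."""
    n_children, n_parents, dim = votes.shape
    parents = np.zeros((n_parents, dim))
    routing = np.full((n_children, n_parents), 1.0 / n_parents)
    for _ in range(n_iters):
        # agreement[i, j] = <parent_j state, vote of child i for parent j>
        agreement = np.einsum('ijd,jd->ij', votes, parents)
        routing = softmax(agreement, axis=1)       # normalize over parents
        parents = np.einsum('ij,ijd->jd', routing, votes) / n_children
    return routing, parents

rng = np.random.default_rng(0)
votes = rng.normal(size=(6, 3, 4))  # 6 children, 3 parents, 4-dim poses
routing, parents = inverted_dot_product_routing(votes)
print(np.allclose(routing.sum(axis=1), 1.0))  # True: coefficients sum to 1
```

The "inverted" part is that the softmax normalizes over parents (each child distributes its vote), the opposite of standard attention, which normalizes over children.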
Matrix Compression Operator
https://blog.tensorflow.org/2020/02/matrix-compression-operator-tensorflow.html
Experimental API that facilitates matrix compression of a neural network's weight tensors: https://github.com/google-research/google-research/tree/master/graph_compression
Full documentation: https://drive.google.com/file/d/1843aNpKx_rznpuh9AmEshgAKmISVdpJY/view
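One of the compression schemes such an operator can apply is low-rank factorization of a weight matrix. A minimal NumPy sketch of the idea (illustrative — the TensorFlow operator does this gradually during training rather than post hoc):

```python
import numpy as np

def low_rank_compress(W, rank):
    """Replace a weight matrix W (m x n) with a rank-r factorization A @ B,
    cutting storage from m*n to (m + n) * r parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]     # (m, r), singular values folded in
    B = Vt[:rank, :]               # (r, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))
A, B = low_rank_compress(W, rank=16)
print(A.shape, B.shape)            # (256, 16) (16, 128)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(0.0 < err < 1.0)             # True; error depends on W's spectrum
```

By the Eckart–Young theorem, the truncated SVD is the best rank-r approximation in Frobenius norm, which is why it is a natural baseline for weight compression.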
GANILLA: Generative Adversarial Networks for Image to Illustration Translation.
Github: https://github.com/giddyyupp/ganilla
Dataset: https://github.com/giddyyupp/ganilla/blob/master/docs/datasets.md
Paper: https://arxiv.org/abs/2002.05638v1
Learning to Rank with XGBoost and GPU | NVIDIA Developer Blog
https://devblogs.nvidia.com/learning-to-rank-with-xgboost-and-gpu/
XGBoost is a widely used machine learning library, which uses gradient boosting techniques to incrementally build a better model during the training phase by combining multiple weak models.
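That "combine multiple weak models" idea can be sketched in a few lines: each boosting round fits a weak learner (here a one-split regression stump, purely for illustration) to the residuals of the current ensemble. This is a toy squared-loss version, not XGBoost's ranking objective:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump on 1-D data (least squares)."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        sse = ((r - pred) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def gradient_boost(x, y, n_rounds=20, lr=0.3):
    """Each round fits a stump to the residuals (the negative gradient of
    squared loss) and adds it to the ensemble with a small learning rate."""
    pred = np.full_like(y, y.mean())
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)
        pred = pred + lr * stump(x)
    return pred

x = np.linspace(0, 1, 50)
y = np.sin(4 * x)
pred = gradient_boost(x, y)
print(np.mean((y - pred) ** 2) < np.var(y))  # True: beats the mean baseline
```

For learning to rank, the same machinery applies with a pairwise objective and per-query groups; the GPU work in the post accelerates exactly those per-group gradient computations.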
The Illustrated BERT, ELMo, and co. (How NLP Cracked Transfer Learning)
https://jalammar.github.io/illustrated-bert/
Habr ru: https://habr.com/ru/post/487358/
BERT FineTuning with Cloud TPUs notebook: https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb
Deep learning of dynamical attractors from time series measurements
Embed complex time series using autoencoders and a loss function based on penalizing false-nearest-neighbors.
Code: https://github.com/williamgilpin/fnn
Paper: https://arxiv.org/abs/2002.05909
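For context, the classical baseline the paper builds on is the time-delay (Takens) embedding, where a scalar series is lifted into d dimensions using lagged copies of itself. A short NumPy sketch (the fnn repo replaces these fixed delays with a learned autoencoder embedding regularized by the false-nearest-neighbors penalty):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)                       # scalar measurement of a limit cycle
emb = delay_embed(x, dim=3, tau=25)
print(emb.shape)                    # (1950, 3)
```

In this embedded space, the 1-D measurement traces out the underlying attractor (here a closed loop); the false-nearest-neighbors criterion measures whether the chosen dimension is large enough that no neighbors are artifacts of projection.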
How to Develop an Imbalanced Classification Model to Detect Oil Spills
https://machinelearningmastery.com/imbalanced-classification-model-to-detect-oil-spills/
ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters
https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/
The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding
Code: https://github.com/namisan/mt-dnn
Paper: https://arxiv.org/abs/2002.07972v1
Implementation of the BASIS algorithm for source separation with deep generative priors
This repository provides an implementation of the BASIS (Bayesian Annealed SIgnal Source) separation algorithm. BASIS separation uses annealed Langevin dynamics to sample from the posterior distribution of source components given a mixed signal.
Github: https://github.com/jthickstun/basis-separation
Paper: https://arxiv.org/abs/2002.07942
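The annealed Langevin sampler at the core of BASIS can be illustrated on a toy target whose score (gradient of the log-density) is known in closed form. A schematic NumPy sketch (the real algorithm uses a learned score network over source components conditioned on the mixture):

```python
import numpy as np

def langevin_sample(score, x0, sigmas, steps=100, eps=0.01):
    """Annealed Langevin dynamics: at each noise level sigma, iterate
    x <- x + (a/2) * score(x) + sqrt(a) * noise, with step size a
    scaled to sigma^2, annealing from coarse to fine."""
    rng = np.random.default_rng(0)
    x = x0.copy()
    for sigma in sigmas:                  # large noise first, then refine
        a = eps * sigma ** 2
        for _ in range(steps):
            x = x + 0.5 * a * score(x) + np.sqrt(a) * rng.normal(size=x.shape)
    return x

# toy target: standard normal, whose score is exactly -x
score = lambda x: -x
x = langevin_sample(score, x0=np.full(5000, 10.0), sigmas=[5.0, 2.0, 1.0])
print(abs(x.mean()) < 0.5, abs(x.std() - 1.0) < 0.5)  # samples near N(0, 1)
```

Starting far from the target (at 10.0), the chain is pulled toward the high-density region by the score term while the injected noise keeps it sampling rather than just optimizing.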
A Gentle Introduction to the Fbeta-Measure for Machine Learning
https://machinelearningmastery.com/fbeta-measure-for-machine-learning/
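The F-beta measure generalizes F1 by weighting recall beta times as much as precision: F_beta = (1 + beta²)·P·R / (beta²·P + R). A small self-contained implementation on a toy confusion (the values below are just an illustration, not from the article):

```python
import numpy as np

def fbeta(y_true, y_pred, beta):
    """F-beta: beta < 1 favors precision, beta > 1 favors recall, beta = 1 is F1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    precision = tp / max(np.sum(y_pred == 1), 1)
    recall = tp / max(np.sum(y_true == 1), 1)
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
# precision = 2/3, recall = 2/4
print(round(fbeta(y_true, y_pred, beta=1.0), 3))  # 0.571
# here precision > recall, so the precision-weighted F0.5 exceeds F2
print(fbeta(y_true, y_pred, beta=2.0) < fbeta(y_true, y_pred, beta=0.5))  # True
```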
JAX-based neural network library
https://github.com/deepmind/dm-haiku
Haiku Documentation: https://dm-haiku.readthedocs.io/en/latest/
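Haiku's central idea is turning an object-oriented module definition into a pure (init, apply) pair of functions. A dependency-free NumPy sketch of that transform pattern (illustrative only — the real library does this for JAX functions via `hk.transform`, with proper name scoping and RNG handling):

```python
import numpy as np

def transform(forward_fn):
    """Split a parameter-creating forward function into pure init/apply."""
    def init(rng, x):
        params = {}
        forward_fn(params, x, rng=rng, init=True)  # side effect: fill params
        return params
    def apply(params, x):
        return forward_fn(params, x, rng=None, init=False)
    return init, apply

def linear(params, name, x, out_dim, rng, init):
    if init:  # create parameters on first trace, like a Haiku module would
        params[name] = {"w": rng.normal(size=(x.shape[-1], out_dim)) * 0.01,
                        "b": np.zeros(out_dim)}
    p = params[name]
    return x @ p["w"] + p["b"]

def net(params, x, rng, init):
    h = np.tanh(linear(params, "layer1", x, 8, rng, init))
    return linear(params, "layer2", h, 1, rng, init)

init, apply = transform(net)
params = init(np.random.default_rng(0), np.zeros((2, 4)))
out = apply(params, np.ones((2, 4)))
print(sorted(params), out.shape)  # ['layer1', 'layer2'] (2, 1)
```

Separating parameters from computation is what makes the resulting `apply` a pure function, and therefore compatible with JAX transformations like `jit` and `grad`.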