Uplift modeling tutorial
https://habr.com/ru/company/ru_mts/blog/485980/
Code example: https://nbviewer.jupyter.org/github/maks-sh/scikit-uplift/blob/master/notebooks/RetailHero.ipynb
Habr: Uplift modeling tutorial, Part 1
The MTS Big Data team actively extracts knowledge from its data and solves a large number of business problems. One of the types of machine learning tasks we encounter is…
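The tutorial's core quantity — uplift, the change in response probability caused by a treatment — can be sketched in a few lines of NumPy. This is a minimal segment-level estimator on toy data (segment names and values are illustrative, not from the tutorial); the tutorial itself covers model-based approaches as well.

```python
import numpy as np

def segment_uplift(segment, treated, outcome):
    """Estimate uplift per segment as
    P(outcome=1 | treated, segment) - P(outcome=1 | control, segment)."""
    uplift = {}
    for s in np.unique(segment):
        m = segment == s
        p_t = outcome[m & (treated == 1)].mean()  # response rate if contacted
        p_c = outcome[m & (treated == 0)].mean()  # response rate if not
        uplift[s] = p_t - p_c
    return uplift

# Toy data: segment 0 responds to the communication, segment 1 does not.
segment = np.array([0, 0, 0, 0, 1, 1, 1, 1])
treated = np.array([1, 1, 0, 0, 1, 1, 0, 0])
outcome = np.array([1, 1, 0, 0, 1, 0, 1, 0])

up = segment_uplift(segment, treated, outcome)
```

A positive uplift marks customers worth targeting; near-zero uplift marks customers who behave the same either way.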
TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval
Github: https://github.com/jayleicn/TVRetrieval
PyTorch implementation of MultiModal Transformer (MMT), a method for multimodal (video + subtitle) captioning: https://github.com/jayleicn/TVCaption
Paper: https://arxiv.org/abs/2001.09099v1
Statistical_Consequences_of_Fat.pdf (27.3 MB)
📚Fresh book by Nassim Taleb
Statistical Consequences of Fat Tails: Real World Preasymptotics, Epistemology, and Applications
https://arxiv.org/abs/2001.10488
@ai_machinelearning_big_data
Open Source Differentiable Computer Vision Library for PyTorch
https://kornia.org
Code: https://github.com/kornia/kornia
Paper: https://arxiv.org/abs/1910.02190v2
Project DeepSpeech
A TensorFlow implementation of Baidu's DeepSpeech architecture
Code: https://github.com/mozilla/DeepSpeech
Tensorflow & Pytorch: https://github.com/DemisEom/SpecAugment
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition:
https://arxiv.org/pdf/1904.08779.pdf
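The two masking operations at the heart of SpecAugment are easy to sketch in NumPy: zero out a random band of frequency bins and a random span of time frames (the paper additionally uses time warping, which is omitted here; the parameter names below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

def spec_augment(spec, max_freq_mask=8, max_time_mask=10):
    """Minimal SpecAugment-style masking: one random frequency-band mask
    and one random time-span mask applied to a spectrogram."""
    spec = spec.copy()
    n_freq, n_time = spec.shape
    f = rng.integers(0, max_freq_mask + 1)   # mask width in frequency bins
    f0 = rng.integers(0, n_freq - f + 1)
    spec[f0:f0 + f, :] = 0.0
    t = rng.integers(0, max_time_mask + 1)   # mask width in time frames
    t0 = rng.integers(0, n_time - t + 1)
    spec[:, t0:t0 + t] = 0.0
    return spec

spec = np.ones((80, 100))   # stand-in log-mel spectrogram: 80 bins x 100 frames
aug = spec_augment(spec)
```

Because the augmentation acts on spectrograms rather than raw audio, it can be applied on the fly during training at negligible cost.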
DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high-power GPU servers.
Filter Sketch for Network Pruning
Framework of FilterSketch. The top displays the second-order covariance of the pre-trained CNN
Code: https://github.com/lmbxmu/FilterSketch
Paper: https://arxiv.org/abs/2001.08514v1
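For context on what filter pruning does, here is a sketch of the classic L1-norm baseline criterion: rank convolutional filters by weight magnitude and keep the strongest. Note this is not FilterSketch's own criterion — the paper instead preserves the second-order covariance information via matrix sketching — it only illustrates the general setting.

```python
import numpy as np

def prune_filters_l1(weights, keep_ratio=0.5):
    """Generic filter pruning by L1 norm (a classic baseline -- FilterSketch
    ranks filters via matrix sketching of the covariance instead).
    weights: (out_channels, in_channels, kH, kW) conv kernel tensor."""
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # strongest filters
    return weights[keep], keep

w = np.zeros((4, 3, 3, 3))
w[0] += 1.0   # strong filter
w[2] += 0.5   # medium filter
pruned, kept = prune_filters_l1(w, keep_ratio=0.5)
```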
How to Configure XGBoost for Imbalanced Classification
https://machinelearningmastery.com/xgboost-for-imbalanced-classification/
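The central knob the article discusses is XGBoost's `scale_pos_weight` parameter, conventionally set to the ratio of negative to positive examples so that errors on the rare class count proportionally more. A minimal sketch of the computation (the toy labels below are illustrative):

```python
import numpy as np

def scale_pos_weight(y):
    """Heuristic for imbalanced binary classification:
    scale_pos_weight = count(negative) / count(positive)."""
    y = np.asarray(y)
    neg, pos = (y == 0).sum(), (y == 1).sum()
    return neg / pos

# 1% positive class -> positives weighted 99x
y = np.array([0] * 990 + [1] * 10)
w = scale_pos_weight(y)
# Would then be passed as: XGBClassifier(scale_pos_weight=w)
```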
Torch-Struct: Deep Structured Prediction Library
Code: https://github.com/harvardnlp/pytorch-struct
Paper: https://arxiv.org/abs/2002.00876v1
Fast, general, and tested differentiable structured prediction in PyTorch: https://nlp.seas.harvard.edu/pytorch-struct/
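To see the kind of dynamic program Torch-Struct wraps as a differentiable distribution, here is a plain-NumPy Viterbi decoder for a linear-chain model (an illustrative sketch, not the library's API — Torch-Struct expresses the same computation with batched, autograd-friendly semirings in PyTorch).

```python
import numpy as np

def viterbi(log_emit, log_trans):
    """Max-scoring tag sequence for a linear-chain model.
    log_emit: (T, K) per-step tag scores; log_trans: (K, K) transition scores."""
    T, K = log_emit.shape
    score = log_emit[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans   # (prev_tag, next_tag) candidates
        back[t] = cand.argmax(axis=0)       # best predecessor per next tag
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):           # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two tags; transitions strongly discourage repeating the same tag.
log_emit = np.array([[2.0, 0.0], [0.0, 0.1], [2.0, 0.0]])
log_trans = np.array([[-5.0, 0.0], [0.0, -5.0]])
best = viterbi(log_emit, log_trans)
```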
Using ‘radioactive data’ to detect if a data set was used for training
https://ai.facebook.com/blog/using-radioactive-data-to-detect-if-a-data-set-was-used-for-training/
Paper: https://arxiv.org/abs/2002.00937
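The marking idea can be illustrated with a NumPy toy: shift data points slightly along a secret unit "carrier" direction, then test whether a dataset's mean alignment with that carrier is significantly above zero. In the paper the mark lives in a classifier's feature space and detection is a statistical test on the trained model's weights; the sketch below (all names and values illustrative) only shows the mark/detect intuition on raw vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def mark(features, carrier, eps):
    """Shift each feature vector slightly along a secret unit-norm carrier."""
    return features + eps * carrier

def detect(features, carrier):
    """Mean cosine alignment with the carrier: near zero for clean data,
    noticeably positive for marked data."""
    cos = (features @ carrier) / np.linalg.norm(features, axis=1)
    return float(cos.mean())

d, n = 256, 1000
carrier = rng.normal(size=d)
carrier /= np.linalg.norm(carrier)
clean = rng.normal(size=(n, d))
marked = mark(clean, carrier, eps=2.0)
```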
Facebook AI has developed a new technique to mark the images in a dataset, so that researchers can then determine if a particular machine learning model has been trained using those images.
Forwarded from Data Science
Agile Machine Learning.pdf (4.1 MB)
Agile Machine Learning: Effective Machine Learning Inspired by the Agile Manifesto (2019)
@datascienceiot
🔥Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation
SelectionGAN for guided image-to-image translation: translating an input image into another while respecting external semantic guidance
Code: https://github.com/Ha0Tang/SelectionGAN
Paper: https://arxiv.org/abs/2002.01048v1
@ai_machinelearning_big_data
CCMatrix: A billion-scale bitext data set for training translation models
CCMatrix is the largest data set of high-quality, web-based bitexts for training translation models
https://ai.facebook.com/blog/ccmatrix-a-billion-scale-bitext-data-set-for-training-translation-models/
Paper: https://arxiv.org/abs/1911.04944
Github: https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix
@ai_machinelearning_big_data
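CCMatrix mines parallel sentences with a margin criterion over multilingual sentence embeddings: a pair counts as a translation when its similarity stands out against each side's nearest neighbours, not merely when it is high in absolute terms. Below is a small dense NumPy sketch of ratio-margin scoring (the real pipeline uses LASER embeddings and approximate nearest-neighbour search at billion scale; the toy embeddings are illustrative).

```python
import numpy as np

def margin_scores(src, tgt, k=2):
    """Ratio-margin scoring for bitext mining: cosine similarity of a pair
    divided by the average of each side's k-nearest-neighbour similarities."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = src @ tgt.T                                     # all-pairs cosine
    knn_src = np.sort(sim, axis=1)[:, -k:].mean(axis=1)   # per-source k-NN avg
    knn_tgt = np.sort(sim, axis=0)[-k:, :].mean(axis=0)   # per-target k-NN avg
    return sim / ((knn_src[:, None] + knn_tgt[None, :]) / 2)

# Toy "sentence embeddings": row i of tgt is the translation of row i of src.
src = np.array([[1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.9, 0.1], [0.1, 0.9]])
pairs = margin_scores(src, tgt, k=1).argmax(axis=1)
```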
Mutual Information-based State-Control for Intrinsically Motivated Reinforcement Learning
Agent Learning Framework: https://github.com/HorizonRobotics/alf
Github: https://github.com/ruizhaogit/misc
Paper: https://arxiv.org/abs/2002.01963v1
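The quantity the method maximizes as an intrinsic reward is the mutual information between states and controls. For discrete variables it can be computed directly from a joint count table, as in this NumPy sketch (the toy tables are illustrative; the paper estimates MI between learned continuous state variables rather than from counts):

```python
import numpy as np

def mutual_information(joint):
    """I(S;A) = sum_{s,a} p(s,a) * log( p(s,a) / (p(s) p(a)) ),
    computed in nats from a joint count table."""
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)   # marginal over states
    pa = p.sum(axis=0, keepdims=True)   # marginal over actions
    nz = p > 0                          # skip zero cells (0 * log 0 = 0)
    return float((p[nz] * np.log(p[nz] / (ps @ pa)[nz])).sum())

# Perfectly coupled state/control -> I = log 2 nats; independent -> 0.
coupled = np.array([[5.0, 0.0], [0.0, 5.0]])
independent = np.array([[2.5, 2.5], [2.5, 2.5]])
```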
The Annotated Transformer
The Transformer – a model that uses attention to speed up training compared with recurrent sequence models.
https://nlp.seas.harvard.edu/2018/04/03/attention.html
The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/
Habr: https://habr.com/ru/post/486358/
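The building block both posts walk through is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy version (single head, no masking — the toy Q/K/V below are illustrative):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over keys
    return w @ v, w

# One query aligned with the first key: output leans toward the first value.
q = np.array([[1.0, 0.0]])
k = np.array([[1.0, 0.0], [0.0, 1.0]])
v = np.array([[10.0, 0.0], [0.0, 10.0]])
out, w = attention(q, k, v)
```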
TensorFlow Lattice: Flexible, controlled and interpretable ML
The library enables you to inject domain knowledge into the learning process through common-sense or policy-driven shape constraints.
https://blog.tensorflow.org/2020/02/tensorflow-lattice-flexible-controlled-and-interpretable-ML.html
Video: https://www.youtube.com/watch?v=ABBnNjbjv2Q&feature=emb_logo
Github: https://github.com/tensorflow/lattice
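The signature shape constraint TF Lattice offers is monotonicity. One standard way to guarantee it, sketched below in plain NumPy, is to parameterize a piecewise-linear calibrator's keypoint outputs as a cumulative sum of non-negative increments — the learned function then cannot decrease no matter what the raw parameters are. This is an illustrative construction in the library's spirit, not its actual implementation.

```python
import numpy as np

def monotonic_calibrator(keypoints_x, raw_deltas):
    """1-D piecewise-linear calibrator with a built-in monotonicity
    constraint: keypoint outputs are a cumulative sum of |raw_deltas|,
    so the function is non-decreasing for any parameter values."""
    y = np.concatenate([[0.0], np.cumsum(np.abs(raw_deltas))])
    def f(x):
        return np.interp(x, keypoints_x, y)
    return f

xs = np.array([0.0, 1.0, 2.0, 3.0])
# Even with a negative raw parameter, the fitted curve stays monotone.
f = monotonic_calibrator(xs, raw_deltas=np.array([0.5, -2.0, 1.0]))
```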
Recurrent Neural Networks (RNN) with Keras
Recurrent neural networks (RNNs) are a class of neural networks that are powerful for modeling sequence data such as time series or natural language.
https://www.tensorflow.org/guide/keras/rnn
Source code: https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/rnn.ipynb
Habr: https://habr.com/ru/post/487808/
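The recurrence the guide builds on is simple: at each time step the hidden state is updated from the current input and the previous state. A bare NumPy version of the SimpleRNN-style update h_t = tanh(x_t W_x + h_{t-1} W_h + b), with random toy weights (illustrative, untrained):

```python
import numpy as np

def simple_rnn(x, w_x, w_h, b):
    """Run a SimpleRNN-style recurrence over a sequence and return the
    final hidden state: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h + b)."""
    h = np.zeros(w_h.shape[0])
    for x_t in x:                       # iterate over time steps
        h = np.tanh(x_t @ w_x + h @ w_h + b)
    return h

T, d_in, d_hid = 5, 3, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(T, d_in))          # toy input sequence
h = simple_rnn(x,
               rng.normal(size=(d_in, d_hid)),
               rng.normal(size=(d_hid, d_hid)),
               np.zeros(d_hid))
```

Keras layers like `SimpleRNN`, `LSTM`, and `GRU` implement the same loop with learned weights, batching, and gating variants.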
Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
Official PyTorch implementation of the preprint "Unsupervised Discovery of Interpretable Directions in the GAN Latent Space"
Code: https://github.com/anvoynov/GANLatentDiscovery
Paper: https://arxiv.org/abs/2002.03754
The popularity of machine learning is so great that people try to use it wherever they can. Some attempts to replace classical approaches with neural networks turn out to be unsuccessful. This time we'll consider machine learning applied to building effective static code analyzers for finding bugs and potential vulnerabilities.
The PVS-Studio team believes that with machine learning, there are many pitfalls lurking in code analysis tasks.
https://bit.ly/2vqmeV7
Learning to See Transparent Objects
ClearGrasp uses 3 neural networks: a network to estimate surface normals, one for occlusion boundaries (depth discontinuities), and one that masks transparent objects
Google research: https://ai.googleblog.com/2020/02/learning-to-see-transparent-objects.html
Code: https://github.com/Shreeyak/cleargrasp
Dataset: https://sites.google.com/view/transparent-objects
3D Shape Estimation of Transparent Objects for Manipulation: https://sites.google.com/view/cleargrasp