Diving into machine learning and Data Science.

We show how to run any LLM, explained in simple terms.

For all questions: @haarrp

@itchannels_telegram - 🔥 best channels

RKN registry: clck.ru/3Fmqri
The Annotated Transformer

The Transformer is a model that uses attention to boost the speed with which sequence-to-sequence models can be trained.

https://nlp.seas.harvard.edu/2018/04/03/attention.html

The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/

Habr: https://habr.com/ru/post/486358/
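
For reference, a minimal sketch of the scaled dot-product attention at the heart of the Transformer (PyTorch, following the notation of the Annotated Transformer; the projection layers and masking are omitted):

```python
# Minimal scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V
import math
import torch

def attention(query, key, value):
    # query, key, value: (batch, heads, seq_len, d_k)
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)
    weights = scores.softmax(dim=-1)        # attention distribution over positions
    return torch.matmul(weights, value), weights

q = k = v = torch.randn(1, 8, 10, 64)       # self-attention: same tensor for q, k, v
out, attn = attention(q, k, v)
print(out.shape, attn.shape)                # (1, 8, 10, 64) and (1, 8, 10, 10)
```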
TensorFlow Lattice: Flexible, controlled and interpretable ML

The library enables you to inject domain knowledge into the learning process through common-sense or policy-driven shape constraints.

https://blog.tensorflow.org/2020/02/tensorflow-lattice-flexible-controlled-and-interpretable-ML.html

Video: https://www.youtube.com/watch?v=ABBnNjbjv2Q&feature=emb_logo

Github: https://github.com/tensorflow/lattice
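
A minimal sketch of a shape constraint with the tfl.layers API: a piecewise-linear calibration layer constrained to be monotonically increasing, which is how common-sense knowledge such as "a higher feature value should never lower the score" is injected. Feature range and keypoints here are illustrative:

```python
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

model = tf.keras.Sequential([
    # Calibrate a single scalar feature with a piecewise-linear function
    # constrained to be monotonically increasing (the shape constraint).
    tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 100.0, num=10),  # illustrative feature range
        output_min=0.0,
        output_max=1.0,
        monotonicity='increasing'),
    tf.keras.layers.Dense(1),
])
model.compile(loss='mse', optimizer='adam')
```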
Recurrent Neural Networks (RNN) with Keras

Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.

https://www.tensorflow.org/guide/keras/rnn

Source code: https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/rnn.ipynb

Habr: https://habr.com/ru/post/487808/
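
A minimal sketch of a sequence classifier built from the layers the guide covers (vocabulary size, units, and class count are illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),  # token ids -> vectors
    tf.keras.layers.LSTM(128),                                  # summarize the sequence
    tf.keras.layers.Dense(10, activation='softmax'),            # 10-class output
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()
```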
Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

Official PyTorch implementation of the preprint "Unsupervised Discovery of Interpretable Directions in the GAN Latent Space".

Code: https://github.com/anvoynov/GANLatentDiscovery


Paper: https://arxiv.org/abs/2002.03754
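
A conceptual sketch of how a discovered direction is used at inference time: shift a latent code along direction k and compare the generator outputs. `G` and `directions` here are placeholders, not the repository's actual interface:

```python
import torch

def edit(G, z, directions, k, scale=3.0):
    # z: (1, latent_dim); directions: (num_directions, latent_dim), unit norm
    shift = scale * directions[k].unsqueeze(0)
    return G(z), G(z + shift)   # original image vs. image moved along direction k
```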
The popularity of machine learning is so great that people try to use it wherever they can, and some attempts to replace classical approaches with neural networks turn out to be unsuccessful. This time we consider machine learning applied to building effective static code analyzers for finding bugs and potential vulnerabilities.

The PVS-Studio team believes that many pitfalls lurk in applying machine learning to code analysis tasks.

https://bit.ly/2vqmeV7
Learning to See Transparent Objects

ClearGrasp uses three neural networks: one to estimate surface normals, one for occlusion boundaries (depth discontinuities), and one that masks transparent objects.

Google research: https://ai.googleblog.com/2020/02/learning-to-see-transparent-objects.html

Code: https://github.com/Shreeyak/cleargrasp

Dataset: https://sites.google.com/view/transparent-objects

3D Shape Estimation of Transparent Objects for Manipulation: https://sites.google.com/view/cleargrasp
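
A conceptual sketch of the pipeline described above, with the three networks and the depth-completion step as placeholder callables rather than the repository's actual API:

```python
def complete_depth(rgb, raw_depth, normal_net, boundary_net, mask_net, solver):
    normals = normal_net(rgb)           # per-pixel surface normals
    boundaries = boundary_net(rgb)      # occlusion boundaries / depth discontinuities
    mask = mask_net(rgb)                # where the sensor depth is unreliable (transparent)
    raw_depth = raw_depth * (1 - mask)  # drop depth readings on transparent surfaces
    # A global optimization fills the masked region so the completed depth is
    # consistent with the predicted normals and boundaries.
    return solver(raw_depth, normals, boundaries, mask)
```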
fastai: A Layered API for Deep Learning

https://www.fast.ai//2020/02/13/fastai-A-Layered-API-for-Deep-Learning/

Complete documentation and tutorials:
https://docs.fast.ai/
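
As a taste of the high-level layer of the API, a minimal sketch following the canonical pets example from the fastai documentation (dataset and architecture choices are illustrative):

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS) / 'images'

def is_cat(fname):
    # In the Oxford-IIIT Pets naming scheme, cat breeds are capitalized.
    return fname[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```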
Capsules with Inverted Dot-Product Attention Routing

New routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent’s state and the child’s vote.

Code: https://github.com/apple/ml-capsules-inverted-attention-routing

Paper: https://openreview.net/pdf?id=HJe6uANtwH
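
A conceptual sketch of one routing iteration in that spirit: agreement is the dot product between each parent's current state and each child's vote, normalized over parents. The layer norm and the learned vote transforms of the full method are omitted:

```python
import torch

def routing_step(votes, parent_state):
    # votes:        (num_children, num_parents, dim)  child -> parent votes
    # parent_state: (num_parents, dim)                current parent capsules
    agreement = torch.einsum('cpd,pd->cp', votes, parent_state)  # dot-product agreement
    routing = agreement.softmax(dim=1)        # each child distributes itself over parents
    new_state = torch.einsum('cp,cpd->pd', routing, votes)       # weighted sum of votes
    return new_state, routing
```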
GANILLA: Generative Adversarial Networks for Image to Illustration Translation.

Github: https://github.com/giddyyupp/ganilla

Dataset: https://github.com/giddyyupp/ganilla/blob/master/docs/datasets.md

Paper: https://arxiv.org/abs/2002.05638v1
Detecting spam calls with machine learning methods

https://habr.com/ru/company/ru_mts/blog/488828/
Deep learning of dynamical attractors from time series measurements

Embed complex time series using autoencoders and a loss function based on penalizing false-nearest-neighbors.

Code: https://github.com/williamgilpin/fnn

Paper: https://arxiv.org/abs/2002.05909
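
A conceptual sketch of the training setup: a plain autoencoder over time-series windows, with the false-nearest-neighbor penalty left as a placeholder (see the fnn repository for the actual regularizer):

```python
import torch
import torch.nn as nn

class TimeSeriesAE(nn.Module):
    def __init__(self, window=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, window))

    def forward(self, x):
        z = self.encoder(x)              # latent coordinates of the attractor
        return self.decoder(z), z

def loss_fn(model, x, fnn_penalty, lam=0.1):
    # Reconstruction loss plus the false-nearest-neighbor regularizer on z.
    x_hat, z = model(x)
    return nn.functional.mse_loss(x_hat, x) + lam * fnn_penalty(z)
```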
The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding

Code: https://github.com/namisan/mt-dnn

Paper: https://arxiv.org/abs/2002.07972v1
Implementation of the BASIS algorithm for source separation with deep generative priors

This repository provides an implementation of the BASIS (Bayesian Annealed SIgnal Source) separation algorithm. BASIS separation uses annealed Langevin dynamics to sample from the posterior distribution of source components given a mixed signal.


Github: https://github.com/jthickstun/basis-separation

Paper: https://arxiv.org/abs/2002.07942
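
A conceptual sketch of annealed Langevin dynamics for separation: at each noise level, take Langevin steps driven by each source prior's score function while keeping the estimates consistent with the observed mixture. The score functions and the simple projection onto the mixture constraint are assumptions for illustration, not the repository's exact update:

```python
import torch

def basis_like_separation(mixture, score_fns, noise_levels, steps=100, eps=2e-5):
    # One estimate per source, initialized at random.
    sources = [torch.randn_like(mixture) for _ in score_fns]
    for sigma in noise_levels:                       # anneal noise from large to small
        step = eps * (sigma / noise_levels[-1]) ** 2
        for _ in range(steps):
            for i, score in enumerate(score_fns):
                noise = torch.randn_like(mixture)
                sources[i] = (sources[i]
                              + 0.5 * step * score(sources[i], sigma)  # prior score step
                              + (step ** 0.5) * noise)                 # Langevin noise
            # Keep the estimates consistent with the observed linear mixture.
            residual = mixture - sum(sources)
            sources = [s + residual / len(sources) for s in sources]
    return sources
```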
A Gentle Introduction to the Fbeta-Measure for Machine Learning

https://machinelearningmastery.com/fbeta-measure-for-machine-learning/
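
A quick worked example of the measure itself: the weighted harmonic mean of precision and recall, where beta < 1 favors precision and beta > 1 favors recall:

```python
def fbeta(precision, recall, beta=1.0):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(fbeta(0.8, 0.5, beta=1.0))  # F1   ≈ 0.615
print(fbeta(0.8, 0.5, beta=2.0))  # F2   ≈ 0.541 (recall weighted more heavily)
print(fbeta(0.8, 0.5, beta=0.5))  # F0.5 ≈ 0.714 (precision weighted more heavily)
```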