🎇 Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning
https://ai.googleblog.com/2020/03/announcing-tensorflow-quantum-open.html
Lagrangian Neural Networks
In contrast to Hamiltonian Neural Networks, these models do not require canonical coordinates and perform well in situations where generalized momentum is difficult to compute.
Code: https://github.com/MilesCranmer/lagrangian_nns
Paper: https://arxiv.org/abs/2003.04630v1
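The idea is easy to illustrate outside the paper's JAX implementation: given any Lagrangian L(q, q̇), the Euler-Lagrange equation gives the acceleration as q̈ = (∂²L/∂q̇²)⁻¹(∂L/∂q − (∂²L/∂q∂q̇)·q̇). A minimal 1-D sketch using finite differences in place of autodiff (a simplification for illustration, not the repository's code):

```python
def qddot_from_lagrangian(L, q, qd, eps=1e-4):
    """Acceleration from the Euler-Lagrange equation in 1-D:
    qdd = (d2L/dqd2)^-1 * (dL/dq - d2L/(dq dqd) * qd),
    with derivatives taken by central finite differences."""
    dL_dq = (L(q + eps, qd) - L(q - eps, qd)) / (2 * eps)
    d2L_dqd2 = (L(q, qd + eps) - 2 * L(q, qd) + L(q, qd - eps)) / eps**2
    d2L_dq_dqd = (L(q + eps, qd + eps) - L(q + eps, qd - eps)
                  - L(q - eps, qd + eps) + L(q - eps, qd - eps)) / (4 * eps**2)
    return (dL_dq - d2L_dq_dqd * qd) / d2L_dqd2

# Harmonic oscillator: L = kinetic - potential = 0.5*qd^2 - 0.5*q^2,
# whose true dynamics are qdd = -q.
L_ho = lambda q, qd: 0.5 * qd**2 - 0.5 * q**2
```

For the harmonic oscillator this recovers q̈ ≈ −q up to finite-difference error; the paper's contribution is to parameterize L with a neural network and take these derivatives with autodiff instead.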
Google just announced their new TensorFlow Developer Certificate, which is a great way to showcase your TF skills. Check it out
https://www.tensorflow.org/certificate
On the Texture Bias for Few-Shot CNN Segmentation
This repository contains the code for a deep encoder-decoder network for few-shot semantic segmentation, with state-of-the-art results on the FSS-1000 dataset and PASCAL-5i.
Code: https://github.com/rezazad68/fewshot-segmentation
Paper: https://arxiv.org/abs/2003.04052v1
Download 1000-class dataset
🌐 Fast and Easy Infinitely Wide Networks with Neural Tangents
Neural Tangents is a high-level neural network API for specifying complex, hierarchical neural networks of both finite and infinite width. Neural Tangents allows researchers to define, train, and evaluate infinite networks as easily as finite ones.
https://ai.googleblog.com/2020/03/fast-and-easy-infinitely-wide-networks.html
Colab notebook: https://colab.research.google.com/github/google/neural-tangents/blob/master/notebooks/neural_tangents_cookbook.ipynb#scrollTo=Lt74vgCVNN2b
Code: https://github.com/google/neural-tangents
Paper: https://arxiv.org/abs/1912.02803
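To make "infinite width" concrete: for a single infinitely wide ReLU layer with standard Gaussian weights, the network's output covariance has a closed form (the degree-1 arc-cosine kernel of Cho & Saul), which is the kind of quantity Neural Tangents computes automatically for whole architectures. A plain-NumPy sketch of that one kernel (an illustration, not the library's API):

```python
import numpy as np

def relu_nngp_kernel(x, y):
    """Closed-form covariance E[relu(w.x) * relu(w.y)] for w ~ N(0, I):
    the degree-1 arc-cosine kernel, i.e. what a single infinitely wide
    ReLU layer computes in expectation over its random weights."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (nx * ny / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * cos_t)
```

A finite ReLU layer with many units approaches this value; the library generalizes the same computation to deep, hierarchical models and to NTK training dynamics.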
FastText: stepping through the code
fastText is a library for efficient learning of word representations and sentence classification.
Article: https://medium.com/@mariamestre/fasttext-stepping-through-the-code-259996d6ebc4
Habr (in Russian): https://habr.com/ru/post/492432/
Code: https://github.com/facebookresearch/fastText
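Part of what the article steps through is fastText's subword model: a word is represented by itself plus its character n-grams, with `<` and `>` as boundary markers. The extraction step can be sketched as follows (illustrative only; the library additionally hashes n-grams into a fixed number of buckets):

```python
def char_ngrams(word, nmin=3, nmax=6):
    """Character n-grams of '<word>', fastText-style; the full word
    token is kept as a separate entry in the real model."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(nmin, nmax + 1)
            for i in range(len(w) - n + 1)]
```

The word vector is then the sum of the vectors of these n-grams, which is what lets fastText produce embeddings for out-of-vocabulary words.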
Neural Networks are Function Approximation Algorithms
https://machinelearningmastery.com/neural-networks-are-function-approximators/
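The point is easy to demonstrate: a small network-like model can approximate an unknown target function from input-output pairs alone. A sketch using a random tanh feature layer with least-squares output weights (a stand-in for an SGD-trained MLP, chosen only to keep the example to a few lines):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]   # sampled inputs
y = np.sin(x).ravel()                          # "unknown" target function

# A random tanh layer stands in for a trained hidden layer.
W, b = rng.normal(size=(1, 100)), rng.normal(size=100)
H = np.tanh(x @ W + b)

# Fit only the output weights, by least squares.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
max_err = np.max(np.abs(H @ coef - y))         # worst-case fit error
```

Even without training the hidden layer, the fitted model tracks sin(x) closely on the sampled interval, which is the function-approximation claim in miniature.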
Magenta: Music and Art Generation with Machine Intelligence
Magenta is a research project exploring the role of machine learning in the process of creating art and music.
Github: https://github.com/tensorflow/magenta
Colab notebooks: https://colab.research.google.com/notebooks/magenta/hello_magenta/hello_magenta.ipynb
Paper: https://arxiv.org/abs/1902.08710v2
Introducing Dreamer: Scalable Reinforcement Learning Using World Models
Dreamer is a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination.
https://ai.googleblog.com/2020/03/introducing-dreamer-scalable.html
Paper: https://arxiv.org/abs/1912.01603
Blog: https://dreamrl.github.io/
Few-Shot Object Detection (FsDet)
Detecting rare objects from a few examples is an emerging problem.
In addition to existing benchmarks, we introduce new benchmarks on three datasets: PASCAL VOC, COCO, and LVIS. We sample multiple groups of few-shot training examples for multiple runs of the experiments and report evaluation results on both the base classes and the novel classes.
Github: https://github.com/ucbdrive/few-shot-object-detection
Paper: https://arxiv.org/abs/2003.06957
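The repeated K-shot sampling described above can be sketched as follows (hypothetical data layout; the repository ships its own dataset-specific sampling scripts):

```python
import random

def sample_k_shot(annotations, k, seed):
    """Pick k training examples per class; different seeds give the
    different sample groups used for repeated experiment runs.

    annotations: dict mapping class name -> list of example ids.
    """
    rng = random.Random(seed)
    return {cls: rng.sample(ids, k) for cls, ids in annotations.items()}
```

Running the experiment once per seed and averaging the results is what makes few-shot evaluations less sensitive to a lucky or unlucky choice of support examples.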
Scene Text Recognition via Transformer
The method uses convolutional feature maps as word-embedding input to a transformer.
Github: https://github.com/fengxinjie/Transformer-OCR
Paper: https://arxiv.org/abs/2003.08077
Transformer source code: https://nlp.seas.harvard.edu/2018/04/03/attention.html
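Feeding a convolutional feature map to a transformer "as word embeddings" amounts to flattening the spatial grid into a sequence of channel-dimension tokens. A shape-level sketch with hypothetical sizes (the paper's actual backbone and dimensions may differ):

```python
import numpy as np

# Hypothetical CNN output for one image: C=512 channels over an 8x32 grid.
feature_map = np.zeros((1, 512, 8, 32))

b, c, h, w = feature_map.shape
# Each of the h*w spatial positions becomes one c-dimensional "word
# embedding", giving a sequence of h*w tokens for the transformer.
tokens = feature_map.reshape(b, c, h * w).transpose(0, 2, 1)
```

From here, positional encodings and the standard encoder-decoder attention apply exactly as in machine translation, with characters as the output vocabulary.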
High-Resolution Daytime Translation Without Domain Labels
HiDT combines a generative image-to-image model and a new upsampling scheme that allows applying image translation at high resolution.
https://saic-mdal.github.io/HiDT/
Paper: https://arxiv.org/abs/2003.08791
Video: https://www.youtube.com/watch?v=DALQYKt-GJc&feature=youtu.be
PyTorch Tutorial: How to Develop Deep Learning Models with Python
https://machinelearningmastery.com/pytorch-tutorial-develop-deep-learning-models/
NeRF: Neural Radiance Fields
The algorithm represents a scene using a fully-connected (non-convolutional) deep network whose input is a single continuous 5D coordinate: spatial location (x, y, z) and viewing direction (θ, φ).
https://www.matthewtancik.com/nerf
Tensorflow implementation: https://github.com/bmild/nerf
Paper: https://arxiv.org/abs/2003.08934v1
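Before the 5D coordinate enters the MLP, NeRF lifts each component with a sinusoidal positional encoding γ(p) = (sin(2⁰πp), cos(2⁰πp), …, sin(2^(L−1)πp), cos(2^(L−1)πp)), which is what lets the network represent high-frequency detail. A NumPy sketch:

```python
import numpy as np

def positional_encoding(p, L=10):
    """NeRF-style encoding of coordinates p (array-like), returning
    [sin(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^0 pi p), ...]
    along the last axis (2L values per input coordinate)."""
    p = np.asarray(p, dtype=float)
    freqs = (2.0 ** np.arange(L)) * np.pi      # 2^k * pi for k = 0..L-1
    angles = p[..., None] * freqs              # shape (..., L)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
```

In the paper, L=10 is used for the spatial location and L=4 for the viewing direction.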
Deep unfolding network for image super-resolution
The deep unfolding network inherits the flexibility of model-based methods, super-resolving blurry, noisy images for different scale factors via a single model, while maintaining the advantages of learning-based methods.
Github: https://github.com/cszn/USRNet
Paper: https://arxiv.org/pdf/2003.10428.pdf
Improved Techniques for Training Single-Image GANs
The latest convolutional layers are trained with a given learning rate, while previously existing convolutional layers are trained with a smaller learning rate.
https://www.tobiashinz.com/2020/03/24/improved-techniques-for-training-single-image-gans.html
Code: https://github.com/tohinz/ConSinGAN
Paper: https://arxiv.org/abs/2003.11512
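The schedule can be sketched as follows: if stage n is the newest one being grown, stage i trains at lr·δ^(n−i) for some decay factor δ < 1 (the name `decay` and the default value are assumptions for illustration; the paper exposes this as a scaling hyperparameter):

```python
def stage_learning_rates(base_lr, num_stages, decay=0.1):
    """Per-stage learning rates: the newest stage trains at base_lr,
    and each earlier stage is scaled down by another factor of `decay`
    (ConSinGAN-style schedule)."""
    newest = num_stages - 1
    return [base_lr * decay ** (newest - i) for i in range(num_stages)]
```

Earlier stages thus keep adapting slightly instead of being frozen, which the authors report improves coherence across scales.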
MoCo: Momentum Contrast for Unsupervised Visual Representation Learning
Github: https://github.com/facebookresearch/moco
Paper: https://arxiv.org/abs/1911.05722
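The "momentum" in MoCo is an exponential moving average of the query encoder's weights into the key encoder, θ_k ← m·θ_k + (1−m)·θ_q with m close to 1 (0.999 in the paper), which keeps the keys in the dictionary queue consistent. A parameter-level sketch with plain Python lists standing in for tensors:

```python
def momentum_update(key_params, query_params, m=0.999):
    """In-place EMA update of the key encoder's parameters:
    theta_k <- m * theta_k + (1 - m) * theta_q.
    Lists of floats here; tensors in the real PyTorch code."""
    for i, (k, q) in enumerate(zip(key_params, query_params)):
        key_params[i] = m * k + (1 - m) * q
    return key_params
```

Only the query encoder receives gradients; the key encoder evolves smoothly through this update, which is the core trick that makes the large queue of negatives usable.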
New dataset from Google
The Taskmaster-2 dataset consists of 17,289 dialogs.
https://research.google/tools/datasets/taskmaster-2/
iTAML: An Incremental Task-Agnostic Meta-learning Approach
iTAML hypothesizes that generalization is a key factor for continual learning. The code is implemented in PyTorch and includes the incremental-learning domain experiments.
Code: https://github.com/brjathu/iTAML
Paper: https://arxiv.org/abs/2003.11652v1
🎲 Probabilistic Regression for Visual Tracking
A general Python framework for training and running visual object trackers, based on PyTorch.
Code: https://github.com/visionml/pytracking
Paper: https://arxiv.org/abs/2003.12565