Deep causal representation learning for unsupervised domain adaptation
Moraffah et al.: https://arxiv.org/abs/1910.12417
#DeepLearning #MachineLearning #UnsupervisedLearning
arXiv.org
Deep causal representation learning for unsupervised domain adaptation
Studies show that the representations learned by deep neural networks can be transferred to similar prediction tasks in other domains for which we do not have enough labeled data. However, as we...
Neural Network Distiller: A Python Package For DNN Compression Research
Zmora et al.: https://arxiv.org/abs/1910.12232
#DeepLearning #MachineLearning #Python
arXiv.org
Neural Network Distiller: A Python Package For DNN Compression Research
This paper presents the philosophy, design and feature-set of Neural Network
Distiller, an open-source Python package for DNN compression research.
Distiller is a library of DNN compression...
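Distiller's actual API is not reproduced here. As a hedged illustration of the kind of compression primitive such a library packages up (its scheduling, sensitivity analysis and quantization tooling build on top of steps like this), the sketch below applies one-shot magnitude pruning to a toy PyTorch model. The model, layer choices and sparsity level are illustrative assumptions, not Distiller code.

```python
# Illustrative one-shot magnitude pruning in plain PyTorch.
# This is NOT Distiller's API; it only sketches the kind of
# compression primitive a library like Distiller provides.
import torch
import torch.nn as nn

def magnitude_prune(module: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of every Linear/Conv2d layer."""
    for layer in module.modules():
        if isinstance(layer, (nn.Linear, nn.Conv2d)):
            w = layer.weight.data
            k = int(sparsity * w.numel())
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            mask = (w.abs() > threshold).float()
            layer.weight.data.mul_(mask)  # prune in place

# Hypothetical toy model, just to exercise the routine.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 30 * 30, 10))
magnitude_prune(model, sparsity=0.5)
zeros = sum((p == 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"sparsity across all parameters after pruning: {zeros / total:.2%}")
```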
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes
Greg Yang : https://arxiv.org/abs/1910.12478
#ArtificialIntelligence #DeepLearning #MachineLearning
arXiv.org
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks...
Wide neural networks with random weights and biases are Gaussian processes, as originally observed by Neal (1995) and more recently by Lee et al. (2018) and Matthews et al. (2018) for deep...
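As a quick, hedged sanity check of this claim (a simulation of the classic observation, not the paper's tensor-program machinery), the sketch below samples many wide one-hidden-layer networks with i.i.d. Gaussian weights and checks that the output at a fixed input looks Gaussian. The width, weight variances and test input are illustrative choices.

```python
# Empirical check that a wide random network's output is approximately Gaussian
# (a sketch of the classic observation, not the paper's general proof).
import numpy as np

rng = np.random.default_rng(0)
width, n_nets = 4096, 2000
x = rng.standard_normal(10)          # a fixed test input of dimension 10
sigma_w, sigma_b = 1.0, 0.1

outputs = []
for _ in range(n_nets):
    W1 = rng.standard_normal((width, x.size)) * sigma_w / np.sqrt(x.size)
    b1 = rng.standard_normal(width) * sigma_b
    h = np.tanh(W1 @ x + b1)          # hidden layer activations
    W2 = rng.standard_normal(width) * sigma_w / np.sqrt(width)
    b2 = rng.standard_normal() * sigma_b
    outputs.append(W2 @ h + b2)       # scalar network output

outputs = np.array(outputs)
# For a Gaussian, skewness and excess kurtosis should both be near 0.
mean, var = outputs.mean(), outputs.var()
skew = ((outputs - mean) ** 3).mean() / var ** 1.5
kurt = ((outputs - mean) ** 4).mean() / var ** 2 - 3
print(f"mean={mean:.3f} var={var:.3f} skew={skew:.3f} excess_kurtosis={kurt:.3f}")
```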
Top reads in Data Science for today.
Beginner :
1. Understanding the Bias-Variance Tradeoff: for a beginner, nothing is more important than understanding the bias-variance tradeoff.
Read at : https://scott.fortmann-roe.com/docs/BiasVariance.html
2. The Complete Guide to Resampling Methods and Regularization in Python (a minimal scikit-learn sketch follows this list):
Read at : https://towardsdatascience.com/the-complete-guide-to-resampling-methods-and-regularization-in-python-5037f4f8ae23
Intermediate :
1. Choosing a Machine Learning Model: The part art, part science of picking the perfect machine learning model.
Read at : https://towardsdatascience.com/part-i-choosing-a-machine-learning-model-9821eecdc4ce
2. Data Science’s Most Misunderstood Hero: data science is the kind of beast where excellence in one area beats mediocrity in two. Each of the three data science disciplines has its own excellence: statisticians bring rigor, ML engineers bring performance, and analysts bring speed.
Read at : https://towardsdatascience.com/data-sciences-most-misunderstood-hero-2705da366f40
Advanced :
1. Top 10 roles in AI and data science: which roles to hire for if you’re keen to make your data useful with a decision intelligence engineering approach.
Read at : https://hackernoon.com/top-10-roles-for-your-data-science-team-e7f05d90d961
2. Trade and Invest Smarter — The Reinforcement Learning Way : Fantastic introduction to TensorTrade — the Python framework for trading and investing using deep reinforcement learning.
#androidabcd #instilllearning AndroidAbcd Instill Learning
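As flagged in the Beginner list above, here is a minimal scikit-learn sketch combining resampling (k-fold cross-validation) with regularization (Ridge regression). It is not taken from the linked guide; the synthetic data and the alpha grid are placeholder choices.

```python
# Minimal illustration of resampling (k-fold CV) + regularization (Ridge);
# data and alpha values are placeholders, not from the linked article.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

for alpha in [0.01, 1.0, 100.0]:     # regularization strengths to compare
    scores = cross_val_score(Ridge(alpha=alpha), X, y, cv=5, scoring="r2")
    print(f"alpha={alpha:<6} mean CV R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```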
Fortmann-Roe
Understanding the Bias-Variance Tradeoff
When we discuss prediction models, prediction errors can be decomposed into two main subcomponents we care about: error due to bias and error due to variance. There is a tradeoff between a model's ability to minimize bias and variance. Understanding these…
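For reference, the decomposition the article describes can be written out for squared error as below (a standard identity, assuming y = f(x) + ε with zero-mean noise of variance σ² and a model f̂ trained on a random dataset D):

```latex
% Expected squared prediction error at a point x,
% with y = f(x) + \varepsilon, \; \mathbb{E}[\varepsilon]=0, \; \operatorname{Var}(\varepsilon)=\sigma^2.
\mathbb{E}_{D,\varepsilon}\!\left[(y - \hat{f}(x))^2\right]
  = \underbrace{\bigl(\mathbb{E}_D[\hat{f}(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\bigl(\hat{f}(x) - \mathbb{E}_D[\hat{f}(x)]\bigr)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible error}}
```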
Yoshua Bengio shares his views on Deep Learning and Cognition
our intelligence is not gained through a big bag of tricks, but rather through mechanisms for specifically acquiring knowledge
bio-inspired techniques: curriculum learning, cultural evolution, lateral connections, attention, distributed representations
The human ability to generalize gives us a more powerful understanding of the world than machines currently have
three computational aspects of consciousness: access consciousness, self-consciousness and qualia (subjective perception)
Deep learning needs:
generalize faster and "further"
additional compositionality from reasoning & consciousness
causal structure
unsupervised exploration
disentangled representations
attention, intention
Link
https://blog.re-work.co/deep-learning-and-cognition-a-keynote-from-yoshua-bengio/
RE•WORK Blog - AI & Deep Learning News
Deep Learning & Cognition - A Keynote from Yoshua Bengio
Keynote summary and video from Deep Learning and AI pioneer, Yoshua Bengio.
Advanced Deep Learning Topics
https://lilianweng.github.io/lil-log/
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs
Alexia Jolicoeur-Martineau and Ioannis Mitliagkas : https://arxiv.org/abs/1910.06922
#GenerativeAdversarialNetworks #RelativisticGAN #SVM
Evaluating the Factual Consistency of Abstractive Text Summarization
Kryscinski et al.: https://arxiv.org/abs/1910.12840
#ArtificialIntelligence #DeepLearning #NaturalLanguageProcessing
arXiv.org
Evaluating the Factual Consistency of Abstractive Text Summarization
Currently used metrics for assessing summarization algorithms do not account for whether summaries are factually consistent with source documents. We propose a weakly-supervised, model-based...
From ICCV 2019: Great applications for the 3D scanning industry!
https://www.profillic.com/paper/arxiv:1909.00883
FACSIMILE: Fast and Accurate Scans From an Image in Less Than a Second
Yoshua Bengio on Human vs Machine Intelligence
https://medium.com/syncedreview/yoshua-bengio-on-human-vs-machine-intelligence-5f55ec8de9cf
Grandmaster level in StarCraft II using multi-agent reinforcement learning
#AI #artificialintelligence
#DeepLearning #ReinforcementLearning
#deepmind
Blog: https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning
https://www.nature.com/articles/s41586-019-1724-z
Deepmind
AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning
AlphaStar is the first AI to reach the top league of a widely popular esport without any game restrictions. This January, a preliminary version of AlphaStar challenged two of the world's top players in StarCraft II, one of the most enduring and popular real…
GQN — Generative Query Network
The agent infers the image seen from a query viewpoint based on prior knowledge of the environment gathered from other viewpoints
GQN comprises three architectures:
The representation architecture takes images from different viewpoints to yield a concise abstract scene representation.
The generation architecture generates an image for a new query viewpoint.
The inference architecture, which serves as the encoder of a variational autoencoder, provides a way to train the other two architectures in an unsupervised manner.
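The sketch below is a heavily simplified, hedged rendering of this three-part layout in PyTorch, just to make the data flow concrete: tiny MLPs stand in for the paper's convolutional representation network and recurrent generator, a unit-Gaussian prior replaces the learned conditional prior, and all dimensions are illustrative.

```python
# Highly simplified GQN-style sketch (MLPs instead of the paper's ConvDRAW
# generator; a unit-Gaussian prior instead of the learned conditional prior).
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG, VIEW, R, Z = 64, 7, 32, 16   # flattened-image, viewpoint, repr, latent dims (illustrative)

repr_net  = nn.Sequential(nn.Linear(IMG + VIEW, 64), nn.ReLU(), nn.Linear(64, R))
infer_net = nn.Sequential(nn.Linear(IMG + VIEW + R, 64), nn.ReLU(), nn.Linear(64, 2 * Z))
gen_net   = nn.Sequential(nn.Linear(Z + VIEW + R, 64), nn.ReLU(), nn.Linear(64, IMG))

def elbo(context_imgs, context_views, query_img, query_view):
    # 1) Representation: encode each context (image, viewpoint) pair and sum.
    r = repr_net(torch.cat([context_imgs, context_views], dim=-1)).sum(dim=0)
    # 2) Inference (VAE encoder): q(z | query image, query viewpoint, r).
    stats = infer_net(torch.cat([query_img, query_view, r], dim=-1))
    mu, logvar = stats.chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    # 3) Generation: reconstruct the query image from z, query viewpoint and r.
    recon = gen_net(torch.cat([z, query_view, r], dim=-1))
    rec_loss = F.mse_loss(recon, query_img, reduction="sum")
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
    return rec_loss + kl   # minimize to train all three networks jointly

# Toy call with random "scene" data, just to show the shapes involved.
loss = elbo(torch.randn(3, IMG), torch.randn(3, VIEW), torch.randn(IMG), torch.randn(VIEW))
loss.backward()
```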
Blogs
Very intuitive explanation:
https://xlnwel.github.io/blog/representation%20learning/GQN/
DeepMind: https://deepmind.com/blog/article/neural-scene-representation-and-rendering