Single Sample Feature Importance: An Interpretable Algorithm for Low-Level Feature Analysis. https://arxiv.org/abs/1911.11901
AttentionGAN: Unpaired Image-to-Image Translation using Attention-Guided Generative Adversarial Networks. https://arxiv.org/abs/1911.11897
Visual Physics: Discovering Physical Laws from Videos. https://arxiv.org/abs/1911.11893
What's Hidden in a Randomly Weighted Neural Network? by Ali Farhadi, Mohammad Rastegari, Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi
Ramanujan et al.: https://arxiv.org/abs/1911.13299
#Artificialintelligence #DeepLearning #MachineLearning join https://t.iss.one/ArtificialIntelligenceArticles
Animesh Karnewar:
We are releasing the new version of our MSG-GAN work today: https://arxiv.org/abs/1903.06048
Code at https://github.com/akanimax/msg-stylegan-tf.
We present a much more thorough experimental evaluation of the method and also incorporate the multi-scale modifications into StyleGAN. It was an honor collaborating with Oliver Wang. Special thanks to Alexia Jolicoeur-Martineau and Michael Hofmann for the encouragement and support.
We also experiment with our newly created Indian Celebs dataset (very small, ~3K images) and get very nice results.
Please do check it out. Any feedback / suggestions are most welcome.
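For readers curious how the multi-scale connections fit together, here is a minimal sketch of the MSG-GAN idea. It is not the released code; the PyTorch dependency, module sizes, and names are all assumptions made for illustration. The generator exposes an RGB output at every intermediate resolution, and the discriminator consumes the whole pyramid rather than only the final image.

import torch
import torch.nn as nn

class MSGGenerator(nn.Module):
    def __init__(self, latent_dim=128, channels=64, num_scales=4):
        super().__init__()
        self.project = nn.Linear(latent_dim, channels * 4 * 4)
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.LeakyReLU(0.2),
            )
            for _ in range(num_scales - 1)
        )
        # one "to RGB" head per resolution (4x4, 8x8, 16x16, ...)
        self.to_rgb = nn.ModuleList(nn.Conv2d(channels, 3, 1) for _ in range(num_scales))

    def forward(self, z):
        x = self.project(z).view(z.size(0), -1, 4, 4)
        images = [self.to_rgb[0](x)]
        for block, head in zip(self.blocks, self.to_rgb[1:]):
            x = block(x)
            images.append(head(x))
        return images  # images from coarse to fine

class MSGDiscriminator(nn.Module):
    def __init__(self, channels=64, num_scales=4):
        super().__init__()
        # one "from RGB" entry point per resolution, injected while downsampling
        self.from_rgb = nn.ModuleList(nn.Conv2d(3, channels, 1) for _ in range(num_scales))
        self.merge = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels * 2, channels, 3, padding=1), nn.LeakyReLU(0.2))
            for _ in range(num_scales - 1)
        )
        self.pool = nn.AvgPool2d(2)
        self.head = nn.Linear(channels * 4 * 4, 1)

    def forward(self, images):
        imgs = list(reversed(images))            # finest resolution first
        x = self.from_rgb[0](imgs[0])
        for img, from_rgb, merge in zip(imgs[1:], self.from_rgb[1:], self.merge):
            x = self.pool(x)                                  # step down to the next resolution
            x = merge(torch.cat([x, from_rgb(img)], dim=1))   # inject that scale's image
        return self.head(x.flatten(1))

G, D = MSGGenerator(), MSGDiscriminator()
fakes = G(torch.randn(2, 128))                   # list of 4x4, 8x8, 16x16, 32x32 images
print([tuple(f.shape) for f in fakes], D(fakes).shape)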
Fast Task Inference with Variational Intrinsic Successor Features
A novel algorithm that learns controllable features which can be leveraged to provide enhanced generalization and fast task inference through the successor-feature framework.
The fundamental problem is the need to generalize between different latent codes, a task to which neural networks alone seem poorly suited.
The paper addresses this generalization and slow-inference problem by making use of successor features, and shows that variational-intrinsic-control / diversity-is-all-you-need style algorithms can be adapted to learn precisely the features that the successor-feature framework requires.
PAPER
https://arxiv.org/pdf/1906.05030.pdf
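The reason successor features give fast task inference is worth spelling out: if the reward is (approximately) linear in some features, r(s) = phi(s) @ w, then Q-values are linear in the successor features, Q(s, a) = psi(s, a) @ w, so adapting to a new task reduces to regressing w from a handful of observed rewards. A toy tabular NumPy sketch of that decomposition follows; it is purely illustrative (not the paper's VISR algorithm), and the MDP, features, and policy are made up.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_features, gamma = 6, 2, 3, 0.9

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition model P[s, a, s']
phi = rng.normal(size=(n_states, n_features))                     # state features
pi = np.full((n_states, n_actions), 1.0 / n_actions)              # fixed policy

# Successor features psi(s, a) = phi(s) + gamma * E_{s', a'}[psi(s', a')],
# computed here by simple fixed-point iteration.
psi = np.zeros((n_states, n_actions, n_features))
for _ in range(500):
    psi_pi = np.einsum("sa,saf->sf", pi, psi)                      # average over next actions
    psi = phi[:, None, :] + gamma * np.einsum("sap,pf->saf", P, psi_pi)

# "Fast task inference": observe rewards for a few states and regress w.
w_true = np.array([1.0, -0.5, 0.25])
observed = rng.choice(n_states, size=4, replace=False)
w_hat, *_ = np.linalg.lstsq(phi[observed], phi[observed] @ w_true, rcond=None)

Q = psi @ w_hat                                                    # Q-values for the new task, no extra RL
print("recovered w:", np.round(w_hat, 3), " Q shape:", Q.shape)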
Probing the State of the Art: A Critical Look at Visual Representation Evaluation
Resnick et al.: https://arxiv.org/abs/1912.00215
#ArtificialIntelligence #DeepLearning #MachineLearning
STANFORD CENTER FOR PROFESSIONAL DEVELOPMENT AI RESOURCE HUB
https://onlinehub.stanford.edu/
Dream to Control: Learning Behaviors by Latent Imagination. Latest from Google Brain researchers: a reinforcement learning agent learns a world model from past experience and uses it to efficiently learn farsighted behaviors.
https://www.profillic.com/paper/arxiv:1912.01603
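A rough sketch of the latent-imagination loop described above (not the authors' Dreamer code; the tiny linear models, sizes, and PyTorch usage are stand-ins): encode an observation into a latent state, roll the learned dynamics model forward under the actor's own actions, and train the actor to maximize the imagined return.

import torch
import torch.nn as nn

latent_dim, action_dim, horizon = 32, 4, 15

encoder = nn.Linear(64, latent_dim)                        # observation -> latent state
dynamics = nn.Linear(latent_dim + action_dim, latent_dim)  # learned transition model
reward_model = nn.Linear(latent_dim, 1)
actor = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
critic = nn.Linear(latent_dim, 1)

def imagine(start_obs, gamma=0.99):
    """Roll out imagined trajectories purely in latent space and return the
    discounted imagined return, which the actor is trained to maximize."""
    state = torch.tanh(encoder(start_obs))
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        action = torch.tanh(actor(state))
        state = torch.tanh(dynamics(torch.cat([state, action], dim=-1)))
        ret = ret + discount * reward_model(state)
        discount *= gamma
    return ret + discount * critic(state)        # bootstrap with the learned value

obs = torch.randn(8, 64)                         # a batch of (pre-encoded) observations
actor_loss = -imagine(obs).mean()                # ascend the imagined return
actor_loss.backward()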
Have a paper accepted at #NeurIPS but won't be in Vancouver? Consider presenting your work at a local meetup →
https://www.google.com/maps/d/u/0/viewer?mid=1ezbjW7AGg_9APIshVTu09uhJbXkMO5SI&ll=17.28121049467974%2C13.983942200000001&z=-1&fbclid=IwAR2nvS6YA1SnvCetrI8noxOH5ZSPLjwydMxS2vbhlqd9OBQCpFY1krS5t0A
We are recruiting new professors at Mila, again. Strong likelihood of obtaining a CIFAR AI Chair (with salary supplement and teaching reduction). This position is in my department at U. Montreal.
https://mila.quebec/en/2019/12/assistant-professor-in-machine-learning-faculte-des-arts-et-des-sciences-department-of-computer-science-and-operations-research-universite-de-montreal/?fbclid=IwAR3G5zZRNFqKnUVs9jVswJUH8qZWj2DQrpsnk4gmrlkvuA7ZkwDq6eLxGWE
Stacked Capsule Autoencoders
Kosiorek et al.: https://arxiv.org/abs/1906.06818
#ArtificialIntelligence #Capsule #Autoencoders
Deep Learning for Symbolic Mathematics
Guillaume Lample, François Charton: https://arxiv.org/abs/1912.01412
#ArtificialIntelligence #DeepLearning #SymbolicAI
Stacked Capsule Autoencoders by Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, and Geoffrey E. Hinton
https://arxiv.org/abs/1906.06818 https://t.iss.one/ArtificialIntelligenceArticles
How To Build Your Own MuZero AI Using Python (Part 1/3)
Blog by David Foster: https://medium.com/applied-data-science/how-to-build-your-own-muzero-in-python-f77d5718061a
#MachineLearning #DeepLearning #DataScience #ArtificialIntelligence #AI
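For orientation before reading the walkthrough: MuZero boils down to three learned functions, and the sketch below shows how they connect. Names, sizes, and the use of simple linear layers are placeholders, and the MCTS planner that drives them is omitted entirely.

import torch
import torch.nn as nn

hidden, n_actions = 32, 4

representation = nn.Linear(8, hidden)                  # h: observation -> hidden state
dynamics = nn.Linear(hidden + n_actions, hidden + 1)   # g: (state, action) -> (next state, reward)
prediction = nn.Linear(hidden, n_actions + 1)          # f: state -> (policy logits, value)

def initial_inference(obs):
    state = torch.tanh(representation(obs))
    policy_value = prediction(state)
    return state, policy_value[..., :n_actions], policy_value[..., n_actions:]

def recurrent_inference(state, action):
    one_hot = nn.functional.one_hot(action, n_actions).float()
    out = dynamics(torch.cat([state, one_hot], dim=-1))
    next_state, reward = torch.tanh(out[..., :hidden]), out[..., hidden:]
    policy_value = prediction(next_state)
    return next_state, reward, policy_value[..., :n_actions], policy_value[..., n_actions:]

# One imagined step: the planner (MCTS) would call these repeatedly to search
# without ever touching the real environment's rules.
state, policy, value = initial_inference(torch.randn(1, 8))
state, reward, policy, value = recurrent_inference(state, torch.tensor([2]))
print(reward.shape, policy.shape, value.shape)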
GitHub Typo Corpus: A Large-Scale Multilingual Dataset of Misspellings and Grammatical Errors
Masato Hagiwara, Masato Mita: https://arxiv.org/abs/1911.12893
Code & Dataset https://github.com/mhagiwara/github-typo-corpus
#ArtificialIntelligence #DeepLearning #NLP
University at Buffalo: Comprehensive Lecture Slides for Machine Learning and Deep Learning
By Professor Sargur Srihari
Machine Learning:
https://cedar.buffalo.edu/~srihari/CSE574/
Deep Learning:
https://cedar.buffalo.edu/~srihari/CSE676/index.html
Probabilistic Graphical Models:
https://cedar.buffalo.edu/~srihari/CSE674/
Data Mining:
https://cedar.buffalo.edu/~srihari/CSE626/index.html
#machinelearning #deeplearning #datamining #AI #artificialintelligence
A Mathematical Model Unlocks the Secrets of Vision: mathematicians and neuroscientists have created the first anatomically accurate model that explains how vision is possible.
join
@ArtificialIntelligenceArticles
https://www.quantamagazine.org/a-mathematical-model-unlocks-the-secrets-of-vision-20190821/
SSL FTW!
Pretext-Invariant Representation Learning (PIRL): a self-supervised method from FAIR, based on Siamese nets, for visual feature learning.
Beats supervised pre-training & all previous SSL methods on ImageNet, VOC-07-12, etc. https://arxiv.org/abs/1912.01991
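In sketch form, the pretext-invariant objective asks the representation of an image and of its pretext-transformed view (e.g. a jigsaw-shuffled version) to agree while disagreeing with other images. The snippet below is a loose illustration, not the FAIR implementation: it uses in-batch negatives where the paper uses a memory bank, and the encoder and the "jigsaw" transform are crude placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # placeholder backbone

def pirl_style_loss(images, transformed, temperature=0.07):
    z = F.normalize(encoder(images), dim=1)         # embeddings of the originals
    z_t = F.normalize(encoder(transformed), dim=1)  # embeddings of the transformed views
    logits = z @ z_t.t() / temperature              # similarity of every original to every view
    targets = torch.arange(z.size(0))               # the matching view is the positive
    return F.cross_entropy(logits, targets)

images = torch.randn(16, 3, 32, 32)
jigsawed = images[:, :, torch.randperm(32), :]      # crude stand-in for a jigsaw transform
loss = pirl_style_loss(images, jigsawed)
loss.backward()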
Major trends in #NLP : a review of 20 years of #ACL research
The 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019) is starting this week in Florence, Italy. We took the opportunity to review major research trends in the animated NLP space and formulate some implications from the business perspective. The article is backed by a statistical and — guess what — NLP-based analysis of ACL papers from the last 20 years
https://towardsdatascience.com/major-trends-in-nlp-a-review-of-20-years-of-acl-research-56f5520d473