ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
Single Sample Feature Importance: An Interpretable Algorithm for Low-Level Feature Analysis. https://arxiv.org/abs/1911.11901
AttentionGAN: Unpaired Image-to-Image Translation using Attention-Guided Generative Adversarial Networks. https://arxiv.org/abs/1911.11897
Visual Physics: Discovering Physical Laws from Videos. https://arxiv.org/abs/1911.11893
What's Hidden in a Randomly Weighted Neural Network? by Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari
Ramanujan et al.: https://arxiv.org/abs/1911.13299
#ArtificialIntelligence #DeepLearning #MachineLearning join https://t.iss.one/ArtificialIntelligenceArticles
Animesh Karnewar:

We are releasing the new version of our MSG-GAN work (https://arxiv.org/abs/1903.06048) today.

Code at https://github.com/akanimax/msg-stylegan-tf.

We present a much more thorough experimental evaluation of the method and also incorporate the multi-scale modifications into StyleGAN. It was an honor collaborating with Oliver Wang. Special thanks to Alexia Jolicoeur-Martineau and Michael Hofmann for the encouragement and support.

We also experiment with our newly created Indian Celebs dataset (very small, about 3K images) and get very nice results.

Please do check it out. Any feedback / suggestions are most welcome.
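For readers new to the idea, here is a minimal, hypothetical PyTorch sketch of what the multi-scale (MSG) modification means in practice: the generator emits an RGB image at every resolution block, and all of those outputs, together with correspondingly downsampled real images, are given to the discriminator so gradients flow back at every scale. The class name MSGGenerator, the channel sizes, and the layer layout below are illustrative assumptions, not the released msg-stylegan-tf code.

```python
# Illustrative sketch of the multi-scale-gradients idea behind MSG-GAN.
# Not the released msg-stylegan-tf code; names and sizes are assumptions.
import torch
import torch.nn as nn

class MSGGenerator(nn.Module):
    """Generator that returns an RGB image at every resolution block,
    so the discriminator can pass gradients back at all scales."""

    def __init__(self, latent_dim=512, channels=(512, 256, 128, 64)):
        super().__init__()
        # project the latent code to a 4x4 starting feature map
        self.input = nn.Linear(latent_dim, channels[0] * 4 * 4)
        self.blocks = nn.ModuleList()
        self.to_rgb = nn.ModuleList([nn.Conv2d(channels[0], 3, 1)])
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            self.blocks.append(nn.Sequential(
                nn.Upsample(scale_factor=2, mode="nearest"),
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.LeakyReLU(0.2),
            ))
            self.to_rgb.append(nn.Conv2d(c_out, 3, 1))

    def forward(self, z):
        x = self.input(z).view(z.size(0), -1, 4, 4)
        outputs = [self.to_rgb[0](x)]              # 4x4 RGB
        for block, rgb in zip(self.blocks, self.to_rgb[1:]):
            x = block(x)
            outputs.append(rgb(x))                 # 8x8, 16x16, ... RGB
        return outputs                             # every scale goes to the discriminator

# usage: the discriminator would consume all of these alongside
# correspondingly downsampled real images.
z = torch.randn(2, 512)
imgs = MSGGenerator()(z)
print([tuple(i.shape) for i in imgs])  # [(2,3,4,4), (2,3,8,8), (2,3,16,16), (2,3,32,32)]
```

The intended benefit described in the MSG-GAN paper is a simpler, more stable alternative to progressive growing, since all resolutions are trained jointly from the start.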
Fast Task Inference with Variational Intrinsic Successor Features
A novel algorithm that learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor feature framework.
The fundamental problem they face is a need to generalize between different latent codes, a task to which neural networks alone seem poorly suited.
They address this generalization and slow-inference problem by making use of successor features.
They show that variational-intrinsic-control / diversity-is-all-you-need style algorithms can be adapted to learn precisely the features required by the successor feature framework.
PAPER
https://arxiv.org/pdf/1906.05030.pdf
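The "fast task inference" part reduces to the successor-feature identity Q(s, a) = psi(s, a)^T w: once the controllable features phi have been learned, a new task only requires regressing the observed rewards onto phi to recover the task vector w. Below is a minimal NumPy sketch of that final inference step; the names infer_task_vector and q_values, the 16-dimensional feature space, and the random stand-ins for phi and psi are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of fast task inference with successor features (SFs),
# assuming pre-trained features phi(s) and SF estimates psi(s, a).
# Names and shapes are illustrative, not the paper's code.
import numpy as np

def infer_task_vector(features, rewards, reg=1e-3):
    """Fast task inference: solve r ~= phi(s)^T w by ridge regression.
    features: (N, d) array of phi(s_t); rewards: (N,) observed rewards."""
    d = features.shape[1]
    A = features.T @ features + reg * np.eye(d)
    b = features.T @ rewards
    return np.linalg.solve(A, b)                 # task vector w

def q_values(psi, w):
    """Q(s, a) = psi(s, a)^T w for every action.
    psi: (num_actions, d) successor features at the current state."""
    return psi @ w

# toy usage with random stand-ins for learned phi and psi
rng = np.random.default_rng(0)
phi = rng.normal(size=(1000, 16))                # features from a few reward-labelled steps
true_w = rng.normal(size=16)
r = phi @ true_w + 0.01 * rng.normal(size=1000)
w_hat = infer_task_vector(phi, r)

psi_s = rng.normal(size=(4, 16))                 # SFs for 4 actions at some state
print(int(np.argmax(q_values(psi_s, w_hat))))    # greedy action under the inferred task
```

In the paper, phi is learned with the variational-intrinsic-control style objective and psi with temporal-difference learning; the sketch covers only the cheap final regression that makes task inference fast.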
Probing the State of the Art: A Critical Look at Visual Representation Evaluation
Resnick et al.: https://arxiv.org/abs/1912.00215
#ArtificialIntelligence #DeepLearning #MachineLearning
STANFORD CENTER FOR PROFESSIONAL DEVELOPMENT AI RESOURCE HUB
https://onlinehub.stanford.edu/
We are recruiting new professors at Mila again, with a strong likelihood of obtaining a CIFAR AI Chair (with salary supplement and teaching reduction). This position is in my department at U. Montreal.
https://mila.quebec/en/2019/12/assistant-professor-in-machine-learning-faculte-des-arts-et-des-sciences-department-of-computer-science-and-operations-research-universite-de-montreal/?fbclid=IwAR3G5zZRNFqKnUVs9jVswJUH8qZWj2DQrpsnk4gmrlkvuA7ZkwDq6eLxGWE
Deep Learning for Symbolic Mathematics
Guillaume Lample, François Charton: https://arxiv.org/abs/1912.01412
#ArtificialIntelligence #DeepLearning #SymbolicAI
Stacked Capsule Autoencoders
Adam R. Kosiorek, Sara Sabour, Yee Whye Teh, Geoffrey E. Hinton
https://arxiv.org/abs/1906.06818 https://t.iss.one/ArtificialIntelligenceArticles