Parallel and Distributed Deep Learning: A Survey
https://towardsdatascience.com/parallel-and-distributed-deep-learning-a-survey-97137ff94e4c
Medium
Parallel and Distributed Deep Learning: A Survey
Deep learning is the hottest field in AI right now. From the Google Duplex assistant to Tesla's self-driving cars, the applications are endless.
Best of arXiv.org for AI, Machine Learning, and Deep Learning – March 2019
https://insidebigdata.com/2019/04/09/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-march-2019/
insideAI News
Best of arXiv.org for AI, Machine Learning, and Deep Learning – March 2019
In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, [...]
Survey on Automated Machine Learning: https://arxiv.org/abs/1904.12054
Real numbers, data science and chaos: How to fit any dataset with a single parameter
Laurent Boué: https://arxiv.org/abs/1904.12320
Code: https://github.com/Ranlot/single-parameter-fit/
#artificialintelligence #datascience #dataset #machinelearning
arXiv.org
Real numbers, data science and chaos: How to fit any dataset with...
We show how any dataset of any modality (time-series, images, sound...) can be approximated by a well-behaved (continuous, differentiable...) scalar function with a single real-valued parameter....
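The sketch below is only a toy illustration of the underlying idea (quantize each sample and pack the bits into one big number, then decode a sample by bit-shifting); it is not the paper's actual smooth, sine-based construction, and the function names are invented for the sketch.
```python
# Minimal sketch of the idea behind single-parameter fitting (not the paper's
# exact construction): quantize each sample to `tau` bits and concatenate the
# bits into one integer "parameter"; decode sample k by shifting and masking.
# The paper packs the same information into the binary expansion of a single
# real-valued parameter of a smooth scalar function.

def encode(samples, tau=8):
    """Pack samples in [0, 1) into one integer 'parameter'."""
    alpha = 0
    for y in samples:
        q = int(y * (1 << tau))           # tau-bit quantization of y
        alpha = (alpha << tau) | q        # append the bits
    return alpha

def decode(alpha, n, k, tau=8):
    """Recover the k-th of n encoded samples from the packed parameter."""
    shift = (n - 1 - k) * tau
    q = (alpha >> shift) & ((1 << tau) - 1)
    return q / (1 << tau)

data = [0.12, 0.77, 0.50, 0.03]
alpha = encode(data)
print([round(decode(alpha, len(data), k), 2) for k in range(len(data))])
# -> [0.12, 0.77, 0.5, 0.03] up to tau-bit quantization error
```
The point of the exercise is that "one parameter" can carry arbitrarily much information if its precision is unbounded, which is the cautionary message of the paper.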
Statistical Physics of Liquid Brains
Liquid neural nets (or “liquid brains”) – a class of cognitive living networks characterised by agents (ants or immune cells, for example) that move in space – are compared with standard neural nets.
https://www.biorxiv.org/content/10.1101/478412v1
bioRxiv
Statistical physics of liquid brains
Liquid neural networks (or “liquid brains”) are a widespread class of cognitive living networks characterised by a common feature: the agents (ants or immune cells, for example) move in space. Thus, no fixed, long-term agent-agent connections are maintained…
A scientist at Google Brain devised a way for a machine-learning system to teach itself about how the world works.
https://www.technologyreview.com/lists/innovators-under-35/2017/inventor/ian-goodfellow/
MIT Technology Review
Ian Goodfellow, 31
Invented a way for neural networks to get better by working together.
Best Deep Learning Courses: Updated for 2019
https://blog.floydhub.com/best-deep-learning-courses-updated-for-2019/
https://t.iss.one/ArtificialIntelligenceArticles
FloydHub Blog
Best Deep Learning Courses: Updated for 2019
The list of the best machine learning & deep learning courses and MOOCs for 2019.
[Course]Deep reinforcement learning, provided by Prof. Sergey Levine at UC Berkeley
https://www.youtube.com/playlist?list=PLkFD6_40KJIxJMR-j5A1mkxK26gh_qg37&fbclid=IwAR2xoZGCYu4-HBBODMVSjKCHAqNMkl3zMTZ6OpgFcB9zbOSyMLQhNlbdd_g
YouTube
CS294-112 Fa18 - YouTube
To find out which sights specific neurons in monkeys "like" best, researchers designed an algorithm, called XDREAM, that generated images that made neurons fire more than any natural images the researchers tested. As the images evolved, they started to look like distorted versions of real-world stimuli. The work appears May 2 in the journal Cell
https://medicalxpress.com/news/2019-05-trippy-images-ai-super-stimulate-monkey.html
Medicalxpress
These trippy images were designed by AI to super-stimulate monkey neurons
To find out which sights specific neurons in monkeys "like" best, researchers designed an algorithm, called XDREAM, that generated images that made neurons fire more than any natural images the researchers ...
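As a rough illustration of the evolutionary loop described above (not the XDREAM code itself), here is a toy sketch in which `render` and `neuron_response` are invented placeholders standing in for the pretrained image generator and the recorded firing rate of a real neuron.
```python
# Toy XDREAM-style loop: evolve latent codes so the rendered image maximizes a
# stand-in "neuron response". All components are placeholders for the sketch.
import numpy as np

rng = np.random.default_rng(0)

def render(code):                      # placeholder image generator
    return np.tanh(code).reshape(8, 8)

def neuron_response(img):              # placeholder for a recorded firing rate
    target = np.full((8, 8), 0.5)
    return -np.mean((img - target) ** 2)

pop = rng.normal(size=(20, 64))        # population of latent codes
for gen in range(50):
    scores = np.array([neuron_response(render(c)) for c in pop])
    elite = pop[np.argsort(scores)[-5:]]                       # keep the top codes
    children = elite[rng.integers(0, 5, size=15)] + 0.1 * rng.normal(size=(15, 64))
    pop = np.vstack([elite, children])                         # next generation

best = pop[np.argmax([neuron_response(render(c)) for c in pop])]
print("best response:", neuron_response(render(best)))
```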
Have you ever wondered what it might sound like if the Beatles jammed with Lady Gaga or if Mozart wrote one more masterpiece? This machine learning algorithm offers an answer, of sorts.
https://www.technologyreview.com/s/613430/this-ai-generated-musak-shows-us-the-limit-of-artificial-creativity/
What if deep learning could be your personal stylist? Enter Fashion++, a deep image generation neural network that learned to synthesize clothing and suggest minor edits to make an outfit more fashionable 👔 👠 Read the full paper by Wei-Lin Hsiao et al: https://bit.ly/2LgWiD6
Self-Supervision and Play
Pierre Sermanet et al.: https://docs.google.com/presentation/d/145wBH7TEJoEclVzE1YKTihqIXWMljeNIA6ozwMZLb3Q/edit#slide=id.g581ee82d09_0_517
#DeepLearning #Robotics #UnsupervisedLearning
Google Docs
Self-Supervision and Play - Pierre Sermanet @ OpenAI Robotics Symposium 2019 (public, 45 mins)
Self-Supervision and Play Pierre Sermanet In collaboration with Corey Lynch, Debidatta Dwibedi, Soeren Pirk, Jonathan Tompson, Mohi Khansari, Yusuf Aytar, Yevgen Chebotar, Yunfei Bai, Jasmine Hsu, Eric Jang, Vikash Kumar, Ted Xiao, Stefan Schaal, Andrew Zisserman…
Study shows that artificial neural networks can be used to drive brain activity
https://medicalxpress.com/news/2019-05-artificial-neural-networks-brain.html
https://t.iss.one/ArtificialIntelligenceArticles
Medicalxpress
Study shows that artificial neural networks can be used to drive brain activity
MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain's visual cortex.
Videos and lectures on machine learning, data science, and informatics
Ryan Urbanowicz
Perelman School of Medicine at the University of Pennsylvania
https://www.med.upenn.edu/urbslab/videos.html
www.med.upenn.edu
Videos/Lectures | Urbanowicz Lab | Perelman School of Medicine at the University of Pennsylvania
Welcome to the URBS Lab (Unbounded Research in Biomedical Systems). Our primary goal is to develop, evaluate, and apply tools/strategies that can be leveraged to improve our understanding of human health and the strategies implemented to prevent, diagnose…
Neural Path Planning: Fixed Time, Near-Optimal Path Generation via Oracle Imitation
Bency et al.: https://arxiv.org/abs/1904.11102
#Robotics #ArtificialIntelligence #MachineLearning
A Mean Field Theory of Batch Normalization
Yang et al.: https://arxiv.org/abs/1902.08129
#ArtificialIntelligence #NeuralComputing #NeuralNetworks #MachineLearning #DynamicalSystems
Spectral Inference Networks (SpIN)
Paper by Pfau et al.: https://arxiv.org/abs/1806.02215
Code: https://github.com/deepmind/spectral_inference_networks
#MachineLearning #DeepLearning #ArtificialIntelligence
arXiv.org
Spectral Inference Networks: Unifying Deep and Spectral Learning
We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to...
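To give a flavour of what "learning eigenfunctions by stochastic optimization" means, here is a minimal hedged toy (not SpIN itself): noisy gradient ascent on the Rayleigh quotient recovers the leading eigenvector of a symmetric matrix, the finite-dimensional analogue of the eigenfunction problem SpIN tackles with a neural network and minibatch estimates.
```python
# Stochastic gradient ascent on the Rayleigh quotient r(w) = w'Aw / w'w.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 10))
A = (M + M.T) / 2                       # symmetric "operator"

w = rng.normal(size=10)
for step in range(2000):
    noise = 0.01 * rng.normal(size=(10, 10))   # stand-in for minibatch noise
    A_hat = A + (noise + noise.T) / 2
    r = w @ A_hat @ w / (w @ w)
    grad = 2 * (A_hat @ w - r * w) / (w @ w)   # gradient of the Rayleigh quotient
    w += 0.01 * grad
    w /= np.linalg.norm(w)

print("estimated top eigenvalue:", w @ A @ w)
print("true top eigenvalue:     ", np.max(np.linalg.eigvalsh(A)))
```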
A new critique of deep-learning systems that use neural nets skewers some of the current AI hype.
https://www.technologyreview.com/f/609875/the-case-against-deep-learning-hype/
MIT Technology Review
The case against deep-learning hype
Is there more to AI than neural networks?
SafeML ICLR 2019 Workshop
Accepted Papers: https://sites.google.com/view/safeml-iclr2019/accepted-papers
#ArtificialIntelligence #AISafety #MachineLearning
@ArtificialIntelligenceArticles
"Billion-scale semi-supervised learning for image classification"
by I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, Dhruv Mahajan.
Weakly-supervised pre-training + semi-supervised pre-training + distillation + transfer/fine-tuning =
81.2% top-1 accuracy on ImageNet with ResNet-50,
84.8% with ResNeXt-101-32x16 (see the toy pipeline sketch below).
ArXiv: https://arxiv.org/abs/1905.00546
Brought to you by Facebook AI.
Original post by Yalniz Zeki: https://www.facebook.com/i.zeki.yalniz/posts/10157311492509962
@ArtificialIntelligenceArticles
arXiv.org
Billion-scale semi-supervised learning for image classification
This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher/student paradigm, that leverages a large collection of...
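A toy, runnable sketch of the teacher/student recipe summarised above, using nearest-centroid "models" on synthetic 2-D points instead of ConvNets on a billion images; everything here is illustrative and not the authors' code.
```python
# Teacher/student pseudo-labeling pipeline on toy data:
# 1) train teacher on labeled data, 2) pseudo-label a large unlabeled pool,
# 3) keep the K most confident examples per class, 4) pre-train the student
# on pseudo-labels, then fine-tune on the labeled set.
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
    return d.argmin(axis=1), -d.min(axis=1)        # label + confidence score

labeled_X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
labeled_y = np.array([0] * 20 + [1] * 20)
unlabeled_X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(4, 1, (500, 2))])

teacher = fit_centroids(labeled_X, labeled_y)
pseudo_y, conf = predict(teacher, unlabeled_X)

K = 200                                             # balanced top-K selection per class
keep = np.concatenate([np.where(pseudo_y == c)[0][np.argsort(-conf[pseudo_y == c])[:K]]
                       for c in (0, 1)])

student = fit_centroids(unlabeled_X[keep], pseudo_y[keep])
student = (student + fit_centroids(labeled_X, labeled_y)) / 2   # crude stand-in for fine-tuning
print("student accuracy on labeled set:",
      (predict(student, labeled_X)[0] == labeled_y).mean())
```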
#weekend_read
Paper-Title: Reinforcement Learning, Fast and Slow
#Deepmind #Cognitive_Science
Link to the paper: https://www.cell.com/action/showPdf?pii=S1364-6613%2819%2930061-0
TL;DR: [1] This paper reviews recent techniques in deep RL that narrow the gap in learning speed between humans and agents, and demonstrates an interplay between fast and slow learning with parallels in animal and human cognition.
[2] When episodic memory is used in reinforcement learning, an explicit record of past events is maintained and consulted when making decisions about the current situation; the action chosen is the one associated with the highest value, based on the outcomes of similar past situations (a toy sketch of this mechanism follows below).
[3] Meta-reinforcement learning quickly adapts to new tasks by learning strong inductive biases. This is done via a slower outer learning loop training on the distribution of tasks, leading to an inner loop that rapidly adapts by maintaining a history of past actions and observations.
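A toy sketch of the episodic-memory mechanism from point [2] (a simplified version of episodic control, not DeepMind's implementation): store past (state, action, return) tuples and, in a new state, pick the action whose most similar remembered states yielded the highest return.
```python
import numpy as np

rng = np.random.default_rng(0)
memory = []   # list of (state_vector, action, observed_return)

# Fill the memory with synthetic past experience: action 1 works well for
# states with a positive first coordinate, action 0 otherwise.
for _ in range(200):
    s = rng.normal(size=4)
    a = int(rng.integers(0, 2))
    ret = 1.0 if (s[0] > 0) == (a == 1) else 0.0
    memory.append((s, a, ret))

def episodic_value(state, action, k=10):
    """Average return of the k most similar remembered states for `action`."""
    entries = [(np.linalg.norm(state - s), r) for s, a, r in memory if a == action]
    entries.sort(key=lambda x: x[0])
    return float(np.mean([r for _, r in entries[:k]]))

def act(state):
    return max((0, 1), key=lambda a: episodic_value(state, a))

test_state = np.array([1.5, 0.0, 0.0, 0.0])
print("chosen action:", act(test_state))   # expected: 1
```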