ArtificialIntelligenceArticles
2.98K subscribers
1.64K photos
9 videos
5 files
3.86K links
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
Neural Path Planning: Fixed Time, Near-Optimal Path Generation via Oracle Imitation
Bency et al.: https://arxiv.org/abs/1904.11102

#Robotics #ArtificialIntelligence #MachineLearning
"Billion-scale semi-supervised learning for image classification"
by I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, Dhruv Mahajan.

Weakly-supervised pre-training + semi-supervised pre-training + distillation + transfer/fine-tuning yields top-1 ImageNet accuracy of 81.2% with ResNet-50 and 84.8% with ResNeXt-101-32x16.
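The distillation step in pipelines like this one is typically a temperature-softened cross-entropy between teacher and student predictions. A minimal sketch of that idea (function names and the numpy formulation are my own, not from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T gives softer targets."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's softened prediction, scaled by T^2 so gradient magnitudes
    stay comparable across temperatures (a common convention)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -(T ** 2) * np.sum(p_teacher * np.log(p_student + 1e-12))
```

The loss is minimized when the student's softened distribution matches the teacher's, which is what lets a smaller model absorb the larger model's "dark knowledge" about relative class similarities.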

ArXiv: https://arxiv.org/abs/1905.00546

Brought to you by Facebook AI.

Original post by Yalniz Zeki: https://www.facebook.com/i.zeki.yalniz/posts/10157311492509962

@ArtificialIntelligenceArticles
#weekend_read
Paper-Title: Reinforcement Learning, Fast and Slow
#Deepmind #Cognitive_Science
Link to the paper: https://www.cell.com/action/showPdf?pii=S1364-6613%2819%2930061-0

TL;DR: [1] This paper reviews recent techniques in deep RL that narrow the gap in learning speed between humans and agents, and demonstrates an interplay between fast and slow learning with parallels in animal and human cognition.
[2] When episodic memory is used in reinforcement learning, an explicit record of past events is maintained for making decisions about the current situation. The action chosen is the one associated with the highest value, based on the outcomes of similar past situations.
[3] Meta-reinforcement learning quickly adapts to new tasks by learning strong inductive biases. This is done via a slower outer learning loop training on the distribution of tasks, leading to an inner loop that rapidly adapts by maintaining a history of past actions and observations.
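The episodic-memory idea in [2] can be sketched as a nearest-neighbor lookup: store past (state, action, return) tuples and, for a new state, pick the action whose similar past states yielded the best outcomes. A toy illustration under that assumption (the class and its interface are hypothetical, not from the paper):

```python
import numpy as np

class EpisodicMemory:
    """Toy episodic store: remembers (state, action, return) tuples and,
    for a new state, chooses the action whose k most similar past states
    produced the highest average return."""

    def __init__(self):
        self.states, self.actions, self.returns = [], [], []

    def write(self, state, action, ret):
        """Record the outcome of one past event."""
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(action)
        self.returns.append(ret)

    def act(self, state, k=3):
        """Look up the k nearest stored states (Euclidean distance) and
        return the action with the best average past outcome among them."""
        state = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(state - s) for s in self.states]
        nearest = np.argsort(dists)[:k]
        values = {}
        for i in nearest:
            values.setdefault(self.actions[i], []).append(self.returns[i])
        return max(values, key=lambda a: np.mean(values[a]))
```

Because decisions come from an explicit record rather than slowly updated weights, a single good experience can immediately change behavior in similar future situations, which is exactly the "fast" learning the review contrasts with gradient-based "slow" learning.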
Decrappification, DeOldification, and Super Resolution
By Jeremy Howard and Uri Manor: https://www.fast.ai/2019/05/03/decrappify/
#ArtificialIntelligence #DeepLearning #MachineLearning
"Assessing the Scalability of Biologically-Motivated
Deep Learning Algorithms and Architectures" https://arxiv.org/pdf/1807.04587.pdf
Building a Silicon Brain
Computer chips based on biological neurons may help simulate larger and more-complex brain models.
https://www.the-scientist.com/features/building-a-silicon-brain-65738
https://t.iss.one/ArtificialIntelligenceArticles
The neurons in our brains are far more functionally rich than the modeled neurons in artificial neural networks (ANNs).

One neuron can change the behavior of another neuron not merely through observation, but through direct injection of behavior. #AI #DeepLearning #Neuroscience #NeuralNetworks

Source: https://buff.ly/2Fw6QH4
Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

Wortsman et al.: https://arxiv.org/abs/1812.00971

#ArtificialIntelligence #DeepLearning #MetaLearning