ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
To find out which sights specific neurons in monkeys "like" best, researchers designed an algorithm, called XDREAM, that generated images that made neurons fire more than any natural images the researchers tested. As the images evolved, they started to look like distorted versions of real-world stimuli. The work appears May 2 in the journal Cell.

https://medicalxpress.com/news/2019-05-trippy-images-ai-super-stimulate-monkey.html
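The search loop can be sketched as a simple evolutionary algorithm. This is a toy stand-in, not the published method: XDREAM pairs a deep generative image network with recorded firing rates from monkey visual cortex, whereas `firing_rate` below is a hypothetical scoring function used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def firing_rate(img):
    # Hypothetical stand-in for a recorded neuron's response; XDREAM
    # scored candidates with actual firing rates from visual cortex.
    target = np.full(img.shape, 0.5)
    return -np.mean((img - target) ** 2)

def evolve_images(pop_size=20, shape=(8, 8), generations=50, mutation=0.05):
    # Start from random grayscale "images" in [0, 1].
    pop = rng.random((pop_size, *shape))
    for _ in range(generations):
        scores = np.array([firing_rate(ind) for ind in pop])
        # Keep the highest-scoring half, then mutate copies to refill.
        elite = pop[np.argsort(scores)[-pop_size // 2:]]
        children = elite + rng.normal(0, mutation, elite.shape)
        pop = np.clip(np.concatenate([elite, children]), 0, 1)
    return pop[np.argmax([firing_rate(ind) for ind in pop])]

best = evolve_images()
```

Over generations the surviving images drift toward whatever the scoring function prefers, which is the sense in which the evolved stimuli come to look like distorted versions of a neuron's preferred features.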
Have you ever wondered what it might sound like if the Beatles jammed with Lady Gaga or if Mozart wrote one more masterpiece? This machine learning algorithm offers an answer, of sorts.
https://www.technologyreview.com/s/613430/this-ai-generated-musak-shows-us-the-limit-of-artificial-creativity/
What if deep learning could be your personal stylist? Enter Fashion++, a deep image generation neural network that learned to synthesize clothing and suggest minor edits to make an outfit more fashionable 👔 👠 Read the full paper by Wei-Lin Hsiao et al: https://bit.ly/2LgWiD6
Neural Path Planning: Fixed Time, Near-Optimal Path Generation via Oracle Imitation
Bency et al.: https://arxiv.org/abs/1904.11102

#Robotics #ArtificialIntelligence #MachineLearning
"Billion-scale semi-supervised learning for image classification"
by I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, Dhruv Mahajan.

Weakly-supervised pre-training + semi-supervised pre-training + distillation + transfer/fine-tuning =
top-1 accuracy on ImageNet: 81.2% with ResNet-50, 84.8% with ResNeXt-101-32x16.
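The teacher-student part of that recipe can be sketched end-to-end on toy data. Everything below is an illustrative stand-in (linear "teacher" and "student" models in place of ResNeXt-101 and ResNet-50, random vectors in place of a billion images), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stage 1: a "teacher" pre-trained on weakly supervised data
# (here just a fixed random linear classifier over 5 features).
W_teacher = rng.normal(size=(5, 3))

# Stage 2: the teacher pseudo-labels a large unlabeled pool.
unlabeled = rng.normal(size=(200, 5))
soft_labels = softmax(unlabeled @ W_teacher)
top1 = soft_labels.argmax(axis=1)

# Stage 3: distill into a smaller "student" by gradient descent on
# cross-entropy against the teacher's soft labels.
W_student = np.zeros((5, 3))
for _ in range(300):
    p = softmax(unlabeled @ W_student)
    grad = unlabeled.T @ (p - soft_labels) / len(unlabeled)
    W_student -= 0.5 * grad

# Stage 4 (fine-tuning on labeled ImageNet) is omitted; measure how
# often the student now agrees with the teacher's pseudo-labels.
agreement = (softmax(unlabeled @ W_student).argmax(axis=1) == top1).mean()
```

The gradient used here is the standard softmax cross-entropy gradient with soft targets, which is what makes distillation a plain supervised-learning problem once the teacher has labeled the pool.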

ArXiv: https://arxiv.org/abs/1905.00546

Brought to you by Facebook AI.

Original post by Yalniz Zeki: https://www.facebook.com/i.zeki.yalniz/posts/10157311492509962

@ArtificialIntelligenceArticles
#weekend_read
Paper-Title: Reinforcement Learning, Fast and Slow
#Deepmind #Cognitive_Science
Link to the paper: https://www.cell.com/action/showPdf?pii=S1364-6613%2819%2930061-0

TL;DR: [1] This paper reviews recent techniques in deep RL that narrow the gap in learning speed between humans and agents, and demonstrates an interplay between fast and slow learning with parallels in animal and human cognition.
[2] When episodic memory is used in reinforcement learning, an explicit record of past events is maintained for making decisions about the current situation. The action chosen is the one associated with the highest value, based on the outcomes of similar past situations.
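A minimal sketch of that idea, with toy states and returns standing in for an agent's real experience (the `EpisodicMemory` class and its nearest-neighbor lookup are illustrative, not the paper's implementation):

```python
import numpy as np

class EpisodicMemory:
    """Store (state, return) pairs; value a new state by its neighbors."""
    def __init__(self, k=3):
        self.states, self.returns, self.k = [], [], k

    def write(self, state, ret):
        self.states.append(np.asarray(state, dtype=float))
        self.returns.append(float(ret))

    def value(self, state):
        # Average the returns of the k most similar past states.
        dists = [np.linalg.norm(s - state) for s in self.states]
        nearest = np.argsort(dists)[:self.k]
        return np.mean([self.returns[i] for i in nearest])

mem = EpisodicMemory(k=2)
# Record outcomes of past episodes: states near (1, 0) paid off.
mem.write([1.0, 0.0], 1.0)
mem.write([0.9, 0.1], 1.0)
mem.write([0.0, 1.0], 0.0)

# Choose between candidate actions by the value of the state each leads to.
candidates = {"left": np.array([0.95, 0.05]), "right": np.array([0.05, 0.95])}
best_action = max(candidates, key=lambda a: mem.value(candidates[a]))
```

Because the memory is an explicit record rather than slowly updated weights, a single good outcome can immediately change behavior in similar situations, which is the "fast" side of the fast/slow contrast.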
[3] Meta-reinforcement learning quickly adapts to new tasks by learning strong inductive biases. This is done via a slower outer learning loop training on the distribution of tasks, leading to an inner loop that rapidly adapts by maintaining a history of past actions and observations.
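That inner/outer-loop interplay can be sketched with a two-armed bandit in place of a full recurrent meta-learner (everything below is an illustrative toy, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(2)

def inner_loop(prior, arm_probs, steps=30):
    # Fast adaptation: within a single bandit task, act greedily on the
    # reward history accumulated during this episode, seeded by the prior.
    wins, counts = prior.copy(), np.ones(2)
    for _ in range(steps):
        arm = int(np.argmax(wins / counts))
        reward = float(rng.random() < arm_probs[arm])
        wins[arm] += reward
        counts[arm] += 1.0
    return wins / counts  # per-arm value estimates after adaptation

# Slow outer loop: across many tasks drawn from a skewed distribution
# (arm 0 is better 90% of the time), the prior drifts toward the task
# family's statistics, giving the inner loop a useful inductive bias.
prior = np.ones(2)
for _ in range(200):
    arm_probs = (0.8, 0.2) if rng.random() < 0.9 else (0.2, 0.8)
    values = inner_loop(prior, arm_probs)
    prior = 0.95 * prior + 0.05 * values
```

The outer loop never sees a single reward directly; it only slowly reshapes the starting point of the inner loop, which then does all the rapid within-task learning from its own history.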