ArtificialIntelligenceArticles
For those who have a passion for:
1. #ArtificialIntelligence
2. Machine Learning
3. Deep Learning
4. #DataScience
5. #Neuroscience
6. #ResearchPapers
7. Related Courses and Ebooks
Natural Language Inference with Deep Learning (NAACL 2019 Tutorial)

Slides for the 2019 NAACL tutorial on Natural Language Inference with Deep Learning, by Sam Bowman and Xiaodan Zhu.

https://nlitutorial.github.io/nli_tutorial.pdf
Robotic Psychology: What Do We Know about Human-Robot Interaction and What Do We Still Need to Learn?
https://scholarspace.manoa.hawaii.edu/bitstream/10125/59633/0193.pdf
COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
Watters et al.: https://arxiv.org/abs/1905.09275
#MachineLearning #UnsupervisedLearning #ArtificialIntelligence
Table2Vec: Neural Word and Entity Embeddings for Table Population and Retrieval. arxiv.org/abs/1906.00041
Independent Component Analysis based on multiple data-weighting. arxiv.org/abs/1906.00028
Machine Learning Methods for Shark Detection. arxiv.org/abs/1905.13309
Understanding and Controlling Memory in Recurrent Neural Networks (ICML'19 oral)

This paper shows that RNNs can form long-term memories despite being trained only on short-term tasks with a limited number of timesteps, but that not all memories are created equal. The authors find that each memory is correlated with a dynamical object in the hidden-state phase space, and that the object's properties quantitatively predict its long-term effectiveness. Regularizing these dynamical objects significantly improves the long-term functionality of the RNN without adding to the computational complexity of training.
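The "dynamical objects" in question are slow/fixed points of the hidden-state dynamics. As a rough illustration of how such points are located (a standard fixed-point analysis in this line of work, not the paper's actual regularizer), the sketch below runs gradient descent on the hidden-state "speed" q(h) = ½‖F(h) − h‖² for a tiny vanilla RNN with zero input; the network weights here are random stand-ins, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = 0.1 * rng.standard_normal((n, n))  # small recurrent weights (contractive dynamics)
b = 0.1 * rng.standard_normal(n)

def step(h):
    """One autonomous RNN update (zero input): h_{t+1} = tanh(W h_t + b)."""
    return np.tanh(W @ h + b)

def grad_speed(h):
    """Analytic gradient of q(h) = 0.5 * ||step(h) - h||^2.
    grad q = (J - I)^T (step(h) - h), with Jacobian J = diag(1 - tanh^2) W."""
    a = np.tanh(W @ h + b)
    J = (1 - a ** 2)[:, None] * W
    return (J - np.eye(n)).T @ (a - h)

# Slow/fixed-point search: descend the speed function from a random state.
h = rng.standard_normal(n)
for _ in range(2000):
    h -= 0.3 * grad_speed(h)

residual = np.linalg.norm(step(h) - h)  # near zero at a fixed point
```

States where `residual` is small are the candidate memory structures whose local properties (e.g. Jacobian eigenvalues) the paper relates to long-term retention.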

Link to PDF: https://proceedings.mlr.press/v97/haviv19a/haviv19a.pdf
Study shows that artificial neural networks can be used to drive brain activity.
MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.

Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.

The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.
Full article: https://news.mit.edu/2019/computer-model-brain-visual-cortex-0502

Science paper: https://science.sciencemag.org/content/364/6439/eaav9436

Biorxiv (open access): https://www.biorxiv.org/content/10.1101/461525v1