Generating Diverse High-Fidelity Images with VQ-VAE-2
Razavi et al.: https://arxiv.org/abs/1906.00446
#ArtificialIntelligence #DeepLearning #MachineLearning
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to...
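For context, the discrete bottleneck at the heart of VQ-VAE models is a nearest-neighbor lookup into a learned codebook. A minimal numpy sketch of just that quantization step, with hypothetical sizes (the real model learns the codebook end-to-end with a straight-through gradient estimator, and VQ-VAE-2 uses codebooks at two scales):

```python
import numpy as np

# Hypothetical sizes: K=512 codes of dimension D=64, 32 encoder vectors.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))   # learned codes (random here)
z_e = rng.normal(size=(32, 64))         # encoder outputs, one row per latent

# Quantize: replace each encoder vector with its nearest codebook entry.
d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (32, 512)
codes = d2.argmin(axis=1)   # discrete indices, modeled by the autoregressive prior
z_q = codebook[codes]       # quantized latents, fed to the decoder
```

The autoregressive prior the paper scales up is then trained over these discrete `codes` rather than over raw pixels.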
A parallel implementation of "graph2vec: Learning Distributed Representations of Graphs" (MLGWorkshop 2017).
https://github.com/benedekrozemberczki/graph2vec
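The idea behind graph2vec is to extract Weisfeiler-Lehman subtree features from each graph and treat them as the "words" of a graph-as-document, then train a doc2vec-style model over them. A toy sketch of the WL relabeling step on a hypothetical 4-node graph (labels initialized to node degrees, a common default):

```python
# Toy graph as an adjacency list; initial labels are node degrees.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {n: str(len(nbrs)) for n, nbrs in adj.items()}

def wl_iteration(adj, labels):
    """One Weisfeiler-Lehman relabeling: each node's new label is its old
    label joined with the sorted multiset of its neighbors' labels."""
    return {
        n: labels[n] + "|" + "".join(sorted(labels[v] for v in nbrs))
        for n, nbrs in adj.items()
    }

# Collect the compound labels from every iteration; these are the graph's
# "words" for the downstream doc2vec training.
features = list(labels.values())
for _ in range(2):
    labels = wl_iteration(adj, labels)
    features.extend(labels.values())
```

Each graph in the corpus yields such a feature list, and the embedding model learns one vector per graph from these lists.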
A Survival Guide to a PhD
https://karpathy.github.io/2016/09/07/phd/
Cross-lingual transfer is a powerful tool for low-resource NLP. But when you build a system for a new language (say Bengali, German or French), what language do you transfer from?
This paper answers this: https://www.profillic.com/paper/arxiv:1905.12688
COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration
Watters et al.: https://arxiv.org/abs/1905.09275
#MachineLearning #UnsupervisedLearning #ArtificialIntelligence
A very nice article on the practical limitations of semi-supervised learning and the recent advances that seem to overcome them.
https://towardsdatascience.com/the-quiet-semi-supervised-revolution-edec1e9ad8c
#machinelearning #artificialintelligence
Table2Vec: Neural Word and Entity Embeddings for Table Population and Retrieval. arxiv.org/abs/1906.00041
Independent Component Analysis based on multiple data-weighting. arxiv.org/abs/1906.00028
Machine Learning Methods for Shark Detection. arxiv.org/abs/1905.13309
Brain Network Mechanisms of General Intelligence
https://www.biorxiv.org/content/biorxiv/early/2019/06/03/657205.full.pdf
Revolutionizing Medical Diagnosis with Deep Learning: TED Talk
https://www.youtube.com/watch?v=w2_N_p_Y-W4&feature=youtu.be
Revolutionizing Medical Diagnosis with Deep Learning | Ankit Gupta | TEDxYouth@Ballston
17 million people worldwide died due to cardiovascular disease in 2015 alone. Drawing from his experiences as a machine learning researcher and software engi...
Understanding and Controlling Memory in Recurrent Neural Networks (ICML'19 oral)
This paper shows that RNNs can form long-term memories despite being trained only on short-term tasks with a limited number of timesteps, but that not all memories are created equal. The authors find that each memory corresponds to a dynamical object in the hidden-state phase space, and that the object's properties quantitatively predict long-term effectiveness. Regularizing the dynamical object significantly improves the RNN's long-term functionality without adding to the computational complexity of training.
Link to PDF: https://proceedings.mlr.press/v97/haviv19a/haviv19a.pdf
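A toy illustration of the paper's framing: a memory that persists corresponds to the hidden state settling near a slow or fixed point of the network's input-free dynamics. The sketch below uses random, contractive-by-construction weights (hypothetical, not a trained network) and simply measures how fast the state stops moving:

```python
import numpy as np

# Input-free RNN dynamics: h <- tanh(W h).
rng = np.random.default_rng(1)
n = 32
W = rng.normal(size=(n, n))
W *= 0.9 / np.linalg.norm(W, 2)  # operator norm 0.9 -> contraction

h = rng.normal(size=n)
drifts = []
for _ in range(200):
    h_next = np.tanh(W @ h)
    drifts.append(float(np.linalg.norm(h_next - h)))
    h = h_next

# Vanishing drift means the trajectory has reached a fixed point; the paper's
# point is that properties of such dynamical objects in a trained network
# predict how long a memory survives.
final_drift = drifts[-1]
```

In a trained network one would probe the dynamics around states that encode a memory rather than random states, but the drift measurement is the same.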
Study shows that artificial neural networks can be used to drive brain activity.
MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.
Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.
The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.
Full article: https://news.mit.edu/2019/computer-model-brain-visual-cortex-0502
Science paper: https://science.sciencemag.org/content/364/6439/eaav9436
Biorxiv (open access): https://www.biorxiv.org/content/10.1101/461525v1
Andrew Ng and Masoumeh Haghpanahi present their team's new paper: cardiologist-level arrhythmia detection from ECGs using deep learning.
https://www.nature.com/articles/s41591-018-0268-3
https://stanfordmlgroup.github.io/projects/ecg2/
#DeepLearning #MachineLearning #ArtificialIntelligence
Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network
Nature Medicine - Analysis of electrocardiograms using an end-to-end deep learning approach can detect and classify cardiac arrhythmia with high accuracy, similar to that of cardiologists.
Introducing FastBert — A simple Deep Learning library for BERT Models
Blog by Kaushal Trivedi: https://medium.com/huggingface/introducing-fastbert-a-simple-deep-learning-library-for-bert-models-89ff763ad384
#MachineLearning #ArtificialIntelligence #NLP #Bert #NaturalLanguageProcessing
A curated list of gradient boosting research papers from the last 25 years with implementations. It covers NeurIPS, ICML, ICLR, KDD, ICDM, CIKM, AAAI etc.
https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers
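For readers new to the area, the core technique these papers build on fits in a few lines: repeatedly fit a weak learner to the current residuals (the negative gradient of squared loss) and add it with shrinkage. A minimal numpy sketch on toy 1-D data with decision-stump learners:

```python
import numpy as np

# Toy 1-D regression target.
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x)

def fit_stump(x, r):
    """Best single-split regression stump for residuals r."""
    best = None
    for t in x[1:-1]:
        left, right = r[x <= t].mean(), r[x > t].mean()
        err = ((np.where(x <= t, left, right) - r) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left, right)
    return best[1:]

pred = np.zeros_like(y)
lr = 0.5  # shrinkage / learning rate
for _ in range(50):
    t, left, right = fit_stump(x, y - pred)      # fit stump to residuals
    pred += lr * np.where(x <= t, left, right)   # shrink and add

mse = ((y - pred) ** 2).mean()
```

Production systems (XGBoost, LightGBM, CatBoost and the papers in the list) refine every piece of this loop: deeper trees, second-order loss information, regularization, and clever split finding.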
Very interesting work applying machine learning to higher-order logic and theorem proving. This could eventually change how we understand and program many different things.
https://arxiv.org/abs/1904.03241
HOList: An Environment for Machine Learning of Higher-Order Theorem Proving
We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary...
DeepMind Made a Math Test For Neural Networks
https://arxiv.org/abs/1904.01557
Analysing Mathematical Reasoning Abilities of Neural Models
Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back...
Should AI Research Try to Model the Human Brain?
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0
Trends in Cognitive Sciences
Reinforcement Learning, Fast and Slow
Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists…