Language, trees, and geometry in neural networks
code https://pair-code.github.io/interpretability/bert-tree/
paper https://arxiv.org/pdf/1906.02715.pdf
Population-based Augmentation
1000x Faster Data Augmentation
Daniel Ho, Eric Liang, Richard Liaw (Jun 7, 2019)
https://bair.berkeley.edu/blog/2019/06/07/data_aug/
paper https://arxiv.org/pdf/1905.05393.pdf
Material used for Deep Learning-related workshops by Machine Learning Tokyo
Implementation and Cheat Sheet: https://github.com/Machine-Learning-Tokyo/DL-workshop-series
#artificialintelligence #deeplearning #machinelearning
Residual Flows for Invertible Generative Modeling
Chen et al.: https://arxiv.org/abs/1906.02735
#artificialintelligence #deeplearning #generativemodels
Hot Papers from Google Brain, DeepMind and Facebook AI
https://www.google.com/amp/s/syncedreview.com/2019/06/02/hot-papers-from-google-brain-deepmind-and-facebook-ai/amp/
A machine-learning model from MIT researchers computationally breaks down how segments of amino acid chains determine a protein's function, which could help scientists design and test new proteins for drug development or biological research.
https://news.mit.edu/2019/machine-learning-amino-acids-protein-function-0322
Model learns how individual amino acids determine protein function
A model from MIT researchers “learns” vector embeddings of each amino acid position in a 3-D protein structure, which can be used as input features for machine-learning models to predict amino acid segment functions for drug development and biological research.
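As a rough illustration of the pipeline described above (not the MIT model itself), the sketch below pools hypothetical per-residue embeddings into a fixed-size feature vector for a segment, the kind of input a downstream function classifier could consume; the shapes, the random embeddings, and the mean-pooling choice are all stand-in assumptions.

    import numpy as np

    # Hypothetical per-residue embeddings for a protein of length L: one
    # d-dimensional vector per amino-acid position (random stand-ins here).
    L, d = 120, 16
    rng = np.random.default_rng(0)
    residue_emb = rng.standard_normal((L, d))

    def segment_feature(emb, start, end):
        """Mean-pool a segment's embeddings into one fixed-size feature
        vector, e.g. as input to a downstream function classifier."""
        return emb[start:end].mean(axis=0)

    x = segment_feature(residue_emb, 30, 45)  # feature for residues 30-44
    print(x.shape)                            # (16,)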
Uber AI Labs has published its evolutionary algorithm tools. Welcoming the era of deep neuroevolution indeed! (https://eng.uber.com/deep-neuroevolution) Great to see the traditional ML community adopt these tools in the cases where they are useful.
Disentangling Disentanglement in Variational Autoencoders
Mathieu et al.: https://proceedings.mlr.press/v97/mathieu19a.html
#ArtificialIntelligence #DeepLearning #VariationalAutoencoders #VAE
We develop a generalisation of disentanglement in variational autoencoders (VAEs), i.e. decomposition of the latent representation, characterising it as the fulfilment...
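For background on the latent-representation decomposition the abstract mentions, here is a minimal numpy sketch of the closed-form KL term of the standard Gaussian VAE objective, the quantity that disentanglement methods such as beta-VAE reweight or decompose; this is textbook ELBO material, not the paper's generalised objective.

    import numpy as np

    def gaussian_kl(mu, logvar):
        """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), the
        regulariser in the standard VAE objective. Disentanglement
        methods (e.g. beta-VAE) reweight or decompose this term."""
        return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

    mu = np.array([[0.3, -0.1]])
    logvar = np.array([[-0.5, 0.2]])
    print(gaussian_kl(mu, logvar))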
Best of arXiv.org for AI, Machine Learning, and Deep Learning – April 2019, by insideBIGDATA
https://insidebigdata.com/2019/05/22/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-april-2019/
In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, [...]
#IntelAI Research has 6 paper acceptances at #ICML2019! Find the full list of papers and more here: https://www.intel.ai/icml-2019/
All 1,294 papers at #CVPR2019
Index: https://openaccess.thecvf.com/content_CVPR_2019/html/
#ArtificialIntelligence #DeepLearning #MachineLearning
Practical Deep Learning with Bayesian Principles
Osawa et al.: https://arxiv.org/pdf/1906.02506.pdf
#Bayesian #DeepLearning #PyTorch #VariationalInference
Butterfly Transform: An Efficient FFT Based Neural Architecture Design
Alizadeh et al.: https://arxiv.org/abs/1906.02256
#ArtificialIntelligence #DeepLearning #MachineLearning
In this paper, we show that extending the butterfly operations from the FFT algorithm to a general Butterfly Transform (BFT) can be beneficial in building an efficient block structure for CNN...
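To make the idea concrete, here is a minimal numpy sketch of stacked radix-2 butterfly stages applied to a length-n vector: each stage mixes index pairs with a learned 2x2 matrix, so a full pass costs O(n log n) multiplies instead of a dense layer's O(n^2). The per-pair parameterisation and the function `butterfly_apply` are illustrative assumptions, not the authors' exact BFT block.

    import numpy as np

    def butterfly_apply(x, weights):
        """Apply log2(n) radix-2 butterfly stages to vector x.
        weights[s] has shape (n//2, 2, 2): one learned 2x2 mixing
        matrix per index pair at stage s (hypothetical layout)."""
        n = x.size
        y = x.copy()
        for s in range(int(np.log2(n))):
            half = 1 << s                # distance between paired indices
            out = np.empty_like(y)
            pair = 0
            for start in range(0, n, 2 * half):
                for i in range(start, start + half):
                    j = i + half
                    w = weights[s][pair]
                    out[i] = w[0, 0] * y[i] + w[0, 1] * y[j]
                    out[j] = w[1, 0] * y[i] + w[1, 1] * y[j]
                    pair += 1
            y = out
        return y

    n = 8
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((n // 2, 2, 2)) for _ in range(int(np.log2(n)))]
    print(butterfly_apply(rng.standard_normal(n), weights))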
Computational Narrative Intelligence and the Quest for the Great Automatic Grammatizator
Slides by Mark Riedl: https://www.dropbox.com/s/2o8enj7amaxxx1y/naacl-nu-ws.pdf?dl=0
#ArtificialIntelligence #MachineLearning #NaturalLanguageProcessing
A very interesting hypothesis by Yoshua Bengio: "A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms", https://arxiv.org/abs/1901.10912
Microsoft AI raises the bar in text-to-speech with an "almost unsupervised" method: trained on only 200 paired speech-and-text samples (about 20 minutes of audio), it generates human-sounding speech with a 99.84% word-level intelligibility rate.
Paper: https://arxiv.org/pdf/1905.06791.pdf
Sample: buff.ly/2X885F9
Learning Sparse Networks Using Targeted Dropout
A new research paper from Aidan N. Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, and Geoffrey E. Hinton
https://arxiv.org/abs/1905.13678
@ArtificialIntelligenceArticles
Neural networks are easier to optimise when they have many more weights than are required for modelling the mapping from inputs to outputs. This suggests a two-stage learning procedure that first...
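A minimal numpy sketch of the mechanism the title refers to, as I read it: rank weights by magnitude, treat the lowest-magnitude fraction as dropout candidates, and zero each candidate independently during training. The column-wise targeting and the names `targ_frac` and `drop_rate` are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def targeted_dropout(w, targ_frac=0.5, drop_rate=0.5, rng=None):
        """Zero out low-magnitude weights stochastically: the smallest
        targ_frac of weights in each column are dropout candidates, and
        each candidate is dropped independently with prob drop_rate."""
        rng = rng or np.random.default_rng()
        k = int(targ_frac * w.shape[0])             # candidates per column
        idx = np.argsort(np.abs(w), axis=0)[:k]     # k smallest |w| per column
        mask = np.ones_like(w)
        drop = rng.random(idx.shape) < drop_rate    # which candidates to zero
        mask[idx, np.arange(w.shape[1])] = ~drop
        return w * mask

    w = np.random.default_rng(1).standard_normal((6, 3))
    print(targeted_dropout(w))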
"Automated Speech Generation from UN General Assembly Statements: Mapping Risks in AI Generated Texts"
Bullock et al.: https://arxiv.org/abs/1906.01946
#Computation #Language #AIEthics #AIGovernance #ArtificialIntelligence
@ArtificialIntelligenceArticles
SLIDES
Gaussian Processes
Marc Deisenroth, Department of Computing, Imperial College London
https://drive.google.com/file/d/1Ve_Jrn9f-4IcYxF2Jz5KrDnl1qE_IRqg/view?usp=drive_open
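Since the slides cover standard Gaussian process regression, a minimal numpy sketch of the textbook posterior equations with an RBF kernel may help; the kernel hyperparameters, noise level, and toy data are illustrative assumptions.

    import numpy as np

    def rbf(a, b, ell=1.0, sf=1.0):
        """Squared-exponential kernel on 1-D inputs."""
        d = a[:, None] - b[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

    X = np.array([-2.0, -1.0, 0.5, 2.0])   # training inputs
    y = np.sin(X)                          # training targets (toy)
    Xs = np.linspace(-3.0, 3.0, 5)         # test inputs
    noise = 1e-2

    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)

    mean = Ks.T @ np.linalg.solve(K, y)                 # posterior mean
    cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)   # posterior covariance
    print(mean)
    print(np.diag(cov))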