Disentangling Disentanglement in Variational Autoencoders
Mathieu et al.: https://proceedings.mlr.press/v97/mathieu19a.html
#ArtificialIntelligence #DeepLearning #VariationalAutoencoders #VAE
PMLR
We develop a generalisation of disentanglement in variational autoencoders (VAEs)—decomposition of the latent representation—characterising it as the fulfilm...
Best of arXiv.org for AI, Machine Learning, and Deep Learning – April 2019 by insidebigdata
https://insidebigdata.com/2019/05/22/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-april-2019/
insideBIGDATA
In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, [...]
#IntelAI Research has 6 paper acceptances at #ICML2019! Find the full list of papers and more here: https://www.intel.ai/icml-2019/
All 1,294 papers at #CVPR2019
Index: https://openaccess.thecvf.com/content_CVPR_2019/html/
#ArtificialIntelligence #DeepLearning #MachineLearning
Practical Deep Learning with Bayesian Principles
Osawa et al.: https://arxiv.org/pdf/1906.02506.pdf
#Bayesian #DeepLearning #PyTorch #VariationalInference
Butterfly Transform: An Efficient FFT Based Neural Architecture Design
Alizadeh et al.: https://arxiv.org/abs/1906.02256
#ArtificialIntelligence #DeepLearning #MachineLearning
arXiv.org
In this paper, we show that extending the butterfly operations from the FFT algorithm to a general Butterfly Transform (BFT) can be beneficial in building an efficient block structure for CNN...
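The butterfly structure the abstract refers to can be sketched in a few lines: instead of one dense n-by-n matrix multiply (O(n^2)), the transform applies log2(n) stages of learned 2x2 mixings between index pairs, for O(n log n) operations. This is an illustrative numpy sketch of the structure, not the paper's code; the function and weight layout are assumptions for exposition.

```python
import numpy as np

def butterfly_transform(x, weights):
    """Apply log2(n) butterfly stages to a length-n vector (n a power of 2).

    weights has shape (log2(n), n, 4): at stage s, each index pair
    (i, j = i ^ 2**s) is mixed by a learned 2x2 matrix [[a, b], [c, d]].
    Total cost is O(n log n) versus O(n^2) for a dense layer.
    """
    n = x.shape[0]
    stages = int(np.log2(n))
    y = x.astype(float).copy()
    for s in range(stages):
        stride = 1 << s
        out = np.empty_like(y)
        for i in range(n):
            j = i ^ stride  # butterfly partner, as in the FFT
            if i < j:
                a, b, c, d = weights[s, i]
                out[i] = a * y[i] + b * y[j]
                out[j] = c * y[i] + d * y[j]
        y = out
    return y
```

With identity 2x2 blocks (a = d = 1, b = c = 0) the transform reduces to the identity map, which is a quick sanity check on the wiring.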
Computational Narrative Intelligence and the Quest for the Great Automatic Grammatizator
Slides by Mark Riedl: https://www.dropbox.com/s/2o8enj7amaxxx1y/naacl-nu-ws.pdf?dl=0
#ArtificialIntelligence #MachineLearning #NaturalLanguageProcessing
A very interesting hypothesis by Yoshua Bengio: https://arxiv.org/abs/1901.10912
Microsoft AI raises the bar in text-to-speech with an "almost unsupervised" setting: training on only 200 paired speech-and-text samples (about 20 minutes of audio) to generate human-sounding speech, reaching a 99.84% word-level intelligibility rate.
Paper: https://arxiv.org/pdf/1905.06791.pdf
Sample: buff.ly/2X885F9
Learning Sparse Networks Using Targeted Dropout
A new research paper from
Aidan N. Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton
https://arxiv.org/abs/1905.13678
@ArtificialIntelligenceArticles
arXiv.org
Neural networks are easier to optimise when they have many more weights than are required for modelling the mapping from inputs to outputs. This suggests a two-stage learning procedure that first...
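The idea behind targeted dropout can be sketched simply: mark the lowest-magnitude fraction of weights as pruning candidates, then drop each candidate stochastically during training so the network learns to be robust to their later removal. A minimal numpy sketch, with parameter names (`gamma` for the candidate fraction, `alpha` for the drop rate) chosen here for illustration rather than taken from the paper's code:

```python
import numpy as np

def targeted_dropout(w, gamma=0.5, alpha=0.5, rng=None):
    """Zero out low-magnitude weights stochastically.

    gamma: fraction of weights (lowest |w|) treated as drop candidates.
    alpha: probability of dropping each candidate this step.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    flat = np.abs(w).ravel()
    k = int(gamma * flat.size)
    if k == 0:
        return w.copy()
    # Magnitude threshold below which weights become candidates.
    threshold = np.partition(flat, k - 1)[k - 1]
    candidates = np.abs(w) <= threshold
    drop = candidates & (rng.random(w.shape) < alpha)
    return np.where(drop, 0.0, w)
```

High-magnitude weights are never touched, so after training the candidate set can be pruned outright with little damage.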
"Automated Speech Generation from UN General Assembly Statements: Mapping Risks in AI Generated Texts"
Bullock et al.: https://arxiv.org/abs/1906.01946
#Computation #Language #AIEthics #AIGovernance #ArtificialIntelligence
@ArtificialIntelligenceArticles
SLIDES
GAUSSIAN PROCESSES
Marc Deisenroth
Department of Computing
Imperial College London
https://drive.google.com/file/d/1Ve_Jrn9f-4IcYxF2Jz5KrDnl1qE_IRqg/view?usp=drive_open
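For readers new to the topic of these slides, GP regression boils down to a closed-form posterior: condition a Gaussian prior over functions on the training data. A minimal numpy sketch with a squared-exponential kernel (the lengthscale and noise values here are illustrative defaults, not taken from the slides):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior mean and pointwise variance (zero prior mean)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    # Posterior mean: k(x*, X) K^-1 y;  covariance: k(x*, x*) - k(x*, X) K^-1 k(X, x*)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)
```

At the training inputs the posterior mean passes (almost) through the targets and the variance collapses toward the noise level, which matches the standard picture in any GP lecture.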
Search engine for computer vision datasets
https://www.visualdata.io/
ICML Accepted Papers have been posted. https://icml.cc/Conferences/2019/AcceptedPapersInitial
Lecture Notes by Andrew Ng : Full Set
https://www.datasciencecentral.com/profiles/blogs/lecture-notes-by-ng-full-set
Data Science Central
The following notes represent a complete, stand-alone interpretation of Stanford's machine learning course presented by Professor Andrew Ng and originally post…
Mission Moon 3-D: A New Perspective on the Space Race https://www.aitribune.com/book/2018111096
Aitribune
Mission Moon 3-D: A New Perspective on the Space Race | AI Tribune
By: David J. Eicher, Brian May
The story of the lunar landing and the events that led up to it, told in text and visually stunning 3-D images.
This paper evaluates some existing methods in the context of computer vision, specifically for identifying different types of objects and predicting how far away an object is in images. The new method is called 3D-BoNet.
Paper: https://www.profillic.com/paper/arxiv:1906.01140
Make music with GANs
GANSynth is a new method for fast generation of high-fidelity audio.
🎵 Examples: https://goo.gl/magenta/gansynth-examples
⏯ Colab: https://goo.gl/magenta/gansynth-demo
📝 Paper: https://goo.gl/magenta/gansynth-paper
💻 Code: https://goo.gl/magenta/gansynth-code
⌨️ Blog: https://magenta.tensorflow.org/gansynth
#artificialintelligence #deeplearning #generativeadversarialnetworks