Energy-Based Adversarial Training and Video Prediction, NIPS 2016
By Yann LeCun, Facebook AI Research
YouTube: https://youtu.be/x4sI5qO6O2Y
#DeepLearning #EnergyBasedModels #UnsupervisedLearning
NIPS 2016 Workshop on Adversarial Training. Paper: https://arxiv.org/abs/1609.03126. We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions.
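For reference, the objectives from the paper, where the discriminator D is an autoencoder whose reconstruction error serves as the energy and m is a positive margin:

\mathcal{L}_D(x, z) = D(x) + \max\big(0,\; m - D(G(z))\big), \qquad \mathcal{L}_G(z) = D(G(z)), \quad \text{with } D(x) = \lVert \mathrm{Dec}(\mathrm{Enc}(x)) - x \rVert.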
Unsupervised Separation of Dynamics from Pixels
Silvia Chiappa and Ulrich Paquet : https://arxiv.org/abs/1907.12906
#DeepLearning #MachineLearning #UnsupervisedLearning
We present an approach to learn the dynamics of multiple objects from image sequences in an unsupervised way. We introduce a probabilistic model that first generates noisy positions for each object...
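Read literally, the abstract suggests a two-stage generative structure; a rough state-space sketch (notation mine, not necessarily the paper's), in which each object's latent position follows Markovian dynamics and pixels are rendered from the positions:

z_t^{(i)} \sim p\big(z_t^{(i)} \mid z_{t-1}^{(i)}\big) \;\; \text{(dynamics of object } i\text{)}, \qquad x_t \sim p_\theta\big(x_t \mid z_t^{(1)}, \dots, z_t^{(N)}\big) \;\; \text{(pixel rendering)}.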
U-GAT-IT
Official TensorFlow Implementation : https://github.com/taki0112/UGATIT
#DeepLearning #Tensorflow #UnsupervisedLearning
Official TensorFlow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020).
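The architectural piece named in the title is AdaLIN (Adaptive Layer-Instance Normalization): a learned ratio rho blends instance-normalized and layer-normalized activations, then applies gamma/beta predicted from the attention features. A minimal PyTorch sketch of that idea (the official repo is TensorFlow; shapes, names, and the rho initialization here are illustrative assumptions):

import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    # Illustrative sketch of Adaptive Layer-Instance Normalization:
    # out = gamma * (rho * IN(x) + (1 - rho) * LN(x)) + beta,
    # with rho a learned per-channel ratio constrained to [0, 1].
    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        # rho initialization is an assumption for illustration.
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.9))

    def forward(self, x, gamma, beta):  # x: (N, C, H, W); gamma, beta: (N, C)
        # Instance-norm statistics: per sample, per channel.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer-norm statistics: per sample, across channels and space.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0.0, 1.0)  # keep the blend ratio in [0, 1]
        out = rho * x_in + (1.0 - rho) * x_ln
        return out * gamma[:, :, None, None] + beta[:, :, None, None]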
Compressing BERT for faster prediction
Blog by Sam Sucik : https://blog.rasa.com/compressing-bert-for-faster-prediction-2/
#ArtificialIntelligence #NaturalLanguageProcessing #UnsupervisedLearning
Let's look at compression methods for neural networks, such as quantization and pruning. Then, we apply one to BERT using TensorFlow Lite.
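The TensorFlow Lite route the post mentions boils down to post-training quantization; a minimal sketch of the standard conversion API (the model path and file names are placeholders, not taken from the post):

import tensorflow as tf

# Post-training dynamic-range quantization with TensorFlow Lite.
# "saved_model_dir" is a placeholder for an exported BERT SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("bert_quantized.tflite", "wb") as f:
    f.write(tflite_model)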
"Hamiltonian Neural Networks"
Greydanus et al.: https://arxiv.org/abs/1906.01563
Blog: https://greydanus.github.io/2019/05/15/hamiltonian-nns/
#Hamiltonian #NeuralNetworks #UnsupervisedLearning
Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? In this paper, we draw inspiration from Hamiltonian mechanics to train models that learn and respect exact conservation laws in an unsupervised manner.
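The core idea, in the paper's notation: parameterize the Hamiltonian \mathcal{H}_\theta(q, p) with a neural network and penalize deviations from Hamilton's equations:

\dot{q} = \frac{\partial \mathcal{H}}{\partial p}, \qquad \dot{p} = -\frac{\partial \mathcal{H}}{\partial q}, \qquad \mathcal{L}_{HNN} = \Big\lVert \frac{\partial \mathcal{H}_\theta}{\partial p} - \dot{q} \Big\rVert_2^2 + \Big\lVert \frac{\partial \mathcal{H}_\theta}{\partial q} + \dot{p} \Big\rVert_2^2.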
Visualizing and Measuring the Geometry of BERT
Coenen et al.: https://arxiv.org/abs/1906.02715
#BERT #NaturalLanguageProcessing #UnsupervisedLearning
Transformer architectures show significant promise for natural language processing. Given that a single pretrained model can be fine-tuned to perform well on many different tasks, these networks...
Deep causal representation learning for unsupervised domain adaptation
Moraffah et al.: https://arxiv.org/abs/1910.12417
#DeepLearning #MachineLearning #UnsupervisedLearning
Studies show that the representations learned by deep neural networks can be transferred to similar prediction tasks in other domains for which we do not have enough labeled data. However, as we...
Momentum Contrast for Unsupervised Visual Representation Learning
He et al.: https://arxiv.org/abs/1911.05722
#ArtificialIntelligence #DeepLearning #UnsupervisedLearning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder.
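A condensed PyTorch sketch of the mechanism the abstract describes, modeled on the paper's pseudocode (function and variable names are mine; queue enqueue/dequeue omitted): the key encoder is a momentum-averaged copy of the query encoder, and a queue of past keys supplies negatives for an InfoNCE loss.

import torch
import torch.nn.functional as F

def moco_loss(x_q, x_k, encoder_q, encoder_k, queue, m=0.999, tau=0.07):
    # Momentum update: the key encoder slowly trails the query encoder.
    with torch.no_grad():
        for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
        k = F.normalize(encoder_k(x_k), dim=1)   # keys: (N, C), no gradient
    q = F.normalize(encoder_q(x_q), dim=1)       # queries: (N, C)
    l_pos = (q * k).sum(dim=1, keepdim=True)     # positive logits: (N, 1)
    l_neg = q @ queue                            # negative logits: (N, K); queue: (C, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    # Each query's positive key sits at index 0 of its logit row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)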
The Illustrated GPT-2 (Visualizing Transformer Language Models)
Blog by Jay Alammar : https://jalammar.github.io/illustrated-gpt2/
#ArtificialIntelligence #NLP #UnsupervisedLearning
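The blog's centerpiece is masked (causal) self-attention, the defining constraint of GPT-2's decoder-only architecture; a minimal single-head sketch (dimensions and names are illustrative, not from the blog):

import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    # x: (T, d) token representations; w_*: (d, d) projection matrices.
    # The causal mask lets position t attend only to positions <= t.
    T = x.size(0)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / (k.size(-1) ** 0.5)
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v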
"Fast Task Inference with Variational Intrinsic Successor Features"
Hansen et al.: https://arxiv.org/abs/1906.05030
#DeepLearning #ReinforcementLearning #UnsupervisedLearning