Lifelong GAN: Continual Learning for Conditional Image Generation
Zhai et al.: https://arxiv.org/abs/1907.10107
#deeplearning #generativemodels #GAN
A paper posted online this month has settled a nearly 30-year-old conjecture about the structure of the fundamental building blocks of computer circuits. This “sensitivity” conjecture has stumped many of the most prominent computer scientists over the years, yet the new proof is so simple that one researcher summed it up in a single tweet.
“This conjecture has stood as one of the most frustrating and embarrassing open problems in all of combinatorics and theoretical computer science,” wrote Scott Aaronson of the University of Texas, Austin, in a blog post. “The list of people who tried to solve it and failed is like a who’s who of discrete math and theoretical computer science,” he added in an email.
The conjecture concerns Boolean functions, rules for transforming a string of input bits (0s and 1s) into a single output bit. One such rule is to output a 1 provided any of the input bits is 1, and a 0 otherwise; another rule is to output a 0 if the string has an even number of 1s, and a 1 otherwise. Every computer circuit is some combination of Boolean functions, making them “the bricks and mortar of whatever you’re doing in computer science,” said Rocco Servedio of Columbia University.
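The two example rules above are easy to write out directly, along with the "sensitivity" measure the conjecture is about: for a given input, how many single-bit flips change the output. A small self-contained Python sketch (function names are mine, for illustration):

```python
from itertools import product

def or_rule(bits):
    """Output 1 if any input bit is 1, else 0."""
    return 1 if any(bits) else 0

def parity_rule(bits):
    """Output 0 if the string has an even number of 1s, else 1."""
    return sum(bits) % 2

def sensitivity_at(f, bits):
    """Number of single-bit flips that change f's output at this input."""
    base = f(bits)
    count = 0
    for i in range(len(bits)):
        flipped = list(bits)
        flipped[i] ^= 1  # flip bit i
        if f(flipped) != base:
            count += 1
    return count

def sensitivity(f, n):
    """Sensitivity of f: the max over all n-bit inputs."""
    return max(sensitivity_at(f, list(x)) for x in product([0, 1], repeat=n))

print(sensitivity(or_rule, 3))      # 3: at input 000, flipping any bit changes OR
print(sensitivity(parity_rule, 3))  # 3: flipping any bit always changes parity
```

Note that at input 110 the OR rule has sensitivity 0 (no single flip changes the output), which is why sensitivity is defined as a maximum over inputs.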
Read the full article for the solution:
https://www.quantamagazine.org/mathematician-solves-computer-science-conjecture-in-two-pages-20190725/
Quanta Magazine: Decades-Old Computer Science Conjecture Solved in Two Pages
Self-Supervised Learning for the win!
Literally. For winning cash.
FASSL is not facile.
https://sites.google.com/view/fb-ssl-challenge-iccv19/home
The Facebook AI self-supervision learning challenge (FASSL) aims to benchmark self-supervised visual representations on a diverse set of tasks and datasets using a standardized transfer learning setup.
In this first iteration, we base our challenge…
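A minimal sketch of what such a standardized transfer setup looks like, assuming the usual frozen-backbone, linear-evaluation recipe (the backbone, toy task, and all names below are invented for illustration, not code from the challenge):

```python
# Standardized transfer evaluation, in miniature: the self-supervised
# backbone is FROZEN, and only a linear classifier is trained on its features.

def backbone(x):
    """Stand-in for a frozen, pretrained feature extractor (never updated)."""
    return [x, x * x]

# Toy downstream task: is |x| > 0.5?  Nonlinear in x, but linear in the features.
xs = [i / 5 for i in range(-5, 6)]                      # -1.0, -0.8, ..., 1.0
data = [(backbone(x), 1 if abs(x) > 0.5 else 0) for x in xs]

# Linear head trained with the perceptron rule; backbone weights never change.
w, b = [0.0, 0.0], 0.0
for _ in range(500):
    for feats, y in data:
        pred = 1 if sum(wi * f for wi, f in zip(w, feats)) + b > 0 else 0
        if pred != y:
            w = [wi + (y - pred) * f for wi, f in zip(w, feats)]
            b += y - pred

acc = sum((1 if sum(wi * f for wi, f in zip(w, feats)) + b > 0 else 0) == y
          for feats, y in data) / len(data)
print(f"linear-eval accuracy: {acc:.2f}")  # 1.00 on this separable toy task
```

The point of the setup is that the quality of the frozen features, not the head, determines downstream accuracy, which makes different self-supervised methods comparable.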
The Nature of Machine Learning (evolution vs. experience vs. abstract) - Video by Art of the Problem
https://www.youtube.com/watch?v=yLAwDEfzqRw
YouTube: What Is Machine Learning?
The root of intelligence is learning. Follow the progression of evolutionary, experiential, and abstract learning, forming the bedrock of intelligence. It provides insight into various learning paradigms including unsupervised learning, supervised learning…
Release of 27 pretrained models for NLP / NLU for PyTorch
Hugging Face open-sources a new library containing 27 pretrained models for state-of-the-art NLP/NLU tasks.
Link: https://medium.com/dair-ai/pytorch-transformers-for-state-of-the-art-nlp-3348911ffa5b
#SOTA #NLP #NLU #PyTorch #opensource
Medium: PyTorch Transformers for state-of-the-art NLP
Approximate Bayesian inference for a "steps and turns" continuous-time random walk observ... arxiv.org/abs/1907.10115
Lifelong GAN: Continual Learning for Conditional Image Generation. arxiv.org/abs/1907.10107
Exploring Factors for Improving Low Resolution Face Recognition. arxiv.org/abs/1907.10104
Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein... arxiv.org/abs/1907.10087
New Open Source GPU-Accelerated Atari Emulator for Reinforcement Learning Now Available
https://news.developer.nvidia.com/new-open-source-gpu-accelerated-atari-emulator-for-reinforcement-learning-now-available/
Meet Your AI-Generated Dream Anime Girl
https://medium.com/syncedreview/meet-your-ai-generated-dream-anime-girl-5dc149c5eab4
Do you dream of Asuna Yuuki? Do you long to escape to a fantasy world with a beautiful anime partner? If so there’s a new artificial…
New fast.ai course: A Code-First Introduction to Natural Language Processing
https://www.fast.ai/2019/07/08/fastai-nlp/
Github: https://github.com/fastai/course-nlp
Videos: https://www.youtube.com/playlist?list=PLtmWHNX-gukKocXQOkQjuVxglSDYWsSh9
Artificial intelligence can generate interesting story endings. The model picks out important phrases of a story and uses them to create more “diverse” endings.
Great work on WriterForcing by Carnegie Mellon University, which takes Seq2Seq models to the next level.
Read arxiv.org/pdf/1907.08259
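One simple way to picture steering generation toward important story phrases is attention logits that get a bonus at keyphrase positions. This is an illustrative sketch of that idea only, not the paper's exact mechanism; the bias scheme and names like `keyphrase_biased_attention` are my assumptions:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def keyphrase_biased_attention(scores, keyphrase_positions, bias=2.0):
    """Add a fixed bonus to attention logits at keyphrase positions."""
    boosted = [s + (bias if i in keyphrase_positions else 0.0)
               for i, s in enumerate(scores)]
    return softmax(boosted)

tokens = ["the", "dragon", "guarded", "the", "golden", "hoard"]
scores = [0.1, 0.5, 0.2, 0.1, 0.4, 0.3]   # raw attention logits
keys = {1, 5}                             # "dragon", "hoard" flagged as keyphrases

plain = softmax(scores)
biased = keyphrase_biased_attention(scores, keys)
print(biased[1] > plain[1])  # True: the keyphrase gets more attention mass
```

Pushing attention mass toward story keyphrases is what discourages the model from falling back on generic, safe endings.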
A Neural Network Based On-device Learning Anomaly Detector for Edge Devices. arxiv.org/abs/1907.10147
Uniform convergence may be unable to explain generalization in deep learning
Vaishnavh Nagarajan and J. Zico Kolter: https://arxiv.org/abs/1902.04742
Code: https://locuslab.github.io/2019-07-09-uniform-convergence/
#deeplearning #machinelearning #neuralnetworks
A Fine-Grained Spectral Perspective on Neural Networks
Greg Yang and Hadi Salman: https://arxiv.org/abs/1907.10599
Compute eigenvalues: https://github.com/thegregyang/NNspectra
#MachineLearning #NeuralComputing #EvolutionaryComputing
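The linked repo is for computing eigenvalues of the kernels the paper studies. As a generic illustration of that kind of computation (not the paper's method), here is a dependency-free power-iteration sketch for the dominant eigenvalue of a small symmetric "kernel" matrix; the toy matrix is mine:

```python
import math

def power_iteration(A, iters=200):
    """Dominant eigenvalue of a symmetric matrix A via power iteration."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        # Multiply by A, then renormalize so v tracks the top eigenvector.
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient v^T A v recovers the eigenvalue (v is unit-norm).
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(vi * avi for vi, avi in zip(v, Av))

# Toy symmetric kernel matrix; its true eigenvalues are 3 and 1.
K = [[2.0, 1.0],
     [1.0, 2.0]]
print(round(power_iteration(K), 6))  # 3.0
```

The spectrum of such kernels is what the paper uses to characterize which functions a wide network learns easily.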
Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP
Yu et al.: https://arxiv.org/abs/1906.02768
#nlp #neuralnetwork #reinforcementlearning #neuralnetworks
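Lottery-ticket work typically finds its winning subnetworks by magnitude pruning: keep the largest-magnitude weights, zero out the rest, and rewind the survivors to their initial values before retraining. A minimal sketch of the masking step, assuming that standard recipe (not code from this paper):

```python
def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask keeping the (1 - sparsity) fraction of largest-|w| weights."""
    k = int(len(weights) * (1 - sparsity))  # number of weights to keep
    kept = set(sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k])
    return [1 if i in kept else 0 for i in range(len(weights))]

weights = [0.9, -0.05, 0.4, -0.7, 0.01, 0.3]
mask = magnitude_mask(weights, sparsity=0.5)        # prune half the weights
pruned = [w if m else 0.0 for w, m in zip(weights, mask)]
print(mask)    # [1, 0, 1, 1, 0, 0]
print(pruned)  # [0.9, 0.0, 0.4, -0.7, 0.0, 0.0]
```

In practice the mask is applied per layer and the prune-rewind-retrain loop is repeated several times to reach high sparsity.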