hey, we have 295 so far, and it is awesome, thank you!
This means that 15% of post viewers filled in the one-question form.
We kindly ask you to fill it in if you haven't yet, because we need 200 more responses to make this poll statistically significant!
We did it! 665 responses collected so far!
Thank you all. Results will be published later as an anonymized CSV (only countries and vote counts).
JAX is NumPy on CPU and GPU, with automatic differentiation and JIT compilation.
Link: https://github.com/google/jax
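A minimal sketch of the two headline features (assuming `jax` is installed; function and variable names are my own):

```python
import jax
import jax.numpy as jnp

def f(x):
    # simple scalar function: f(x) = sum(x^2)
    return jnp.sum(x ** 2)

grad_f = jax.grad(f)   # automatic differentiation
fast_f = jax.jit(f)    # JIT compilation via XLA

x = jnp.array([1.0, 2.0, 3.0])
g = grad_f(x)          # gradient of sum(x^2) is 2x
y = fast_f(x)
```

The same NumPy-style code runs on CPU or GPU, and the transformations compose (e.g. `jax.jit(jax.grad(f))`).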
And the channel statistics, as promised. You can look at the numbers at: https://docs.google.com/spreadsheets/d/1_y6ojxU7svUAmPViqWveUf0yVLqoZyukJTGikcC_5hQ.
And again, thank you for your support!
A Bag of Tricks for Image Classification
1. Large batch size
2. Mini model-tweaks
3. Refined Training Methods
4. Transfer Learning
5. Fancy Data Augmentation
https://link.medium.com/fzJvIBfsJS
#CV #tipsandtricks
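As an illustration of the "refined training methods" category, here is a minimal NumPy sketch of label smoothing, a standard entry in such bags of tricks (whether the linked article covers exactly this one is an assumption):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    # Soften hard one-hot targets: the true class keeps 1 - eps (plus its
    # share of the smoothed mass), and eps is spread uniformly over all classes.
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

targets = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
```

Training with these softened targets discourages the network from becoming over-confident, which often improves generalization.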
Bayesian Optimization in AlphaGo
How the latest AlphaGo agent's win rate was improved from 50% to 66.5%.
ArXiV: https://arxiv.org/abs/1812.06855
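The paper tunes hyper-parameters with Bayesian optimization; a toy 1-D sketch of the idea (a Gaussian-process surrogate plus an upper-confidence-bound acquisition; all names, constants, and the objective are my own, not the paper's) might look like:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    # squared-exponential kernel between 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and stddev at query points Xs given data (X, y)
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def f(x):
    # hidden objective we want to maximize (stand-in for a win rate)
    return np.sin(3 * x) + 0.5 * x

X = np.array([0.1, 0.9])
y = f(X)
grid = np.linspace(0, 1, 200)
for _ in range(10):
    mu, sigma = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * sigma               # optimism in the face of uncertainty
    ucb[np.isin(grid, X)] = -np.inf      # don't re-evaluate known points
    x_next = grid[np.argmax(ucb)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```

Each round spends one (expensive) evaluation where the surrogate is either promising or uncertain, which is exactly why it suits tuning agents whose evaluation means playing many games.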
Facebook has created and now open-sourced Nevergrad, a Python 3 library that claims to make gradient-free optimization easier.
Link: https://code.fb.com/ai-research/nevergrad/
Github: https://github.com/facebookresearch/nevergrad
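Nevergrad packages many derivative-free optimizers, such as (1+1) evolution strategies; here is a self-contained NumPy sketch of that flavor of optimizer (an illustration of the idea, not Nevergrad's actual implementation or API):

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, budget=200, seed=0):
    # minimal (1+1) evolution strategy: mutate the point, keep the better one
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(budget):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = f(cand)
        if fc < fx:                 # success: accept and widen the step
            x, fx, sigma = cand, fc, sigma * 1.5
        else:                       # failure: narrow the step
            sigma *= 0.9
    return x, fx

def sphere(x):
    return float(np.sum(np.square(x)))

best_x, best_f = one_plus_one_es(sphere, [2.0, -1.5])
```

No gradients are ever computed, which is the point: such methods work on noisy, discrete, or non-differentiable objectives where backprop does not apply.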
Smart Compose: Using Neural Networks to Help Write Emails
Google shared some information about their new feature. Most importantly, they claim to focus on fairness and privacy, training on completely anonymized data and trying to eliminate biases.
Link: https://ai.googleblog.com/2018/05/smart-compose-using-neural-networks-to.html
#Google #SmartCompose #FairAI #Privacy
Reproducing high-quality singing voice with state-of-the-art AI technology
An advance in singing voice synthesis. This opens the path toward more interesting collaborations and synthetic-celebrity projects.
P.S. Hatsune Miku will remain popular for her particular qualities, but now there is more room for competitors.
Link: https://www.techno-speech.com/news-20181214a-en
#SOTA #Voice #Synthesis
Deep learning cheatsheets covering the content of Stanford's CS 230 class.
CNN: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks
RNN: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
TipsAndTricks: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-deep-learning-tips-and-tricks
#cheatsheet #Stanford #dl #cnn #rnn #tipsntricks
Overview of the current state of autonomous vehicles by Ben Evans.
A not-so-technical overview of when and where the first autonomous vehicles will become a commodity.
Link: https://www.ben-evans.com/benedictevans/2018/3/26/steps-to-autonomy
Creating super-slow-motion videos by predicting missing frames with a neural network, instead of simple interpolation. With code.
Github: https://github.com/avinashpaliwal/Super-SloMo
Website: https://people.cs.umass.edu/~hzjiang/projects/superslomo/
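For contrast, the "simple interpolation" baseline that the learned approach improves on is just a pixel-wise blend of the two neighboring frames:

```python
import numpy as np

def linear_interpolate(frame_a, frame_b, t=0.5):
    # naive baseline: blend two frames pixel-wise at time t in [0, 1];
    # moving objects ghost instead of actually moving
    return (1.0 - t) * frame_a + t * frame_b

a = np.zeros((4, 4))   # toy "frames"
b = np.ones((4, 4))
mid = linear_interpolate(a, b)
```

Super-SloMo instead predicts intermediate optical flow with a network, so in-between frames show motion rather than cross-fade ghosting.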
Geoff Hinton was named Companion of the Order of Canada (the highest Canadian honor).
Hinton is one of the pioneers of deep learning, the author of the famous "Neural Networks" course on Coursera, and one of the creators of the modern ML industry.
Congratulations!
https://www.gg.ca/en/media/news/2018/governor-general-announces-103-new-appointments-order-canada
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond
New SOTA on cross-lingual transfer (XNLI, MLDoc) and bitext mining (BUCC) using a shared encoder for 93 languages.
Link: https://arxiv.org/abs/1812.10464
#SOTA #NLP
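With a shared encoder, bitext mining essentially boils down to nearest-neighbor search in the joint embedding space; a toy sketch with made-up embeddings (the paper's actual mining criterion is more refined than plain cosine argmax):

```python
import numpy as np

def mine_bitext(src_emb, tgt_emb):
    # match each source sentence to the target with highest cosine similarity
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    return (src @ tgt.T).argmax(axis=1)

# toy "embeddings": the targets are the sources shuffled in order 2, 0, 1
src = np.array([[1.0, 0.1], [0.1, 1.0], [1.0, 1.0]])
tgt = np.array([[0.9, 0.9], [1.0, 0.0], [0.0, 1.0]])
matches = mine_bitext(src, tgt)
```

Because all 93 languages share one space, the same search works across any language pair, including pairs never seen together in training.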
Building Automated Feature Rollouts on Robust Regression Analysis
A nice article on an important topic: the statistical analysis of hypothesis testing. Every new feature, or change to an existing one, is basically an experiment. The article covers how the #Uber team handles this in a live system.
Link: https://eng.uber.com/autonomous-rollouts-regression-analysis/
#Uber #statistics #production #truestory
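At its core, such a rollout check is a two-sample comparison between control and treated metrics; here is a minimal Welch's t-statistic in NumPy (the article's actual method is robust regression, which this sketch does not reproduce; the metric values are simulated):

```python
import numpy as np

def welch_t(a, b):
    # Welch's t statistic: difference in means scaled by the pooled
    # standard error, without assuming equal variances
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(42)
control = rng.normal(1.00, 0.2, size=500)  # hypothetical baseline metric
treated = rng.normal(1.05, 0.2, size=500)  # feature group with a +5% shift
t = welch_t(treated, control)
```

A large |t| flags a real metric shift, which an automated rollout system can use to promote or roll back a feature without a human staring at dashboards.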
Battling Entropy: Making Order of the Chaos in Our Lives
Article on #entropy as a concept.
Link: https://fs.blog/2018/11/entropy/
A disciplined approach to neural network hyper-parameters
Recommendations on how to optimize learning rate, weight decay, momentum and batch size.
ArXiV: https://arxiv.org/pdf/1803.09820.pdf
#nn #hyperopt
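The paper's learning-rate recommendations revolve around cyclical schedules; a plain-Python sketch of a triangular one-cycle schedule (the constants here are illustrative, not the paper's):

```python
def one_cycle_lr(step, total_steps, lr_max=0.1, lr_min=0.001):
    # Triangular one-cycle schedule: ramp linearly from lr_min up to lr_max
    # over the first half of training, then back down over the second half.
    half = total_steps / 2
    if step < half:
        return lr_min + (lr_max - lr_min) * step / half
    return lr_max - (lr_max - lr_min) * (step - half) / half
```

The large mid-training rate acts as regularization, and the paper pairs it with momentum cycled in the opposite direction and a weight decay chosen by grid search.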