Deep learning cheatsheets covering the content of Stanford’s CS 230 class.
CNN: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks
RNN: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
TipsAndTricks: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-deep-learning-tips-and-tricks
#cheatsheet #Stanford #dl #cnn #rnn #tipsntricks
🔗 CS 230 - Convolutional Neural Networks Cheatsheet
Teaching page of Shervine Amidi, Graduate Student at Stanford University.
A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain
Computer vision can detect Alzheimer’s disease in brain scans SIX YEARS before a clinical diagnosis. It uses 18F-FDG PET scans, which are common and relatively cheap, and reaches 82% specificity at 100% sensitivity, picking out signs that are hard to see with the naked eye.
Link: https://pubs.rsna.org/doi/10.1148/radiol.2018180958
#CV #DL #Alzheimer #medical
🔗 A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain | Radiology
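For reference on the metrics quoted above, a minimal sketch of how sensitivity and specificity are computed from prediction counts (the counts below are illustrative, not from the paper):

```python
# Sensitivity = TP / (TP + FN): the fraction of true Alzheimer cases flagged.
# Specificity = TN / (TN + FP): the fraction of healthy scans correctly cleared.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# 100% sensitivity at 82% specificity means every positive case is caught,
# at the cost of falsely flagging 18% of the negative scans.
print(sensitivity_specificity(tp=40, fn=0, tn=82, fp=18))  # (1.0, 0.82)
```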
Generalization in Deep Networks: The Role of Distance from Initialization
Why it's important to take the initialization into account when explaining generalization.
ArXiV: https://arxiv.org/abs/1901.01672
#DL #NN
🔗 Generalization in Deep Networks: The Role of Distance from Initialization
Why does training deep neural networks using stochastic gradient descent (SGD) result in a generalization error that does not worsen with the number of parameters in the network? To answer this question, we advocate a notion of effective model capacity that is dependent on a given random initialization of the network and not just the training algorithm and the data distribution. We provide empirical evidence demonstrating that the model capacity of SGD-trained deep networks is in fact restricted through implicit regularization of the $\ell_2$ distance from the initialization. We also provide theoretical arguments that further highlight the need for initialization-dependent notions of model capacity. We leave as open questions how and why distance from initialization is regularized, and whether it is sufficient to explain generalization.
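To make the paper's central quantity concrete, here is a minimal PyTorch sketch (an illustration, not the authors' code) of measuring the $\ell_2$ distance between a network's current weights and a snapshot of its initialization:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Linear(100, 1))
# Snapshot the weights at initialization (clone, so training doesn't mutate it).
init_state = {name: t.clone() for name, t in model.state_dict().items()}

def distance_from_init(model, init_state):
    """l2 distance between the current parameters and their initial values."""
    sq_sum = sum((p.detach() - init_state[name]).pow(2).sum()
                 for name, p in model.named_parameters())
    return sq_sum.sqrt().item()

# ... train with SGD, then track how far the weights have moved:
print(distance_from_init(model, init_state))  # 0.0 before any training
```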
Desnapify
A logical follow-up to the #pix2pix project by Isola et al. Built on the Keras implementation by Thibault de Boissiere, it removes cat/dog faces added by #Snapchat filters from photos.
Github: https://github.com/ipsingh06/ml-desnapify
Mentioned #Keras repo: https://github.com/tdeboissiere/DeepLearningImplementations/tree/master/pix2pix
#DL
🔗 ipsingh06/ml-desnapify
Deep convolutional generative adversarial network (DCGAN) trained to remove Snapchat filters from selfie images.
AutoML: Automating the design of machine learning models for autonomous driving
Link: https://medium.com/waymo/automl-automating-the-design-of-machine-learning-models-for-autonomous-driving-141a5583ec2a
#Waymo #automl #DL #selfdriving #Google
🔗 AutoML: Automating the design of machine learning models for autonomous driving
Through a collaboration with Google AI researchers we’re putting cutting-edge research into practice to automatically generate neural nets.
How I used NLP (Spacy) to screen Data Science Resumes
An example of how #notAIyet tools can ease day-to-day work.
Link: https://towardsdatascience.com/do-the-keywords-in-your-resume-aptly-represent-what-type-of-data-scientist-you-are-59134105ba0d
#NLP #HR #DL
🔗 How I used NLP (Spacy) to screen Data Science Resumes
Do the keywords in your Resume aptly represent what type of Data Scientist you are?
🤓Interesting note on weight decay vs L2 regularization
In short, there was a difference when moving from Caffe (which implements weight decay) to Keras (which implements L2 regularization): the same network architecture with the same hyperparameters produced different results.
Link: https://bbabenko.github.io/weight-decay/
#DL #nn #hyperopt #hyperparams
🔗 weight decay vs L2 regularization
one popular way of adding regularization to deep learning models is to include a weight decay term in the updates. this is the same thing as adding an $L_2$ ...
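To illustrate the pitfall the post describes, a minimal sketch of the two conventions (the numbers are illustrative). For plain SGD they coincide, but only when the coefficients are matched, which is exactly what silently breaks when porting hyperparameters between frameworks:

```python
import numpy as np

lr = 0.1
w = np.array([1.0, -2.0])
grad = np.array([0.5, 0.5])    # gradient of the data loss alone

# (1) Weight decay (Caffe-style): add wd * w directly to the update step.
wd = 0.01
w_decay = w - lr * (grad + wd * w)

# (2) L2 regularization (Keras-style): add l2 * sum(w**2) to the loss,
#     whose gradient is 2 * l2 * w. Note the factor of 2.
l2 = wd / 2
w_l2 = w - lr * (grad + 2 * l2 * w)

print(np.allclose(w_decay, w_l2))  # True -- equivalent here, but reusing the
# same coefficient for both (l2 = wd) would not be, and with adaptive
# optimizers decoupled weight decay and L2 diverge regardless (cf. AdamW).
```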
Implementing a ResNet model from scratch.
A well-written and clearly explained note on how to build and train a ResNet model from the ground up.
Link: https://towardsdatascience.com/implementing-a-resnet-model-from-scratch-971be7193718
#ResNet #DL #CV #nn #tutorial
🔗 Implementing a ResNet model from scratch. – Towards Data Science
A basic description of how ResNet works and a hands-on approach to understanding the state-of-the-art network.
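As a taste of the subject, a minimal residual block in PyTorch (an illustrative sketch of the core idea, not the article's code):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic ResNet block: output = ReLU(F(x) + x), F = conv-BN-ReLU-conv-BN."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection is the key idea

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```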
Understanding Convolutional Neural Networks through Visualizations in PyTorch
An explanation of how #CNN models work, built around activation visualizations.
Link: https://towardsdatascience.com/understanding-convolutional-neural-networks-through-visualizations-in-pytorch-b5444de08b91
#PyTorch #nn #DL
🔗 Understanding Convolutional Neural Networks through Visualizations in PyTorch
Getting down to the nitty-gritty of CNNs
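One common way to get such visualizations is a forward hook that captures a layer's feature maps; a minimal sketch (the model and layer choice are illustrative):

```python
import torch
from torchvision import models

model = models.vgg16(pretrained=True).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # feature maps, ready to plot
    return hook

# Hook the first conv layer (an illustrative choice; any layer works).
model.features[0].register_forward_hook(save_activation("conv1"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # stand-in for a preprocessed image

print(activations["conv1"].shape)  # torch.Size([1, 64, 224, 224])
```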
Project: DeepNLP course
Link: https://github.com/DanAnastasyev/DeepNLP-Course
Description:
Deep learning for NLP crash course at ABBYY. Topics include: sentiment analysis, word embeddings, CNNs, seq2seq with attention and much more. Enjoy!
#ML #DL #NLP #python #abbyy #opensource
🔗 DanAnastasyev/DeepNLP-Course
Deep NLP Course. Contribute to DanAnastasyev/DeepNLP-Course development by creating an account on GitHub.
Mini Course in Deep Learning with #PyTorch for AIMS
#course #DL
https://github.com/Atcold/pytorch-Deep-Learning-Minicourse
🔗 Atcold/pytorch-Deep-Learning-Minicourse
Minicourse in Deep Learning with PyTorch. Contribute to Atcold/pytorch-Deep-Learning-Minicourse development by creating an account on GitHub.
Deep Learning with Python: Develop Deep Learning Models on Theano and TensorFlow Using Keras
#book #keras #DL
📝 5_6133943928459624650.pdf - 💾5 709 397
📹Artificial caricature
Agents learn to draw simplified (artistic?) portraits via trial and error.
Project website: https://learning-to-paint.github.io
ArXiV: https://arxiv.org/abs/1910.01007
#GAN #CelebA #DL
🔗 Unsupervised Doodling and Painting with Improved SPIRAL
We investigate using reinforcement learning agents as generative models of images (extending arXiv:1804.01118). A generative agent controls a simulated painting environment, and is trained with...
Neural networks taught to "read minds" in real time
🔗 Neural networks taught to "read minds" in real time
As part of the NeuroNet NTI Assistive Neurotechnology project, researchers from the Neurobotics group of companies and the Moscow Institute of Physics and Technology have trained neural networks to reconstruct the images a person is viewing from the electrical activity of the brain. No such experiments had previously been performed with EEG data (other scientists used fMRI or read signals directly from neurons). In the future, this result could enable a new type of device for post-stroke rehabilitation.
https://www.biorxiv.org/content/10.1101/787101v2
#AI #ML #DL