Neural Networks | Нейронные сети
All about machine learning

For all questions: @notxxx1

A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain

Computer vision can detect Alzheimer's disease in brain scans six years before a clinical diagnosis. The model uses 18F-FDG PET scans, which are common and relatively cheap, and achieves 82% specificity at 100% sensitivity, picking up signs that are hard to see with the naked eye.

Link: https://pubs.rsna.org/doi/10.1148/radiol.2018180958

#CV #DL #Alzheimer #medical

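As a quick refresher on the metrics quoted above, here is a toy sketch of ours (not the paper's code, and with made-up counts):

```python
# Toy illustration, not from the paper: the two metrics quoted above.

def sensitivity(tp, fn):
    # fraction of actual Alzheimer's cases the model flags
    return tp / (tp + fn)

def specificity(tn, fp):
    # fraction of healthy scans the model correctly clears
    return tn / (tn + fp)

# Made-up counts: 100% sensitivity means no missed cases,
# while 82% specificity means 18% of non-cases raise false alarms.
print(sensitivity(tp=40, fn=0))   # 1.0
print(specificity(tn=82, fp=18))  # 0.82
```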
Generalization in Deep Networks: The Role of Distance from Initialization

Why initialization must be taken into account in order to explain generalization in deep networks.

ArXiV: https://arxiv.org/abs/1901.01672

#DL #NN

🔗 Generalization in Deep Networks: The Role of Distance from Initialization
Why does training deep neural networks using stochastic gradient descent (SGD) result in a generalization error that does not worsen with the number of parameters in the network? To answer this question, we advocate a notion of effective model capacity that is dependent on a given random initialization of the network, and not just the training algorithm and the data distribution. We provide empirical evidence demonstrating that the model capacity of SGD-trained deep networks is in fact restricted through implicit regularization of the ℓ2 distance from the initialization. We also provide theoretical arguments that further highlight the need for initialization-dependent notions of model capacity. We leave as open questions how and why distance from initialization is regularized, and whether it is sufficient to explain generalization.
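To make the regularized quantity concrete, here is a toy NumPy sketch of ours (not the paper's code) that tracks the ℓ2 distance of the weights from their random initialization as gradient descent runs on a small linear-regression problem:

```python
import numpy as np

# Toy linear regression trained with gradient descent; we track how far
# the weights travel from their random initialization, the quantity the
# paper argues is implicitly regularized. All names here are ours.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = X @ rng.normal(size=50) + 0.1 * rng.normal(size=200)

w0 = rng.normal(size=50)   # random initialization
w = w0.copy()
lr = 0.01

for step in range(1001):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
    w -= lr * grad
    if step % 200 == 0:
        # l2 distance from initialization, ||w_t - w_0||_2
        print(step, np.linalg.norm(w - w0))
```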
🤓Interesting note on weight decay vs L2 regularization

In short, there was a difference when moving from Caffe (which implements weight decay) to Keras (which implements L2 regularization): the same network architecture with the same set of hyperparameters produced different results.

Link: https://bbabenko.github.io/weight-decay/

#DL #nn #hyperopt #hyperparams

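The subtlety: for plain SGD a weight-decay term wd matches an L2 penalty only with coefficient wd/2, because the gradient of λ‖w‖² is 2λw. A minimal NumPy sketch of ours (illustrative names, not the blog's code):

```python
import numpy as np

# Sketch (ours): with plain SGD, a weight-decay term wd folded into the
# update equals an L2 penalty with coefficient wd / 2, since the gradient
# of lam * ||w||^2 is 2 * lam * w. Mixing up that factor (or a
# framework's convention) changes the effective regularization strength.

lr, wd = 0.1, 1e-2
w = np.ones(4) * 3.0
grad = 2.0 * (w - 1.0)          # gradient of a toy unregularized loss

# Weight decay: shrink the weights by wd inside the update.
w_decay = w - lr * (grad + wd * w)

# L2 regularization: add lam * ||w||^2 to the loss; its gradient is
# 2 * lam * w, so lam = wd / 2 reproduces the weight-decay update exactly.
lam = wd / 2.0
w_l2 = w - lr * (grad + 2.0 * lam * w)

print(np.allclose(w_decay, w_l2))  # True: identical for plain SGD
```

With momentum or adaptive optimizers the two schemes stop being interchangeable even after the factor is fixed (the observation behind decoupled weight decay in AdamW), so a cross-framework port needs to check both the factor and the optimizer.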
Implementing a ResNet model from scratch.

A well-written, clearly explained note on how to build and train a ResNet model from scratch.

Link: https://towardsdatascience.com/implementing-a-resnet-model-from-scratch-971be7193718

#ResNet #DL #CV #nn #tutorial

🔗 Implementing a ResNet model from scratch. – Towards Data Science
A basic description of how ResNet works and a hands-on approach to understanding the state-of-the-art network.
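For a taste of the core idea, here is a minimal residual block in Keras (a sketch of ours under TF2, not the article's exact code): two 3×3 convolutions whose output is added back to the input, with a 1×1 projection when the shape changes.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, stride=1):
    """Basic ResNet block: conv-BN-ReLU-conv-BN plus a skip connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, strides=stride, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    # Project the shortcut with a 1x1 conv when the spatial size or
    # channel count changes, as in the original ResNet paper.
    if stride != 1 or shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=stride, use_bias=False)(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)
    return layers.ReLU()(layers.Add()([y, shortcut]))

# Usage: stack blocks into a small functional model.
inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, filters=32, stride=2)
model = tf.keras.Model(inputs, outputs)
```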
​Project: DeepNLP course
Link: https://github.com/DanAnastasyev/DeepNLP-Course
Description:
A crash course on deep learning for NLP, taught at ABBYY. Topics include sentiment analysis, word embeddings, CNNs, seq2seq with attention, and much more. Enjoy!
#ML #DL #NLP #python #abbyy #opensource

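One of the listed topics in a few lines: a toy NumPy sketch of ours (not course code) of scaled dot-product attention, the mechanism behind seq2seq with attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query scores every key; softmax turns the scores into weights
    # that blend the corresponding values.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))   # 2 decoder positions
K = rng.normal(size=(5, 8))   # 5 encoder positions
V = rng.normal(size=(5, 8))
print(attention(Q, K, V).shape)  # (2, 8)
```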
Deep Learning with Python: Develop Deep Learning Models on Theano and TensorFlow Using Keras
#book #keras #DL

📝 5_6133943928459624650.pdf - 💾 5 709 397 bytes
Neural networks taught to "read minds" in real time

🔗 Neural networks taught to "read minds" in real time
As part of the NeuroNet NTI Assistive Neurotechnology project, researchers from the Neurobotics group of companies and the Moscow Institute of Physics and Technology have trained neural networks to reconstruct the images a person is viewing from the electrical activity of their brain. No such experiments had previously been performed with EEG data (other groups used fMRI or analyzed signals taken directly from neurons). In the future, this result could lead to a new type of device for post-stroke rehabilitation.

https://www.biorxiv.org/content/10.1101/787101v2

#AI #ML #DL