François #Chollet is the creator of #Keras, an open-source #DeepLearning library designed to enable fast, user-friendly experimentation with #deepNeuralNetworks. It serves as an interface to several deep learning backends, the most popular of which is #TensorFlow, and it was integrated into TensorFlow's main codebase some time ago. Aside from creating an exceptionally useful and popular library, François is also a world-class #AI researcher and software engineer at #Google, and is definitely an outspoken, if not controversial, personality in the AI world, especially in the realm of ideas around the future of #ArtificialIntelligence. This conversation is part of the Artificial Intelligence podcast.
Link
🔭 @DeepGravity
YouTube
François Chollet: Keras, Deep Learning, and the Progress of AI | Lex Fridman Podcast #38
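As a quick illustration of the fast-experimentation style Keras is known for, here is a minimal sketch of a classifier built with the tf.keras Sequential API (the layer sizes and input shape are arbitrary placeholders, not from the episode):

```python
import tensorflow as tf
from tensorflow import keras

# A small fully-connected classifier for flattened 28x28 images.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# model.fit(x_train, y_train, epochs=5)  # x_train: (n, 784) floats in [0, 1]
```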
Gilbert Strang: #DeepLearning and #NeuralNetworks
Part of Lex Fridman's conversation with Gilbert Strang
Gilbert Strang is a professor of mathematics at #MIT and perhaps one of the most famous and impactful teachers of #math in the world. His MIT OpenCourseWare lectures on linear algebra have been viewed millions of times.
🔭 @DeepGravity
YouTube
Gilbert Strang: Deep Learning and Neural Networks
Full episode with Gilbert Strang (Nov 2019): https://www.youtube.com/watch?v=lEZPfmGCEk0
Semantic Segmentation of Thigh Muscle using 2.5D #DeepLearning Network Trained with Limited Datasets
Purpose: We propose a 2.5D #DeepLearning #NeuralNetwork (#DLNN) to automatically classify thigh muscle into 11 classes and evaluate its classification accuracy against 2D and 3D DLNNs when trained with limited datasets. This enables operator-invariant quantitative assessment of thigh muscle volume change with disease progression. Materials and methods: The retrospective datasets consist of 48 thigh volumes (TV) cropped from CT DICOM images. Cropped volumes were aligned with the femur axis and resampled at 2 mm voxel spacing. The proposed 2.5D DLNN consists of three 2D U-Nets trained on axial, coronal, and sagittal muscle slices, respectively. A voting algorithm was used to combine the outputs of the U-Nets into the final segmentation. The 2.5D U-Net was trained on a PC with 38 TV, and the remaining 10 TV were used to evaluate segmentation accuracy over 10 classes within the thigh. The resulting segmentations of both left and right thighs were de-cropped back to the original CT volume space. Finally, segmentation accuracies were compared between the proposed DLNN and the 2D/3D U-Nets. Results: The average DSC score across all classes with the 2.5D U-Net was 91.18; the mean DSC scores of the 2D and 3D U-Nets were 3.3 and 5.7 points lower, respectively, on the same datasets. Conclusion: We achieved a fast, computationally efficient, and automatic segmentation of thigh muscle into 11 classes with reasonable accuracy, enabling quantitative evaluation of muscle atrophy with disease progression.
Link
🔭 @DeepGravity
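The abstract does not spell out the voting algorithm, but per-voxel majority voting over the three per-orientation label volumes is the simplest plausible scheme. A minimal NumPy sketch under that assumption (the function and variable names are hypothetical):

```python
import numpy as np

def majority_vote(axial, coronal, sagittal):
    """Fuse three per-orientation label volumes by per-voxel majority vote.

    Each argument is an integer label volume of identical shape (D, H, W);
    argmax resolves ties toward the lower class index.
    """
    stacked = np.stack([axial, coronal, sagittal])          # (3, D, H, W)
    n_classes = int(stacked.max()) + 1
    # votes[c] counts how many of the three U-Nets chose class c per voxel
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0).astype(axial.dtype)
```

In practice one might instead average the softmax probability maps before the argmax, which handles ties more gracefully, but the hard-label vote matches the description most directly.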
DeepLearning Academy courses
Applied Deep Learning for Predictive Analytics
Deep Learning with #TensorFlow
#DeepLearning
#Course
🔭 @DeepGravity
Deeplearning-Academy
Courses
Advanced Deep Learning Education and mentoring platform | Learn and practice on real Data Science projects | Get prepared to work as a Deep Learning Engineer.
#DeepLearning models tend to keep improving in accuracy as the amount of training data grows, whereas traditional #MachineLearning models such as #SVM and the Naive #Bayes classifier stop improving after a saturation point.
Link
🔭 @DeepGravity
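That saturation behavior is easy to check empirically with a learning curve. A small scikit-learn sketch (the dataset and kernel choice are illustrative, not from the post):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Cross-validated accuracy as the training set grows.
sizes, train_scores, val_scores = learning_curve(
    SVC(kernel="rbf"), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8),
    cv=5,
)
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} samples -> validation accuracy {score:.3f}")
# A curve that flattens out indicates the model has hit its saturation point.
```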
A very interesting paper by #Harvard University and #OpenAI
#DeepDoubleDescent: Where Bigger Models and More Data Hurt
Abstract
We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
Paper
Related article
#DeepLearning
🔭 @DeepGravity
OpenAI
Deep double descent
We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful…
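The interpolation-threshold peak behind double descent can be reproduced on a toy problem. A hedged NumPy sketch using minimum-norm least squares on random ReLU features, where test error typically spikes as the number of features p approaches the number of training samples n and then falls again (this is an illustration, not the paper's setup, and exact numbers depend on the seed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 2000, 20
w_true = rng.normal(size=d)

def data(n):
    X = rng.normal(size=(n, d))
    return X, X @ w_true + 0.5 * rng.normal(size=n)  # noisy linear target

X_tr, y_tr = data(n_train)
X_te, y_te = data(n_test)

# Sweep the number of random ReLU features -- the "model size" axis.
for p in [10, 50, 90, 100, 110, 150, 300, 1000]:
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    F_tr, F_te = np.maximum(X_tr @ W, 0), np.maximum(X_te @ W, 0)
    # lstsq returns the minimum-norm interpolating solution once p > n
    beta = np.linalg.lstsq(F_tr, y_tr, rcond=None)[0]
    print(f"p={p:5d}  test MSE={np.mean((F_te @ beta - y_te) ** 2):8.3f}")
```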
Free #AI #Resources
Find The Most Updated and Free #ArtificialIntelligence, #MachineLearning, #DataScience, #DeepLearning, #Mathematics, #Python Programming Resources. (Last Update: December 4, 2019)
Link
🔭 @DeepGravity
MarkTechPost
Free AI/ Data Science Resources
Find The Most Updated and Free Artificial Intelligence, Machine Learning, Data Science, Deep Learning, Mathematics, Python, R Programming Resources.
Deciphering interaction fingerprints from protein molecular surfaces using geometric #DeepLearning
Abstract
Predicting interactions between proteins and other biomolecules solely based on structure remains a challenge in biology. A high-level representation of protein structure, the molecular surface, displays patterns of chemical and geometric features that fingerprint a protein’s modes of interactions with other biomolecules. We hypothesize that proteins participating in similar interactions may share common fingerprints, independent of their evolutionary history. Fingerprints may be difficult to grasp by visual analysis but could be learned from large-scale datasets. We present MaSIF (molecular surface interaction fingerprinting), a conceptual framework based on a geometric deep learning method to capture fingerprints that are important for specific biomolecular interactions. We showcase MaSIF with three prediction challenges: protein pocket-ligand prediction, protein–protein interaction site prediction and ultrafast scanning of protein surfaces for prediction of protein–protein complexes. We anticipate that our conceptual framework will lead to improvements in our understanding of protein function and design.
Paper
🔭 @DeepGravity
Nature
Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning
Nature Methods - MaSIF, a deep learning-based method, finds common patterns of chemical and geometric features on biomolecular surfaces for predicting protein–ligand and protein–protein...
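MaSIF's actual geodesic convolutions on surface patches are fairly involved; as a much-simplified illustration of the geometric-deep-learning family it belongs to, here is a toy message-passing step over a surface mesh graph, where each vertex updates its chemical/geometric feature vector from its neighbors (everything here is a hypothetical sketch, not MaSIF code):

```python
import numpy as np

def message_passing_step(features, neighbors, W_self, W_neigh):
    """One toy layer: every surface vertex mixes its own features with the
    mean of its mesh neighbors' features, followed by a ReLU."""
    agg = np.stack([features[nb].mean(axis=0) for nb in neighbors])
    return np.maximum(features @ W_self + agg @ W_neigh, 0.0)

# Tiny usage example: a 4-vertex patch with 5-dimensional features.
rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 5))
adj = [np.array([1, 2]), np.array([0, 3]), np.array([0]), np.array([1])]
out = message_passing_step(feats, adj,
                           rng.normal(size=(5, 5)), rng.normal(size=(5, 5)))
print(out.shape)  # (4, 5)
```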
Yoshua #Bengio: From System 1 #DeepLearning to System 2 Deep Learning (#NeurIPS2019)
YouTube
🔭 @DeepGravity
YouTube
Yoshua Bengio: From System 1 Deep Learning to System 2 Deep Learning (NeurIPS 2019)
This is a combined slide/speaker video of Yoshua Bengio's talk at NeurIPS 2019. Slide-synced non-YouTube version is here: https://slideslive.com/neurips/neur...
Security of #DeepLearning Methodologies: Challenges and Opportunities
Despite the plethora of studies about security vulnerabilities and defenses of deep learning models, security aspects of deep learning methodologies, such as transfer learning, have been rarely studied. In this article, we highlight the security challenges and research opportunities of these methodologies, focusing on vulnerabilities and attacks unique to them.
Paper
🔭 @DeepGravity
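The article stays at the methodology level, but the classic input-level attack against deep models, the fast gradient sign method (FGSM), is a useful reference point for the vulnerabilities discussed. A minimal PyTorch sketch (the model and tensors are placeholders; this is the well-known generic attack, not one of the paper's methodology-specific attacks):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each input pixel in the direction
    that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```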
The year in AI: 2019 #ML / #AI advances recap
It has become somewhat of a tradition for me to do an end-of-year retrospective of advances in AI/ML (see last year's roundup, for example), so here we go again! This year started with a big recognition of the impact of #DeepLearning when #Hinton, #Bengio, and #LeCun were awarded the #Turing Award.
Link
🔭 @DeepGravity
Medium
The year in AI: 2019 ML/AI advances recap
It has become somewhat of a tradition for me to do an end-of-year retrospective of advances in AI/ML (see last year’s round up for…