Semantic Segmentation of Thigh Muscle using 2.5D #DeepLearning Network Trained with Limited Datasets
Purpose: We propose a 2.5D #DeepLearning #NeuralNetwork (#DLNN) to automatically classify thigh muscle into 11 classes and evaluate its classification accuracy against 2D and 3D DLNNs when trained with limited datasets. This enables operator-invariant quantitative assessment of thigh muscle volume change with disease progression. Materials and methods: The retrospective dataset consists of 48 thigh volumes (TV) cropped from CT DICOM images. Cropped volumes were aligned with the femur axis and resampled to 2 mm voxel spacing. The proposed 2.5D DLNN consists of three 2D U-Nets trained on axial, coronal, and sagittal muscle slices, respectively. A voting algorithm combines the outputs of the U-Nets to create the final segmentation. The 2.5D U-Net was trained on a PC with 38 TV, and the remaining 10 TV were used to evaluate segmentation accuracy for the 10 classes within the thigh. The resulting segmentations of both left and right thighs were de-cropped back to the original CT volume space. Finally, segmentation accuracies were compared between the proposed DLNN and 2D/3D U-Nets. Results: The average DSC across all classes was 91.18 for the 2.5D U-Net; the mean DSC of the 2D U-Net was 3.3 lower, and that of the 3D U-Net 5.7 lower, on the same datasets. Conclusion: We achieved fast, computationally efficient, automatic segmentation of thigh muscle into 11 classes with reasonable accuracy, enabling quantitative evaluation of muscle atrophy with disease progression.
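The fusion step described above can be sketched as a per-voxel majority vote. This is a hypothetical illustration (the function name and the tie-breaking rule are assumptions, not the paper's exact algorithm); each input is a flat list of per-voxel class labels predicted by one of the three plane-specific U-Nets.

```python
from collections import Counter

def fuse_votes(axial, coronal, sagittal):
    """Majority-vote fusion of per-voxel class labels from three 2D
    U-Nets trained on orthogonal planes (illustrative sketch only)."""
    fused = []
    for a, c, s in zip(axial, coronal, sagittal):
        label, count = Counter([a, c, s]).most_common(1)[0]
        # With three voters, a "tie" means all three disagree; fall back
        # to the axial prediction in that case (an assumed tie-break rule).
        fused.append(label if count >= 2 else a)
    return fused
```

In practice the same vote would run over every voxel of the aligned, resampled volumes before de-cropping back to the original CT space.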
Link
🔭 @DeepGravity
DeepLearning Academy courses
Applied Deep Learning for Predictive Analytics
Deep Learning with #TensorFlow
#DeepLearning
#Course
🔭 @DeepGravity
Deeplearning-Academy
Courses
Advanced Deep Learning Education and mentoring platform | Learn and practice on real Data Science projects | Get prepared to work as a Deep Learning Engineer.
#DeepLearning models tend to keep improving in accuracy as the amount of training data grows, whereas traditional #MachineLearning models such as #SVM and the Naive #Bayes classifier stop improving after a saturation point.
Link
🔭 @DeepGravity
A very interesting paper by #Harvard University and #OpenAI
#DeepDoubleDescent: WHERE BIGGER MODELS AND MORE DATA HURT
ABSTRACT
We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
Paper
Related article
#DeepLearning
🔭 @DeepGravity
OpenAI
Deep double descent
We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful…
Free #AI #Resources
Find The Most Updated and Free #ArtificialIntelligence, #MachineLearning, #DataScience, #DeepLearning, #Mathematics, #Python Programming Resources. (Last Update: December 4, 2019)
Link
🔭 @DeepGravity
MarkTechPost
Free AI/ Data Science Resources
Find The Most Updated and Free Artificial Intelligence, Machine Learning, Data Science, Deep Learning, Mathematics, Python, R Programming Resources.
Deciphering interaction fingerprints from protein molecular surfaces using geometric #DeepLearning
Abstract
Predicting interactions between proteins and other biomolecules solely based on structure remains a challenge in biology. A high-level representation of protein structure, the molecular surface, displays patterns of chemical and geometric features that fingerprint a protein’s modes of interactions with other biomolecules. We hypothesize that proteins participating in similar interactions may share common fingerprints, independent of their evolutionary history. Fingerprints may be difficult to grasp by visual analysis but could be learned from large-scale datasets. We present MaSIF (molecular surface interaction fingerprinting), a conceptual framework based on a geometric deep learning method to capture fingerprints that are important for specific biomolecular interactions. We showcase MaSIF with three prediction challenges: protein pocket-ligand prediction, protein–protein interaction site prediction and ultrafast scanning of protein surfaces for prediction of protein–protein complexes. We anticipate that our conceptual framework will lead to improvements in our understanding of protein function and design.
Paper
🔭 @DeepGravity
Nature
Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning
Nature Methods - MaSIF, a deep learning-based method, finds common patterns of chemical and geometric features on biomolecular surfaces for predicting protein–ligand and protein–protein...
Yoshua #Bengio: From System 1 #DeepLearning to System 2 Deep Learning (#NeurIPS2019)
YouTube
🔭 @DeepGravity
YouTube
Yoshua Bengio: From System 1 Deep Learning to System 2 Deep Learning (NeurIPS 2019)
This is a combined slide/speaker video of Yoshua Bengio's talk at NeurIPS 2019. Slide-synced non-YouTube version is here: https://slideslive.com/neurips/neur...
Security of #DeepLearning Methodologies: Challenges and Opportunities
Despite the plethora of studies about security vulnerabilities and defenses of deep learning models, security aspects of deep learning methodologies, such as transfer learning, have been rarely studied. In this article, we highlight the security challenges and research opportunities of these methodologies, focusing on vulnerabilities and attacks unique to them.
Paper
🔭 @DeepGravity
The year in AI: 2019 #ML / #AI advances recap
It has become somewhat of a tradition for me to do an end-of-year retrospective of advances in AI/ML (see last year’s round up for example), so here we go again! This year started with a big recognition to the impact of #DeepLearning when #Hinton, #Bengio, and #Lecun were awarded the #Turing award.
Link
🔭 @DeepGravity
Medium
The year in AI: 2019 ML/AI advances recap
It has become somewhat of a tradition for me to do an end-of-year retrospective of advances in AI/ML (see last year’s round up for…
Our reinforcement-learning architectural designs have just been published in the #NeurIPS2019 AI Art Gallery:
https://lnkd.in/dFZ37BN
It seems to draw like a baby for now, but it is growing and will hopefully become a skillful #RL artist very soon.
#reinforcementlearning #deeplearning #ai #artificialintelligence #art #deepreinforcementlearning #creativeart #neurips
🔭 @DeepGravity
AI Art Gallery
Yuta Akizuki, Mathias Bernhard, Reza Kakooee, Marirena Kladeftira, Benjamin Dillenburger - AI Art Gallery
Generative Modelling with Design Constraints – Reinforcement Learning for Furniture Generation (2019) Generative design has been…
Seven differences between academia and industry for building machine learning and #DeepLearning models
1) Approach to accuracy
2) Training vs serving
3) Emphasis on Engineering
4) Less emphasis on larger models
5) Understanding the baseline
6) Understanding the intricacies of data
7) Focusing on deep learning too early
Link
🔭 @DeepGravity
Datasciencecentral
Seven differences between academia and industry for building machine learning and deep learning models
Academia and industry take different approaches to building machine learning and deep learning models
#TensorFlow 2 Tutorial: Get Started in #DeepLearning With tf.keras
After completing this tutorial, you will know:
The difference between Keras and tf.keras and how to install and confirm TensorFlow is working.
The 5-step life-cycle of tf.keras models and how to use the sequential and functional APIs.
How to develop MLP, CNN, and RNN models with tf.keras for regression, classification, and time series forecasting.
How to use the advanced features of the tf.keras API to inspect and diagnose your model.
How to improve the performance of your tf.keras model by reducing overfitting and accelerating training.
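The 5-step life-cycle named above (define, compile, fit, evaluate, predict) can be sketched with the Sequential API. The layer sizes and the synthetic data here are illustrative choices, not taken from the tutorial:

```python
import numpy as np
import tensorflow as tf

# 1. Define the model (Sequential API)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# 2. Compile: choose optimizer, loss, and metrics
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# 3. Fit on (synthetic) data
X = np.random.rand(64, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")
model.fit(X, y, epochs=5, batch_size=16, verbose=0)

# 4. Evaluate (reusing the training data here only for brevity;
#    a held-out set belongs here in real use)
loss, acc = model.evaluate(X, y, verbose=0)

# 5. Predict on new samples
preds = model.predict(X[:3], verbose=0)
```

The same life-cycle applies unchanged when the Sequential stack is swapped for a functional-API model.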
#Keras
Link
🔭 @DeepGravity
MachineLearningMastery.com
TensorFlow 2 Tutorial: Get Started in Deep Learning with tf.keras - MachineLearningMastery.com
Predictive modeling with deep learning is a skill that modern developers need to know. TensorFlow is the premier open-source deep learning framework developed and maintained by Google. Although using TensorFlow directly can be challenging, the modern tf.keras…
During the last two days, some famous #MachineLearning researchers elucidated their own definition of #DeepLearning. You might check the related links to read full definitions and discussions on each.
Yann LeCun:
#DL is constructing networks of parameterized functional modules & training them from examples using gradient-based optimization. That's it.
This definition is orthogonal to the learning paradigm: reinforcement, supervised, or self-supervised.
https://www.facebook.com/722677142/posts/10156463919392143/
Andriy Burkov:
Looks like in late 2019, people still need a definition of deep learning, so here's mine: deep learning is finding parameters of a nested parametrized non-linear function by minimizing an example-based differentiable cost function using gradient descent.
https://www.linkedin.com/posts/andriyburkov_looks-like-in-late-2019-people-still-need-activity-6615377527147941888-ce68/
François Chollet:
Deep learning refers to an approach to representation learning where your model is a chain of modules (typically a stack / pyramid, hence the notion of depth), each of which could serve as a standalone feature extractor if trained as such.
https://twitter.com/fchollet/status/1210031900695449600
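Burkov's definition can be made concrete in a few lines: a nested non-linear function with two parameters, fit to a single example by gradient descent on a differentiable squared-error cost. This is a toy sketch; the function, learning rate, and target are arbitrary choices for illustration.

```python
import math

def fit(x=1.0, target=0.5, lr=0.1, steps=5000):
    """Fit f(x) = w2 * tanh(w1 * x) to one (x, target) pair by
    gradient descent on the cost 0.5 * (f(x) - target) ** 2."""
    w1, w2 = 0.1, 0.1
    for _ in range(steps):
        h = math.tanh(w1 * x)          # inner module
        y = w2 * h                     # outer module (nested function)
        err = y - target
        # Analytic gradients of the cost w.r.t. each parameter
        dw2 = err * h
        dw1 = err * w2 * (1.0 - h * h) * x
        w1 -= lr * dw1
        w2 -= lr * dw2
    return w2 * math.tanh(w1 * x)
```

Deep learning replaces this two-parameter nest with millions of parameters and automatic differentiation, but the optimization loop is the same.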
Link
🔭 @DeepGravity
Dive into Deep Learning
An interactive #DeepLearning #book with code, math, and discussions, based on the #NumPy interface.
Book
🔭 @DeepGravity
Greetings to all of you, dear friends,
I hope these hard spring days pass soon, with green triumphing over darkness, though the grief will never be forgotten.
To examine the dimensions of the #Corona crisis from a #MachineLearning perspective, I intend, with your help, to set up an online brainstorming session. In the link below you will see several time slots. Please pick the one that suits you best, so we can gather at that time over Zoom or Google Meet. I have tried to spread the options across morning, afternoon, and evening so that, given the time-zone differences, we can find a common slot:
https://doodle.com/poll/69fvgkegwq3y8p6w
The aim of this session is mainly brainstorming and sharing what we know and have. I have prepared two repos myself, which I will explain.
(The goal is not publishing papers or doing commercial work.)
I hope that, alongside the medical teams, we too can be of some help to the country (and perhaps the world) in these circumstances.
If you have any suggestions, please leave them in the comments or send me a private message.
Sincerely,
#ai #computervision #machinelearning #deeplearning #covid19
@Reza
🔭 @DeepGravity
Doodle
Doodle: The COVID-19 Aspects
For Iranians in AI