Machine Learning
Machine learning insights, practical tutorials, and clear explanations for beginners and aspiring data scientists. Follow the channel for models, algorithms, coding guides, and real-world ML applications.

Admin: @HusseinSheikho || @Hussein_Sheikho
# πŸ“š PyTorch Tutorial for Beginners - Part 2/6: Deep Neural Networks & Training Techniques
#PyTorch #DeepLearning #MachineLearning #NeuralNetworks #Training

Welcome to Part 2 of our comprehensive PyTorch series! This lesson dives deep into building and training neural networks, covering architectures, activation functions, optimization, and more.

---

## πŸ”Ή Recap & Setup
```python
import torch
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader, TensorDataset

# Check GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Using device: {device}")
```


---

## πŸ”Ή Deep Neural Network (DNN) Architecture
### 1. Key Components
| Component | Purpose | PyTorch Implementation |
|--------------------|-------------------------------------------------------------------------|------------------------------|
| Input Layer | Receives raw features | nn.Linear(input_dim, hidden_dim) |
| Hidden Layers | Learn hierarchical representations | Multiple nn.Linear + Activation |
| Output Layer | Produces final predictions | nn.Linear(hidden_dim, output_dim) |
| Activation | Introduces non-linearity | nn.ReLU(), nn.Sigmoid(), etc. |
| Loss Function | Measures prediction error | nn.MSELoss(), nn.CrossEntropyLoss() |
| Optimizer | Updates weights to minimize loss | optim.SGD(), optim.Adam() |
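
To make the table concrete, here is a minimal sketch of how these components chain together in a single forward pass (the layer sizes are illustrative assumptions, reusing the imports from the setup above):

```python
layer_in = nn.Linear(10, 64)     # input layer: 10 features -> 64 hidden units
act = nn.ReLU()                  # non-linearity between layers
layer_out = nn.Linear(64, 1)     # output layer: 64 hidden units -> 1 prediction

x = torch.randn(8, 10)           # a batch of 8 samples, 10 features each
pred = layer_out(act(layer_in(x)))
print(pred.shape)                # torch.Size([8, 1])
```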

### 2. Building a DNN
```python
class DNN(nn.Module):
    def __init__(self, input_size, hidden_sizes, output_size):
        super().__init__()
        layers = []

        # Hidden layers
        prev_size = input_size
        for hidden_size in hidden_sizes:
            layers.append(nn.Linear(prev_size, hidden_size))
            layers.append(nn.ReLU())
            prev_size = hidden_size

        # Output layer (no activation for regression)
        layers.append(nn.Linear(prev_size, output_size))

        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Example: 3-layer network (input=10, hidden=[64,32], output=1)
model = DNN(10, [64, 32], 1).to(device)
print(model)
```
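
With the model defined, the loss function and optimizer from the components table come together in a training loop. Here is a minimal sketch on synthetic regression data (X, y, and the hyperparameters are illustrative placeholders), using the DataLoader and TensorDataset imported in the setup:

```python
# Placeholder dataset: 500 samples, 10 features, 1 regression target
X = torch.randn(500, 10)
y = torch.randn(500, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()              # clear gradients from the previous step
        loss = criterion(model(xb), yb)    # forward pass + loss
        loss.backward()                    # backpropagation
        optimizer.step()                   # weight update
    print(f"Epoch {epoch+1}: loss = {loss.item():.4f}")
```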


---

## πŸ”Ή Activation Functions
### 1. Common Choices
| Activation | Formula | Range | Use Case | PyTorch |
|-----------------|----------------------------------|------------|------------------------------|--------------------|
| ReLU | max(0, x) | [0, ∞) | Hidden layers | nn.ReLU() |
| Leaky ReLU | max(0.01x, x) | (-∞, ∞) | Avoid dead neurons | nn.LeakyReLU() |
| Sigmoid | 1 / (1 + e^(-x)) | (0, 1) | Binary classification | nn.Sigmoid() |
| Tanh | (e^x - e^(-x)) / (e^x + e^(-x)) | (-1, 1) | RNNs, some hidden layers | nn.Tanh() |
| Softmax | e^(x_i) / Ξ£_j e^(x_j) | (0, 1) | Multi-class classification | nn.Softmax(dim=1) |
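
A practical pairing note: for classification, the final layer usually outputs raw logits, because nn.CrossEntropyLoss applies log-softmax internally (and nn.BCEWithLogitsLoss fuses the sigmoid), so an explicit Softmax is only needed when you want probabilities. A minimal sketch:

```python
logits = torch.randn(8, 3)                     # raw scores: batch of 8, 3 classes
targets = torch.randint(0, 3, (8,))            # integer class labels

loss = nn.CrossEntropyLoss()(logits, targets)  # no explicit softmax needed here
probs = nn.Softmax(dim=1)(logits)              # softmax only to report probabilities
```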

### 2. Visual Comparison
```python
x = torch.linspace(-5, 5, 100)
activations = {
    "ReLU": nn.ReLU()(x),
    "LeakyReLU": nn.LeakyReLU(0.1)(x),
    "Sigmoid": nn.Sigmoid()(x),
    "Tanh": nn.Tanh()(x)
}

plt.figure(figsize=(12, 4))
for i, (name, y) in enumerate(activations.items()):
    plt.subplot(1, 4, i + 1)
    plt.plot(x.numpy(), y.numpy())
    plt.title(name)
plt.tight_layout()
plt.show()
```


---
πŸ”₯2πŸ‘1
🌟 Vision Transformer (ViT) Tutorial – Part 1: From CNNs to Transformers – The Revolution in Computer Vision

Let's start: https://hackmd.io/@husseinsheikho/vit-1

#VisionTransformer #ViT #DeepLearning #ComputerVision #Transformers #AI #MachineLearning #NeuralNetworks #ImageClassification #AttentionIsAllYouNeed

βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

πŸ“± Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
❀3πŸ‘1
πŸ“˜ Ultimate Guide to Graph Neural Networks (GNNs): Part 1 β€” Foundations of Graph Theory & Why GNNs Revolutionize AI

Duration: ~45 minutes reading time | Comprehensive beginner-to-advanced introduction

Let's start: https://hackmd.io/@husseinsheikho/GNN-1

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #PyTorchGeometric #NodeClassification #LinkPrediction #GraphRepresentation #AIforBeginners #AdvancedAI

πŸ“˜ Ultimate Guide to Graph Neural Networks (GNNs): Part 2 β€” The Message Passing Framework: Mathematical Heart of All GNNs

Duration: ~60 minutes reading time | Comprehensive deep dive into the core mechanism powering modern GNNs

Let's study: https://hackmd.io/@husseinsheikho/GNN-2

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #PyTorchGeometric #MessagePassing #GraphAlgorithms #NodeClassification #LinkPrediction #GraphRepresentation #AIforBeginners #AdvancedAI

βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

πŸ“± Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Please open Telegram to view this post
VIEW IN TELEGRAM
❀3🀩1
πŸ“• Ultimate Guide to Graph Neural Networks (GNNs): Part 3 β€” Advanced GNN Architectures: Transformers, Temporal Networks & Geometric Deep Learning

Duration: ~60 minutes reading time | Comprehensive deep dive into cutting-edge GNN architectures

πŸ†˜ Read: https://hackmd.io/@husseinsheikho/GNN-3

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #PyTorchGeometric #GraphTransformers #TemporalGNNs #GeometricDeepLearning #AdvancedGNNs #AIforBeginners #AdvancedAI


βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

πŸ“± Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Please open Telegram to view this post
VIEW IN TELEGRAM
❀1
πŸ“˜ Ultimate Guide to Graph Neural Networks (GNNs): Part 4 β€” GNN Training Dynamics, Optimization Challenges, and Scalability Solutions

Duration: ~45 minutes reading time | Comprehensive guide to training GNNs effectively at scale

Part 4-A: https://hackmd.io/@husseinsheikho/GNN4-A

Part 4-B: https://hackmd.io/@husseinsheikho/GNN4-B

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #PyTorchGeometric #GNNOptimization #ScalableGNNs #TrainingDynamics #AIforBeginners #AdvancedAI


βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

πŸ“± Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Please open Telegram to view this post
VIEW IN TELEGRAM
❀4πŸ‘Ž1
πŸ“˜ Ultimate Guide to Graph Neural Networks (GNNs): Part 5 β€” GNN Applications Across Domains: Real-World Impact in 30 Minutes

Duration: ~30 minutes reading time | Practical guide to GNN applications with concrete ROI metrics

Link: https://hackmd.io/@husseinsheikho/GNN-5

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #RealWorldApplications #HealthcareAI #FinTech #DrugDiscovery #RecommendationSystems #ClimateAI

βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

πŸ“± Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Please open Telegram to view this post
VIEW IN TELEGRAM
❀5
πŸ“˜ Ultimate Guide to Graph Neural Networks (GNNs): Part 6 β€” Advanced Frontiers, Ethics, and Future Directions

Duration: ~50 minutes reading time | Cutting-edge insights on where GNNs are headed

Let's read: https://hackmd.io/@husseinsheikho/GNN-6

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #FutureOfGNNs #EmergingResearch #EthicalAI #GNNBestPractices #AdvancedAI #50MinuteRead

βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk

πŸ“± Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
Please open Telegram to view this post
VIEW IN TELEGRAM
❀4
πŸ“˜ Ultimate Guide to Graph Neural Networks (GNNs): Part 7 β€” Advanced Implementation, Multimodal Integration, and Scientific Applications

Duration: ~60 minutes reading time | Deep dive into cutting-edge GNN implementations and applications

Read: https://hackmd.io/@husseinsheikho/GNN7

#GraphNeuralNetworks #GNN #MachineLearning #DeepLearning #AI #NeuralNetworks #DataScience #GraphTheory #ArtificialIntelligence #AdvancedGNNs #MultimodalLearning #ScientificAI #GNNImplementation #60MinuteRead

βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Please open Telegram to view this post
VIEW IN TELEGRAM
❀2
Please open Telegram to view this post
VIEW IN TELEGRAM
❀7
✨ NeRFs Explained: Goodbye Photogrammetry? ✨

πŸ“– Table of Contents: NeRFs Explained: Goodbye Photogrammetry? Β· How Do NeRFs Work? Β· Block #A: We Begin with a 5D Input Β· Block #B: The Neural Network and Its Output Β· Block #C: Volumetric Rendering Β· The NeRF Problem and Evolutions Β· Summary and Next Steps …

🏷️ #3DComputerVision #3DReconstruction #DeepLearning #NeuralNetworks #Photogrammetry #Tutorial
✨ Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL) ✨

πŸ“– Table of Contents: Adversarial Learning with Keras and TensorFlow (Part 3): Exploring Adversarial Attacks Using Neural Structured Learning (NSL) Β· Introduction to Advanced Adversarial Techniques in Machine Learning Β· Harnessing NSL for Robust Model Training: Insights from Part 2 Β· Deep Dive into …

🏷️ #AdversarialLearning #DeepLearning #ImageProcessing #Keras #MachineLearning #NeuralNetworks #NeuralStructuredLearning #TensorFlow #Tutorial
πŸ€–πŸ§  The Little Book of Deep Learning – A Complete Summary and Chapter-Wise Overview

πŸ—“οΈ 08 Oct 2025
πŸ“š AI News & Trends

In the ever-evolving world of Artificial Intelligence, deep learning continues to be the driving force behind breakthroughs in computer vision, speech recognition and natural language processing. For those seeking a clear, structured and accessible guide to understanding how deep learning really works, β€œThe Little Book of Deep Learning” by FranΓ§ois Fleuret is a gem. This ...

#DeepLearning #ArtificialIntelligence #MachineLearning #NeuralNetworks #AIGuides #FrancoisFleuret
πŸ“Œ I Measured Neural Network Training Every 5 Steps for 10,000 Iterations

πŸ—‚ Category: MACHINE LEARNING

πŸ•’ Date: 2025-11-15 | ⏱️ Read time: 9 min read

A deep dive into the mechanics of neural network training. This detailed analysis meticulously measures key training metrics every 5 steps over 10,000 iterations, providing a high-resolution view of the learning process. The findings offer granular insights into model convergence and the subtle dynamics often missed by standard monitoring, making it a valuable read for ML practitioners and researchers seeking to better understand how models learn.
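
The article's exact instrumentation isn't reproduced here, but the core pattern it describes (recording metrics every few steps instead of once per epoch) looks roughly like this illustrative sketch with a stand-in model:

```python
import torch
import torch.nn as nn

# Stand-in model and data, purely for illustration
model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
X, y = torch.randn(256, 10), torch.randn(256, 1)

history = []
for step in range(10_000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    if step % 5 == 0:  # measure every 5 steps, before the update
        grad_norm = torch.cat([p.grad.flatten() for p in model.parameters()]).norm()
        history.append({"step": step, "loss": loss.item(), "grad_norm": grad_norm.item()})
    opt.step()
```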

#NeuralNetworks #MachineLearning #DeepLearning #DataAnalysis #ModelTraining
πŸ“Œ Neural Networks Are Blurry, Symbolic Systems Are Fragmented. Sparse Autoencoders Help Us Combine Them.

πŸ—‚ Category: DEEP LEARNING

πŸ•’ Date: 2025-11-27 | ⏱️ Read time: 17 min read

Neural networks and symbolic AI models compress information in fundamentally different ways, leading to "blurry" continuous representations versus "fragmented" discrete ones. Sparse Autoencoders (SAEs) offer a promising bridge between these two paradigms. By learning sparse, interpretable features from the dense activations within neural networks, SAEs can help translate continuous data into more structured, symbolic-like components. This approach aims to combine the robust pattern recognition of neural systems with the logical reasoning capabilities of symbolic AI, advancing the quest for more understandable and capable models.
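
To ground the idea, here is a minimal, illustrative sparse-autoencoder sketch in PyTorch (not the article's code; the dimensions and L1 penalty weight are arbitrary assumptions):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: re-expresses dense activations through a wider, sparse code."""
    def __init__(self, act_dim=512, code_dim=2048):
        super().__init__()
        self.encoder = nn.Linear(act_dim, code_dim)
        self.decoder = nn.Linear(code_dim, act_dim)

    def forward(self, acts):
        code = torch.relu(self.encoder(acts))   # non-negative, encouraged to be sparse
        return self.decoder(code), code

sae = SparseAutoencoder()
acts = torch.randn(64, 512)                      # stand-in for dense model activations
recon, code = sae(acts)
# Reconstruction loss plus an L1 penalty that pushes most features to zero
loss = nn.functional.mse_loss(recon, acts) + 1e-3 * code.abs().mean()
```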

#SparseAutoencoders #AIInterpretability #NeuralNetworks #SymbolicAI #NeuroSymbolic