Machine Learning with Python
Learn Machine Learning with hands-on Python tutorials, real-world code examples, and clear explanations for researchers and developers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Auto-Encoder & Backpropagation by hand ✍️ lecture video ~ 📺 https://byhand.ai/cv/10

It took me a few years to develop this method for showing both the forward and backward passes of a non-trivial multi-layer perceptron over a batch of inputs, plus gradient descent over multiple epochs, in a way that lets you hand-calculate each step and code it in Excel at the same time.

= Chapters =
• Encoder & Decoder (00:00)
• Equation (10:09)
• 4-2-4 AutoEncoder (16:38)
• 6-4-2-4-6 AutoEncoder (18:39)
• L2 Loss (20:49)
• L2 Loss Gradient (27:31)
• Backpropagation (30:12)
• Implement Backpropagation (39:00)
• Gradient Descent (44:30)
• Summary (51:39)
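If you want to reproduce the hand calculation in code, here is a minimal NumPy sketch of a 4-2-4 auto-encoder trained with L2 loss and hand-derived backpropagation over a batch and multiple epochs, matching the chapter topics above. The one-hot input batch, linear activations, learning rate, and epoch count are my own assumptions, not taken from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Batch of 4 one-hot inputs: the classic 4-2-4 auto-encoder task
# (reconstruct the input through a 2-unit bottleneck).
X = np.eye(4)

# Encoder (4 -> 2) and decoder (2 -> 4) weights; linear activations
# keep the hand calculation short.
W1 = rng.normal(0, 0.5, (4, 2))
W2 = rng.normal(0, 0.5, (2, 4))

lr, epochs = 0.1, 500
for epoch in range(epochs):
    # Forward pass
    H = X @ W1          # hidden code, shape (4, 2)
    Y = H @ W2          # reconstruction, shape (4, 4)

    # L2 loss and its gradient w.r.t. the output
    loss = 0.5 * np.sum((Y - X) ** 2)
    dY = Y - X          # dL/dY

    # Backward pass (chain rule, by hand)
    dW2 = H.T @ dY      # dL/dW2
    dH = dY @ W2.T      # dL/dH
    dW1 = X.T @ dH      # dL/dW1

    # Gradient descent step
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final L2 loss: {loss:.4f}")
```

With a linear 2-unit bottleneck, the 4 one-hot inputs can never be reconstructed exactly, so the loss settles near its rank-2 minimum rather than at zero.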

#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers


βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
GPU by hand ✍️ I drew this to show how a GPU speeds up an array operation of 8 elements in parallel over 4 threads in 2 clock cycles. Read more 👇

CPU
• It has one core.
• Its global memory has 120 locations (0-119).
• To use the GPU, the CPU copies data from global memory to the GPU.
• After the GPU is done, the results are copied back.

GPU
• It has four cores running four threads (0-3).
• It has a register file of 28 locations (0-27).
• This register file is divided into four banks (0-3).
• All threads share the same register file, but they must read/write through the four banks.
• Each bank allows 2 reads (Read 0, Read 1) and 1 write in a single clock cycle.
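The timing claim can be checked with a toy, cycle-by-cycle simulation in plain Python. The element-wise operation (doubling) and the thread-to-bank assignment below are illustrative assumptions; the 8 elements, 4 lockstep threads, and 2 resulting clock cycles mirror the drawing.

```python
# Hypothetical element-wise op: double each of 8 array elements.
data = [1, 2, 3, 4, 5, 6, 7, 8]
NUM_THREADS = 4
result = [None] * len(data)

cycles = 0
for base in range(0, len(data), NUM_THREADS):   # one clock cycle per chunk
    for tid in range(NUM_THREADS):              # threads run in lockstep
        i = base + tid
        # Thread `tid` reads from and writes to bank `i % 4`, using one of
        # the bank's 2 read ports and its single write port this cycle.
        result[i] = data[i] * 2
    cycles += 1

print(cycles, result)   # 2 cycles for 8 elements across 4 threads
```

Because each thread touches a different bank in a given cycle, no bank exceeds its 2-read/1-write budget, so the whole array is processed in 8 / 4 = 2 cycles.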

#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers


βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Please open Telegram to view this post
VIEW IN TELEGRAM
πŸ‘6❀4
What is torch.nn really?

When I started working with PyTorch, my biggest question was: "What is torch.nn?".


This article explains it quite well.

📌 Read
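As a taste of what the article covers: much of torch.nn's convenience comes from nn.Module tracking parameters (and submodules) for you, so a training loop can simply iterate over model.parameters(). The toy classes below sketch that one idea in plain Python; the names Parameter, Module, and Linear mirror torch.nn, but this is a simplified illustration, not the real implementation.

```python
# A toy, pure-Python sketch of the core idea behind torch.nn.Module:
# a base class that registers parameters so a training loop can ask a
# model for everything it needs to update. Not the real torch.nn API.
class Parameter:
    def __init__(self, value):
        self.value = value  # real torch.nn stores tensors with autograd


class Module:
    def parameters(self):
        # Collect Parameter attributes from this module and, recursively,
        # from any attributes that are themselves Modules.
        for attr in vars(self).values():
            if isinstance(attr, Parameter):
                yield attr
            elif isinstance(attr, Module):
                yield from attr.parameters()


class Linear(Module):
    def __init__(self, n_in, n_out):
        self.w = Parameter([[0.0] * n_out for _ in range(n_in)])
        self.b = Parameter([0.0] * n_out)


class Model(Module):
    def __init__(self):
        self.layer1 = Linear(4, 2)
        self.layer2 = Linear(2, 4)


model = Model()
print(len(list(model.parameters())))  # 4: two weight matrices, two biases
```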

#pytorch #AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers


βœ‰οΈ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
Please open Telegram to view this post
VIEW IN TELEGRAM
❀5
🤖🧠 NVIDIA, MIT, HKU and Tsinghua University Introduce QeRL: A Powerful Quantum Leap in Reinforcement Learning for LLMs

🗓️ 17 Oct 2025
📚 AI News & Trends

The rise of large language models (LLMs) has redefined artificial intelligence, powering everything from conversational AI to autonomous reasoning systems. However, training these models, especially through reinforcement learning (RL), is computationally expensive, requiring massive GPU resources and long training cycles. To address this, a team of researchers from NVIDIA, Massachusetts Institute of Technology (MIT), The ...

#QuantumLearning #ReinforcementLearning #LLMs #NVIDIA #MIT #TsinghuaUniversity
🤖🧠 Agentic Entropy-Balanced Policy Optimization (AEPO): Balancing Exploration and Stability in Reinforcement Learning for Web Agents

🗓️ 17 Oct 2025
📚 AI News & Trends

AEPO (Agentic Entropy-Balanced Policy Optimization) represents a major advancement in the evolution of Agentic Reinforcement Learning (RL). As large language models (LLMs) increasingly act as autonomous web agents – searching, reasoning and interacting with tools – the need for balanced exploration and stability has become crucial. Traditional RL methods often rely heavily on entropy to ...

#AgenticRL #ReinforcementLearning #LLMs #WebAgents #EntropyBalanced #PolicyOptimization
🤖🧠 The Art of Scaling Reinforcement Learning Compute for LLMs: Top Insights from Meta, UT Austin and Harvard University

🗓️ 21 Oct 2025
📚 AI News & Trends

As Large Language Models (LLMs) continue to redefine artificial intelligence, a new research breakthrough has emerged from Meta, The University of Texas at Austin, University College London, UC Berkeley, Harvard University and Periodic Labs. Their paper, titled “The Art of Scaling Reinforcement Learning Compute for LLMs,” introduces a transformative framework for understanding how reinforcement learning ...

#ReinforcementLearning #LLMs #AIResearch #Meta #UTAustin #HarvardUniversity
All assignments for the #Stanford course “The Modern Software Developer” are now available online.

This is the first full-fledged university course that covers how code-generative #LLMs are changing every stage of the development lifecycle. The assignments are designed to take you from a beginner to a confident expert in using AI to boost productivity in development.

Enjoy your studies! ✌️
https://github.com/mihail911/modern-software-dev-assignments

https://t.iss.one/CodeProgrammer
❀6πŸ‘4
Forwarded from Learn Python Hub
Learn how LLMs work in less than 10 minutes
And honestly? This is probably the best visualization of #LLMs ever made.

https://t.iss.one/Python53
⚡️ All cheat sheets for programmers in one place.

There's a lot of useful stuff inside: short, clear tips on languages, technologies, and frameworks.

No registration required and it's free.

https://overapi.com/

#python #php #Database #DataAnalysis #MachineLearning #AI #DeepLearning #LLMS

https://t.iss.one/CodeProgrammer ⚡️
Forwarded from Data Analytics
These 9 lectures from Stanford are a pure goldmine for anyone wanting to learn and understand LLMs in depth

Lecture 1 - Transformer: https://lnkd.in/dGnQW39t

Lecture 2 - Transformer-Based Models & Tricks: https://lnkd.in/dT_VEpVH

Lecture 3 - Transformers & Large Language Models: https://lnkd.in/dwjjpjaP

Lecture 4 - LLM Training: https://lnkd.in/dSi_xCEN

Lecture 5 - LLM tuning: https://lnkd.in/dUK5djpB

Lecture 6 - LLM Reasoning: https://lnkd.in/dAGQTNAM

Lecture 7 - Agentic LLMs: https://lnkd.in/dWD4j7vm

Lecture 8 - LLM Evaluation: https://lnkd.in/ddxE5zvb

Lecture 9 - Recap & Current Trends: https://lnkd.in/dGsTd8jN

Start understanding #LLMs in depth from the experts. Go through each video step by step.

https://t.iss.one/DataAnalyticsX 🔗