Auto-Encoder & Backpropagation by hand ✍️ lecture video ~ 📺 https://byhand.ai/cv/10
It took me a few years to invent this method for showing both the forward and backward passes of a non-trivial multi-layer perceptron over a batch of inputs, plus gradient descent over multiple epochs, in a way that lets you hand-calculate every step and code it in Excel at the same time.
= Chapters =
• Encoder & Decoder (00:00)
• Equation (10:09)
• 4-2-4 AutoEncoder (16:38)
• 6-4-2-4-6 AutoEncoder (18:39)
• L2 Loss (20:49)
• L2 Loss Gradient (27:31)
• Backpropagation (30:12)
• Implement Backpropagation (39:00)
• Gradient Descent (44:30)
• Summary (51:39)
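The chapters walk from the 4-2-4 architecture through L2 loss, its gradient, backpropagation, and gradient descent. As a rough companion to the hand/Excel calculations, here is a minimal NumPy sketch of the same kind of 4-2-4 linear autoencoder trained with L2 loss and hand-derived gradients; the batch, initial weights, learning rate, and epoch count are illustrative assumptions, not the lecture's exact numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative batch: 6 samples with 4 features each (not the lecture's numbers).
X = rng.normal(size=(6, 4))

# 4-2-4 autoencoder with linear layers: encoder W1 (4 -> 2), decoder W2 (2 -> 4).
W1 = rng.normal(scale=0.5, size=(4, 2))
W2 = rng.normal(scale=0.5, size=(2, 4))
lr = 0.05  # assumed learning rate

for epoch in range(200):
    # Forward pass: encode to the 2-d bottleneck, then decode back to 4-d.
    H = X @ W1          # hidden code, shape (6, 2)
    Y = H @ W2          # reconstruction, shape (6, 4)

    # L2 (mean squared error) reconstruction loss over the batch.
    E = Y - X
    loss = np.mean(E ** 2)

    # Backward pass: hand-derived gradients, applied right to left.
    dY = 2.0 * E / E.size   # dL/dY  (L2 loss gradient)
    dW2 = H.T @ dY          # dL/dW2
    dH = dY @ W2.T          # dL/dH  (backpropagate through the decoder)
    dW1 = X.T @ dH          # dL/dW1

    # Gradient descent update.
    W1 -= lr * dW1
    W2 -= lr * dW2

    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  L2 loss {loss:.4f}")
```

For this all-linear case, those four gradient lines are the entire backward pass; adding an activation function would simply multiply dH by its derivative before dW1 is formed.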
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
GPU by hand ✍️ I drew this to show how a GPU speeds up an array operation on 8 elements in parallel, using 4 threads over 2 clock cycles. Read more 👇
CPU
• It has one core.
• Its global memory has 120 locations (0-119).
• To use the GPU, it must copy data from global memory to the GPU.
• After the GPU is done, it copies the results back.
GPU
• It has four cores to run four threads (0-3).
• It has a register file of 28 locations (0-27).
• This register file has four banks (0-3).
• All threads share the same register file.
• But they must read/write through the four banks.
• Each bank allows 2 reads (Read 0, Read 1) and 1 write in a single clock cycle (see the sketch after this list).
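The drawing is about scheduling rather than code, but a tiny Python sketch can mimic the idea: 8 elements split across 4 threads means each thread handles 2 elements, so the whole array operation finishes in 2 "clock cycles". The element-wise add and the index scheme here are illustrative assumptions.

```python
# Minimal sketch: 4 "threads" perform an element-wise add over 8 elements
# in 2 passes, mimicking the 2 clock cycles in the drawing.
# The operation (c = a + b) and the indexing are illustrative assumptions.
NUM_THREADS = 4

a = [1, 2, 3, 4, 5, 6, 7, 8]          # first operand  (a bank's Read 0 port)
b = [10, 20, 30, 40, 50, 60, 70, 80]  # second operand (a bank's Read 1 port)
c = [0] * len(a)                      # result         (the bank's single write)

for cycle in range(len(a) // NUM_THREADS):   # 8 elements / 4 threads = 2 cycles
    for tid in range(NUM_THREADS):           # each thread handles one element per cycle
        i = cycle * NUM_THREADS + tid        # thread tid works on element i this cycle
        c[i] = a[i] + b[i]                   # 2 reads + 1 write, within one bank's limit
    print(f"cycle {cycle}: c = {c}")
```

A single core doing one element per cycle would need 8 cycles for the same work; with 4 threads it takes 2, which is roughly the speedup the drawing illustrates.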
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
What is torch.nn really?
When I started working with PyTorch, my biggest question was: "What is torch.nn?" This article explains it quite well.
📌 Read
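If you want a concrete reference point before reading, here is a minimal sketch of what torch.nn (plus torch.optim) handles for you: nn.Linear owns and registers its weights, nn.MSELoss computes the loss, autograd produces the gradients, and the optimizer applies the update. The layer sizes, data, and learning rate are placeholder assumptions.

```python
import torch
from torch import nn

# Minimal sketch of what torch.nn (plus torch.optim) handles for you.
# Layer sizes, data, and learning rate below are placeholder assumptions.
model = nn.Sequential(nn.Linear(4, 2), nn.ReLU(), nn.Linear(2, 4))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

x = torch.randn(6, 4)               # placeholder batch

for step in range(100):
    optimizer.zero_grad()           # clear gradients from the previous step
    loss = loss_fn(model(x), x)     # toy objective: reconstruct the input
    loss.backward()                 # autograd builds the backward pass for you
    optimizer.step()                # parameter update (gradient descent)
```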
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
#pytorch #AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
🤖🧠 NVIDIA, MIT, HKU and Tsinghua University Introduce QeRL: A Powerful Quantum Leap in Reinforcement Learning for LLMs
🗓️ 17 Oct 2025
📚 AI News & Trends
The rise of large language models (LLMs) has redefined artificial intelligence, powering everything from conversational AI to autonomous reasoning systems. However, training these models, especially through reinforcement learning (RL), is computationally expensive, requiring massive GPU resources and long training cycles. To address this, a team of researchers from NVIDIA, Massachusetts Institute of Technology (MIT), The ...
#QuantumLearning #ReinforcementLearning #LLMs #NVIDIA #MIT #TsinghuaUniversity
🤖🧠 Agentic Entropy-Balanced Policy Optimization (AEPO): Balancing Exploration and Stability in Reinforcement Learning for Web Agents
🗓️ 17 Oct 2025
📚 AI News & Trends
AEPO (Agentic Entropy-Balanced Policy Optimization) represents a major advancement in the evolution of Agentic Reinforcement Learning (RL). As large language models (LLMs) increasingly act as autonomous web agents that search, reason, and interact with tools, the need for balanced exploration and stability has become crucial. Traditional RL methods often rely heavily on entropy to ...
#AgenticRL #ReinforcementLearning #LLMs #WebAgents #EntropyBalanced #PolicyOptimization
🤖🧠 The Art of Scaling Reinforcement Learning Compute for LLMs: Top Insights from Meta, UT Austin and Harvard University
🗓️ 21 Oct 2025
📚 AI News & Trends
As Large Language Models (LLMs) continue to redefine artificial intelligence, a new research breakthrough has emerged from Meta, The University of Texas at Austin, University College London, UC Berkeley, Harvard University and Periodic Labs. Their paper, titled “The Art of Scaling Reinforcement Learning Compute for LLMs,” introduces a transformative framework for understanding how reinforcement learning ...
#ReinforcementLearning #LLMs #AIResearch #Meta #UTAustin #HarvardUniversity