Auto-Encoder & Backpropagation by hand ✍️ lecture video ~ 📺 https://byhand.ai/cv/10
It took me a few years to invent this method to show both the forward and backward passes for a non-trivial case: a multi-layer perceptron over a batch of inputs, plus gradient descent over multiple epochs, while being able to hand-calculate each step and code it in Excel at the same time. (A minimal code sketch of these steps follows the chapter list below.)
= Chapters =
• Encoder & Decoder (00:00)
• Equation (10:09)
• 4-2-4 AutoEncoder (16:38)
• 6-4-2-4-6 AutoEncoder (18:39)
• L2 Loss (20:49)
• L2 Loss Gradient (27:31)
• Backpropagation (30:12)
• Implement Backpropagation (39:00)
• Gradient Descent (44:30)
• Summary (51:39)
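As a rough companion to the chapters above, here is a minimal NumPy sketch of a 4-2-4 autoencoder trained with L2 loss, hand-rolled backpropagation, and plain gradient descent over a small batch. It is not the lecture's Excel walkthrough: the random data, the purely linear layers (no activation), the learning rate, and the epoch count are all assumptions for illustration.
```python
# Minimal 4-2-4 linear autoencoder: forward pass, L2 loss, manual backprop,
# gradient descent. All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(3, 4))                 # batch of 3 inputs, 4 features each

W_enc = rng.normal(scale=0.5, size=(4, 2))  # encoder weights (4 -> 2)
W_dec = rng.normal(scale=0.5, size=(2, 4))  # decoder weights (2 -> 4)

lr = 0.1
for epoch in range(100):
    # Forward pass: encode, then decode back to the input space.
    H = X @ W_enc                  # hidden code, shape (3, 2)
    X_hat = H @ W_dec              # reconstruction, shape (3, 4)

    # L2 (mean squared) reconstruction loss over the batch.
    diff = X_hat - X
    loss = (diff ** 2).mean()

    # Backward pass: chain rule, one step per line.
    dX_hat = 2 * diff / diff.size  # dL/dX_hat
    dW_dec = H.T @ dX_hat          # dL/dW_dec
    dH = dX_hat @ W_dec.T          # dL/dH
    dW_enc = X.T @ dH              # dL/dW_enc

    # Gradient descent update.
    W_enc -= lr * dW_enc
    W_dec -= lr * dW_dec

print(f"final L2 loss: {loss:.4f}")
```
Each line of the backward pass is one application of the chain rule, which is the same bookkeeping the lecture carries out by hand, step by step.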
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
GPU by hand ✍️ I drew this to show how a GPU speeds up an operation over an array of 8 elements in parallel, using 4 threads across 2 clock cycles. Read more 👇 (a tiny Python sketch of the idea follows the notes below)
CPU
• It has one core.
• Its global memory has 120 locations (0-119).
• To use the GPU, it needs to copy data from global memory over to the GPU.
• After the GPU is done, it copies the results back.
GPU
• It has four cores to run four threads (0-3).
• It has a register file of 28 locations (0-27).
• This register file has four banks (0-3).
• All threads share the same register file.
• But they must read/write using the four banks.
• Each bank allows 2 reads (Read 0, Read 1) and 1 write in a single clock cycle.
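A tiny Python sketch of the parallelism idea above: 8 elements, 4 threads, 2 clock cycles, so each thread handles one element per cycle. This is not a real GPU model; the doubling operation is a made-up stand-in, the inner loop only simulates threads that would run in parallel on hardware, and the register-bank read/write limits are not modeled.
```python
# 8 array elements, 4 "threads", 2 "clock cycles": each thread processes one
# element per cycle, so the whole array is covered in 2 cycles instead of 8.
a = list(range(8))          # input array in "global memory"
out = [0] * 8               # results to copy back to the CPU side

NUM_THREADS = 4
for cycle in range(2):                      # 2 clock cycles
    for tid in range(NUM_THREADS):          # these 4 iterations run in parallel on a GPU;
        i = cycle * NUM_THREADS + tid       # here we loop just to show the indexing
        out[i] = a[i] * 2                   # the per-element operation

print(out)   # [0, 2, 4, 6, 8, 10, 12, 14]
```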
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
#AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
What is torch.nn really?
When I started working with PyTorch, my biggest question was: "What is torch.nn?" This article explains it quite well.
📌 Read
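For context, here is a minimal sketch of what torch.nn abstracts away: the same linear layer written once with raw tensors and once with nn.Linear. It assumes PyTorch is installed, and the shapes are arbitrary.
```python
# The same linear transformation, with and without torch.nn.
import torch
from torch import nn

x = torch.randn(3, 4)                      # a batch of 3 inputs, 4 features each

# Without torch.nn: parameters and the forward computation are managed by hand.
w = torch.randn(4, 2, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
y_manual = x @ w + b

# With torch.nn: nn.Linear owns the parameters, their initialization,
# and the forward pass.
layer = nn.Linear(in_features=4, out_features=2)
y_module = layer(x)

print(y_manual.shape, y_module.shape)      # torch.Size([3, 2]) torch.Size([3, 2])
```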
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
#pytorch #AIEngineering #MachineLearning #DeepLearning #LLMs #RAG #MLOps #Python #GitHubProjects #AIForBeginners #ArtificialIntelligence #NeuralNetworks #OpenSourceAI #DataScienceCareers
This repo is awesome. It features RAG, AI Agents, Multi-agent Teams, MCP, Voice Agents, and more.
✅ link: https://github.com/Shubhamsaboo/awesome-llm-apps
#RAG #AIAgents #MultiAgentSystems #VoiceAI #LLMApps
✉️ Our Telegram channels: https://t.iss.one/addlist/0f6vfFbEMdAwODBk
📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
🎓 Stanford has released a new course: “Transformers & Large Language Models”
The authors are the Amidi brothers, and three free lectures are already available on YouTube. This is probably one of the most systematic introductory courses on modern LLMs.
Course content:
• Transformers: tokenization, embeddings, attention, architecture
• #LLM basics: Mixture of Experts, decoding types
• Training and fine-tuning: SFT, RL, LoRA
• Model evaluation: LLM/VLM-as-a-judge, best practices
• Tricks: RoPE, attention approximations, quantization
• Reasoning: scaling during training and inference
• Agentic approaches: #RAG, tool calling
If you are already familiar with this topic, it's a great opportunity to refresh your knowledge and try implementing some of the techniques from scratch (a small example follows the syllabus link below).
https://cme295.stanford.edu/syllabus/
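As a taste of implementing one of these pieces from scratch, here is a NumPy sketch of scaled dot-product attention, one of the Transformer building blocks listed above. It is a single head with no masking and random toy inputs, not code from the course.
```python
# Scaled dot-product attention: softmax(QK^T / sqrt(d)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays; returns the attended values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 8)
```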
https://t.iss.one/CodeProgrammer
The course gathers up-to-date material on #Python programming and on building advanced AI assistants with it.
• Content: The course includes 9 lectures, supplemented with video materials, detailed presentations, and code examples, making AI-agent development accessible even for coding beginners.
• Topics: The lectures cover topics such as #RAG (Retrieval-Augmented Generation), embeddings, #agents, and the #MCP protocol.
The perfect weekend plan is to dive deep into #AI! (A tiny retrieval sketch follows the links below.)
https://github.com/orgs/azure-ai-foundry/discussions/166
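As a minimal illustration of the retrieval step behind #RAG, here is a Python sketch that embeds a few documents, embeds a query, and picks the closest document by cosine similarity. The hashing bag-of-words "embedding" is a toy placeholder for a real embedding model, and the documents and query are made up; this is not code from the course.
```python
# Toy RAG retrieval: embed documents, embed the query, rank by similarity.
import numpy as np

DOCS = [
    "RAG retrieves relevant documents before generation",
    "MCP is a protocol for connecting tools to AI agents",
    "Embeddings map text to vectors for similarity search",
]

def toy_embed(text, dim=64):
    """Hashing bag-of-words embedding -- a placeholder, not a real model."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)        # unit-normalize

doc_vecs = np.stack([toy_embed(d) for d in DOCS])
query_vec = toy_embed("how do agents call tools over a protocol")
scores = doc_vecs @ query_vec                        # cosine similarity
print(DOCS[int(np.argmax(scores))])                  # best-matching document
```
In a real pipeline the retrieved text would then be inserted into the prompt of a language model, which is the "generation" half of RAG.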
https://t.iss.one/CodeProgrammer