10 GitHub Repositories to Master LLMs
• brexhq/prompt-engineering
Tips and examples to improve your prompt engineering skills.
🔗 GitHub
• mlabonne/llm-course
A full course with tutorials and hands-on LLM projects.
🔗 GitHub
• Hannibal046/Awesome-LLM
Curated list of LLM papers, tools, and tutorials.
🔗 GitHub
• WooooDyy/LLM-Agent-Paper-List
Research papers focused on LLM-based agents.
🔗 GitHub
• avvorstenbosch/Masterclass-LLMs-for-Data-Science
Guide to using LLMs in data workflows, with exercises.
🔗 GitHub
• Shubhamsaboo/awesome-llm-apps
Real-world LLM apps using OpenAI, Gemini, and more.
🔗 GitHub
• BradyFU/Awesome-Multimodal-LLM
Resources on LLMs that handle text, images, and audio.
🔗 GitHub
• HandsOnLLM/Hands-On-LLM
Code examples from the O'Reilly hands-on LLM book.
🔗 GitHub
• SylphAI-Inc/LLM-engineer-handbook
Handbook for building and deploying LLMs.
🔗 GitHub
• rasbt/LLMs-from-scratch
Build a GPT-style model in PyTorch from scratch.
🔗 GitHub
COMMON TERMINOLOGIES IN PYTHON - PART 1
Have you ever gotten into a discussion with a programmer and found some of the terminology strange, or not fully understood it?
In this series, we will look at common terminology in Python.
Knowing these terms helps you explain your code properly and understand others instantly when the terms come up. Below are a few:
IDLE (Integrated Development and Learning Environment) - an environment that makes it easy to write Python code. IDLE can execute single statements and create, modify, and execute Python scripts.
Python Shell - the interactive environment where you type in Python code and execute it immediately.
System Python - the version of Python that comes preinstalled with your operating system.
Prompt - usually represented by the symbol ">>>"; it simply means Python is waiting for you to give it an instruction.
REPL (Read-Evaluate-Print Loop) - the cycle of events in the interactive window: Python reads the code you enter, evaluates it, prints the output, then loops back to read again.
Argument - a value passed to a function when it is called, e.g. in print("Hello World"), "Hello World" is the argument.
Function - code that takes some input (arguments), processes it, and produces an output called a return value. E.g. in print("Hello World"), print is the function.
Return Value - the value a function hands back to the calling script or function when it completes its task (in other words, its output). E.g.
>>> len("Hello World")
11
Here 11 is the return value. (Note that print() itself returns None; the "Hello World" it shows on screen is printed output, not a return value.)
Note: A return value can be any Python object: an integer, a string, a list, even another function.
Script - a text file where you store Python code so you can execute all of it with a single command.
Script file - the file that contains such a script, usually saved with a .py extension.
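To tie these terms together, here is a minimal sketch (the greet function is a made-up example, not a built-in name):

```python
# A function takes arguments, does some work, and hands back a return value.
def greet(name):                # 'name' receives the argument
    return "Hello, " + name    # this string is the return value

message = greet("World")       # "World" is the argument passed in
print(message)                 # print() displays the return value on screen
```

Here greet("World") evaluates to the string "Hello, World", which is then passed as an argument to print.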
OpenAI has dropped a helpful AI for coders: the new Codex-1 model, which writes code like a top senior engineer with 15 years of experience.
Codex-1 works within the Codex AI agent. It's like having a whole development team in your browser, writing code and fixing it SIMULTANEOUSLY. Plus, the agent can work on multiple tasks in parallel.
They're starting the rollout today; check it out in your sidebar.
Roadmap to Building AI Agents
1. Master Python Programming - Build a solid foundation in Python, the primary language for AI development.
2. Understand RESTful APIs - Learn how to send and receive data via APIs, a crucial part of building interactive agents.
3. Dive into Large Language Models (LLMs) - Get a grip on how LLMs work and how they power intelligent behavior.
4. Get Hands-On with the OpenAI API - Familiarize yourself with GPT models and tools like function calling and assistants.
5. Explore Vector Databases - Understand how to store and search high-dimensional data efficiently.
6. Work with Embeddings - Learn how to generate and query embeddings for context-aware responses.
7. Implement Caching and Persistent Memory - Use databases to maintain memory across interactions.
8. Build APIs with Flask or FastAPI - Serve your agents as web services using these Python frameworks.
9. Learn Prompt Engineering - Master techniques to guide and control LLM responses.
10. Study Retrieval-Augmented Generation (RAG) - Learn how to combine external knowledge with LLMs.
11. Explore Agentic Frameworks - Use tools like LangChain and LangGraph to structure your agents.
12. Integrate External Tools - Learn to connect agents to real-world tools and APIs (like using MCP).
13. Deploy with Docker - Containerize your agents for consistent and scalable deployment.
14. Control Agent Behavior - Learn how to set limits and boundaries to ensure reliable outputs.
15. Implement Safety and Guardrails - Build in mechanisms to ensure ethical and safe agent behavior.
React ❤️ for more
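Steps 5 and 6 can be sketched in plain Python. This is a toy illustration only: the hand-made 3-dimensional vectors stand in for real embeddings from a model, and the dictionary stands in for a real vector database:

```python
import math

# Toy in-memory "vector store". Real systems would use an embedding
# model and a vector database; these short vectors are invented.
store = {
    "cats are pets":  [0.9, 0.1, 0.0],
    "dogs are pets":  [0.8, 0.2, 0.0],
    "stocks went up": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query_vec):
    # Return the stored text whose embedding is most similar to the query.
    return max(store, key=lambda text: cosine(store[text], query_vec))

print(nearest([0.0, 0.1, 0.95]))  # -> "stocks went up"
```

The same similarity search is what vector databases like those in step 5 do at scale, with approximate-nearest-neighbor indexes instead of a brute-force max.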
LLM Cheatsheet
Introduction to LLMs
- LLMs (Large Language Models) are AI systems that generate text by predicting the next word.
- Prompts are the instructions or text you give to an LLM.
- Personas allow LLMs to take on specific roles or tones.
- Learning types:
- Zero-shot (no examples given)
- One-shot (one example)
- Few-shot (a few examples)
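A quick sketch of what a few-shot prompt looks like in practice (the reviews here are invented examples):

```python
# Few-shot prompting: a couple of worked examples before the real query
# teach the model the task and the output format.
prompt = """Classify the sentiment as Positive or Negative.

Review: The food was amazing!
Sentiment: Positive

Review: Terrible service, never again.
Sentiment: Negative

Review: I loved the atmosphere.
Sentiment:"""

print(prompt.count("Review:"))  # 3 reviews: two examples plus the query
```

Zero-shot would keep only the instruction and the final review; one-shot would keep a single worked example.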
Transformers
- The core architecture behind LLMs, using self-attention to process input sequences.
- Encoder: Understands input.
- Decoder: Generates output.
- Embeddings: Converts words into vectors.
Types of LLMs
- Encoder-only: Great for understanding (like BERT).
- Decoder-only: Best for generating text (like GPT).
- Encoder-decoder: Useful for tasks like translation and summarization (like T5).
Configuration Settings
- Decoding strategies:
- Greedy: Always picks the most likely next word.
- Beam search: Considers multiple possible sequences.
- Random sampling: Adds creativity by picking among top choices.
- Temperature: Controls randomness (higher value = more creative output).
- Top-k and Top-p: Restrict choices to the most likely words.
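These settings can be sketched with a toy sampler (the vocabulary and logit values are made up for illustration):

```python
import math
import random

def sample_next(logits, temperature=1.0, top_k=None):
    # Keep only the top-k highest-scoring candidates (if requested),
    # scale scores by temperature, softmax into probabilities, then sample.
    # Higher temperature flattens the distribution -> more varied output.
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]
    scaled = [v / temperature for _, v in items]
    m = max(scaled)                                  # for numeric stability
    exps = [math.exp(v - m) for v in scaled]
    probs = [e / sum(exps) for e in exps]
    words = [w for w, _ in items]
    return random.choices(words, weights=probs)[0]

logits = {"cat": 2.5, "dog": 2.0, "banana": -1.0}
print(sample_next(logits, top_k=1))  # -> "cat" (top-1 is greedy decoding)
```

With top_k=1 this reduces to greedy decoding; very low temperatures approximate it, while high temperatures make "dog" or even "banana" more likely.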
LLM Instruction Fine-Tuning & Evaluation
- Instruction fine-tuning: Trains LLMs to follow specific instructions.
- Task-specific fine-tuning: Focuses on a single task.
- Multi-task fine-tuning: Trains on multiple tasks for broader skills.
Model Evaluation
- Evaluating LLMs is hard: metrics like BLEU and ROUGE are common, but human judgment is often needed.
Join our WhatsApp Channel: https://whatsapp.com/channel/0029VazaRBY2UPBNj1aCrN0U