llm-council
LLM Council works together to answer your hardest questions.
https://github.com/karpathy/llm-council
Building Data Visualisations in Python in Minutes
The video demonstrates how to use Streamlit, a Python framework, to quickly build professional and interactive data visualizations with minimal code, showing live examples involving Pandas for data manipulation and visualization. It highlights Streamlit's simplicity, live reloading, interactivity, and caching features, making it ideal for rapid data exploration and sharing within small u...
https://www.youtube.com/watch?v=lQRq4-MiAGA
YouTube
Building Data Visualisations in Python in Minutes • Kris Jenkins • GOTO 2025
This presentation was recorded at GOTO Copenhagen 2025. #GOTOcon #GOTOcph
https://gotocph.com
Kris Jenkins - Lifelong Computer Geek and Podcast Host @krisajenkins
RESOURCES
https://bsky.app/profile/krisajenkins.bsky.social
https://twitter.com/krisajenkins…
Nano-PDF
A CLI tool to edit PDF slides using natural language prompts, powered by Google's Gemini 3 Pro Image ("Nano Banana") model.
https://github.com/gavrielc/Nano-PDF
Gunicorn Internals
This blog is a technical case study of the Gunicorn source code.
https://humbulani1234.github.io/blog/
Setting up a Django project with Vite, React, and Tailwind CSS
The video demonstrates setting up a modern Django project with Vite for frontend builds: create Django app with UV, configure Vite for JS/CSS bundling to Django's staticfiles, integrate django-vite for HMR dev server.
https://www.youtube.com/watch?v=GztJ1h6ZXA0
YouTube
This video shows the complete set up from scratch of a new Django project, adding Vite as a front end build tool, and showing how to add React and Tailwind CSS.
Links:
The Django Vite integration guide: https://www.saaspegasus.com/guides/modern-javascript…
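On the Django side, the wiring described in the video usually reduces to a few settings plus two template tags. The keys below follow django-vite's configuration style, but exact names vary across versions, so treat this as an assumed sketch rather than the video's exact code:

```python
# settings.py -- sketch of a django-vite setup (assumed; check your
# django-vite version's docs for the exact configuration keys)
INSTALLED_APPS = [
    # ...
    "django_vite",
]

DJANGO_VITE = {
    "default": {
        # HMR dev server when True, hashed production bundles when False
        "dev_mode": DEBUG,
    }
}

# Vite writes its built bundles where collectstatic can find them
STATICFILES_DIRS = [BASE_DIR / "assets" / "dist"]
```

In templates, `{% load django_vite %}` plus `{% vite_hmr_client %}` and `{% vite_asset 'src/main.jsx' %}` then pull in either the dev server or the built assets.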
trustedsec / social-engineer-toolkit
The Social-Engineer Toolkit (SET) repository from TrustedSec - All new versions of SET will be deployed here.
https://github.com/trustedsec/social-engineer-toolkit
AI infrastructure in the "Era of experience"
The article analyzes AI infrastructure needs in the "Era of Experience," where RL-trained models interact with proprietary environments, using GRPO for efficient policy optimization and LoRA adapters to enable low-cost training/inference via multi-tenancy and large-batch async RL. It predicts commoditized base models will spawn a reinforcement fine-tuning (RFT) industry for custom models...
https://www.tensoreconomics.com/p/ai-infrastructure-in-the-era-of-experience
Tensoreconomics
Intelligence involution, economies of scale in RL, everything async and multi-turn.
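The GRPO mechanism the article leans on can be sketched in a few lines: instead of a learned value baseline (a critic), each prompt's group of sampled completions is normalized against its own mean and standard deviation. A minimal dependency-free sketch, with invented reward values for illustration:

```python
from statistics import mean, stdev

def grpo_advantages(group_rewards, eps=1e-8):
    """Group-relative advantages: normalize each sampled completion's
    reward against the mean/std of its own group, so no separate value
    network is needed as a baseline."""
    mu = mean(group_rewards)
    sigma = stdev(group_rewards) if len(group_rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in group_rewards]

# Four completions sampled for the same prompt, scored by a reward model.
rewards = [1.0, 0.0, 0.5, 0.5]
advantages = grpo_advantages(rewards)
# Completions above the group mean get a positive advantage, those
# below get a negative one; the group's advantages sum to ~zero.
```

This per-group normalization is what makes large-batch async RL cheap: the baseline comes from the batch itself, not from training and serving a second model.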
Keras HyperParameters Tuning
This is an example I contributed to the Keras ecosystem.
https://keras.io/examples/structured_data/class_with_grn_and_vsn_with_hyperparameters_tuning/
keras.io
Keras documentation: Classification with Gated Residual and Variable Selection Networks with HyperParameters tuning
Become an AI Researcher Course – LLM, Math, PyTorch, Neural Networks, Transformers
This comprehensive course on becoming an AI Researcher starts with the foundational mathematics (vectors, derivatives, gradients, matrices) and PyTorch fundamentals necessary for understanding modern AI. It then progresses through the building blocks of neural networks and culminates with an in-depth module on Transformers, the critical technology behind Large Language Models and generat...
https://www.youtube.com/watch?v=wu8npoU37cI
Improve Query Performance Using Python Django QuerySets
The post shows how efficient Django QuerySet usage can significantly improve database performance, reduce latency, and create faster applications. It explains that writing better queries leads to more stable, scalable, and cost-effective Django systems because the ORM can easily generate unnecessary load when used carelessly.
https://blog.appsignal.com/2025/12/03/improve-query-performance-using-django-python-querysets.html
A first look at Django's new background tasks
Django 6.0 introduces django.tasks, a lightweight framework for defining and enqueuing background tasks via a standard API, but lacks built-in workers—requiring external infrastructure like custom database-backed backends. The article builds a demo notification app with a DB backend, worker, retries, and result polling, showing how to implement queuing while noting limitations like no co...
https://roam.be/notes/2025/a-first-look-at-djangos-new-background-tasks/
Roam
Django 6.0 introduces a built-in background tasks framework in `django.tasks`. But don't expect to phase out Celery, Huey or other preferred solutions just yet.
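The enqueue/worker/retry lifecycle the article implements can be sketched independently of Django. The names below are invented and this is not the `django.tasks` API, just the mechanics a database-backed backend has to provide:

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """Roughly what a database-backed task backend stores per task."""
    func: object
    args: tuple
    status: str = "NEW"          # NEW -> SUCCEEDED / FAILED
    attempts: int = 0
    result: object = None

class InMemoryBackend:
    def __init__(self, max_retries=3):
        self.rows, self.max_retries = [], max_retries

    def enqueue(self, func, *args):
        record = TaskRecord(func, args)
        self.rows.append(record)
        return record            # the caller polls record.status later

    def run_worker(self):
        """One worker pass: retry each pending task up to max_retries."""
        for record in self.rows:
            while record.status == "NEW":
                record.attempts += 1
                try:
                    record.result = record.func(*record.args)
                    record.status = "SUCCEEDED"
                except Exception:
                    if record.attempts >= self.max_retries:
                        record.status = "FAILED"

calls = []
def flaky_notify(user):
    calls.append(user)
    if len(calls) < 3:           # fail on the first two attempts
        raise RuntimeError("SMTP down")
    return f"notified {user}"

backend = InMemoryBackend()
ticket = backend.enqueue(flaky_notify, "alice")
backend.run_worker()
# ticket.status is "SUCCEEDED" after two retries; tasks that keep
# failing end up FAILED rather than retrying forever.
```

What Django 6.0 standardizes is the enqueue/poll half of this picture; the worker loop, retries, and storage are exactly the parts you still have to bring yourself.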
anthropics / claude-quickstarts
A collection of projects designed to help developers quickly get started with building deployable applications using the Claude API
https://github.com/anthropics/claude-quickstarts
Is the 79-character limit still relevant (with modern displays)?
https://www.reddit.com/r/Python/comments/1pejhny/is_the_79character_limit_still_in_actual_with/
How prompt caching works - Paged Attention and Automatic Prefix Caching plus practical tips
Prompt caching in large language models (LLMs) is an optimization technique that stores and reuses intermediate computational states (key-value caches) of repeated prompt prefixes, significantly reducing redundant processing and speeding up responses. By breaking prompts into fixed-size token blocks and utilizing a hash-based prefix matching system, prompt caching enables multiple reques...
https://sankalp.bearblog.dev/how-prompt-caching-works
sankalp's blog
A deep dive into prompt caching - practical tips to improve cache hits and how vLLM's paged attention enables KV-cache reuse across requests via automatic prefix-caching
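The block-hashing scheme described above can be illustrated with a toy in-memory cache. The block size, tokenization, and hash chaining below are simplified stand-ins for what vLLM actually does, not its implementation:

```python
import hashlib

BLOCK_SIZE = 4  # tokens per block; a real system uses larger blocks

def block_hashes(tokens, block_size=BLOCK_SIZE):
    """Hash each full block together with the hash of its prefix, so a
    block's key identifies the entire prompt up to and including it
    (the chaining idea behind automatic prefix caching)."""
    hashes, prev = [], ""
    full = len(tokens) - len(tokens) % block_size
    for i in range(0, full, block_size):
        block = tokens[i:i + block_size]
        prev = hashlib.sha256((prev + "|" + " ".join(block)).encode()).hexdigest()
        hashes.append(prev)
    return hashes

class PrefixCache:
    """Toy KV-cache registry keyed by chained block hashes."""
    def __init__(self):
        self.store = {}  # hash -> stand-in for that block's KV tensors

    def lookup(self, tokens):
        """Return how many leading blocks were already cached, then
        register the remaining blocks."""
        hashes = block_hashes(tokens)
        hit = 0
        for h in hashes:
            if h not in self.store:
                break            # prefixes must match from the start
            hit += 1
        for h in hashes[hit:]:
            self.store[h] = object()  # pretend we computed the KV blocks
        return hit

cache = PrefixCache()
sys_prompt = "you are a helpful assistant answer briefly".split()
first = cache.lookup(sys_prompt + "what is python".split())   # cold: 0 hits
second = cache.lookup(sys_prompt + "what is django".split())  # shared prefix hits
```

The practical tip falls out directly: keep the stable part of the prompt (system message, tool definitions) at the front and byte-identical across requests, because one changed token invalidates every block after it.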
Can LLMs give us AGI if they are bad at arithmetic?
Wes McKinney's post questions whether large language models (LLMs) can achieve artificial general intelligence (AGI) given their persistent struggles with basic arithmetic tasks like adding single-digit numbers, even in top models. Through experiments and analysis, he shows that while LLMs perform inconsistently on simple math (e.g., summing ~10 numbers), this reveals deeper limitations ...
https://wesmckinney.com/blog/llms-arithmetic/
Modernising Django Packages Without Breaking Everything
To successfully modernize a mature Django package without breaking user code, the maintainer should phase in new tools to consolidate configuration into a single pyproject.toml file. Key strategies involve streamlining the developer experience with fast tools like uv and Ruff, using a Justfile for memorable commands, and automating releases with Towncrier for clean changelog management.
https://lincolnloop.com/blog/modernising-django-packages-without-breaking-everything/
Lincoln Loop
A case study in upgrading django-countries to v8. I’m the solo maintainer for django-countries, which provides a country field for …
vllm-omni
A framework for efficient model inference with omni-modality models.
https://github.com/vllm-project/vllm-omni
Django 6.0 released
Django 6.0 introduces major new features: built-in support for template partials (for cleaner, reusable templates), a native background-task framework, a built-in Content Security Policy (CSP) system, and a more modern, Unicode-friendly email API. This release marks the end of mainstream support for Django 5.2; developers are encouraged to upgrade to 6.0 to benefit from the new features ...
https://www.djangoproject.com/weblog/2025/dec/03/django-60-released/
Django Project
Posted by Natalia Bidart on Dec. 3, 2025
Can Google's ADK Replace LangChain and MCP?
Christina Lin (Google) demos Agent Development Kit (ADK), open-source Python framework for agentic pipelines: assemble LLMs + tools (via MCP servers/function calling) + prompts for complex workflows like version control or Friday night bookings, with grounding for cited real-time data to cut hallucinations/token costs.
https://www.youtube.com/watch?v=nMnQ63YkftE
YouTube
How do you build systems with AI? Not code-generating assistants, but production systems that use LLMs as part of their processing pipeline. When should you chain multiple agent calls together versus just making one LLM request? And how do you debug, test…