Implementing Karpenter In EKS (From Start To Finish)
https://www.cloudnativedeepdive.com/implementing-karpenter-in-eks-from-start-to-finish
Auto-scaling GitHub Actions on Kubernetes with Actions Runner Controller (ARC) & Terraform
https://blog.devgenius.io/auto-scaling-github-actions-on-kubernetes-with-actions-runner-controller-arc-terraform-ca9d651c08d8
gonzo
https://github.com/control-theory/gonzo
A powerful, real-time log analysis terminal UI inspired by k9s. Analyze log streams with beautiful charts, AI-powered insights, and advanced filtering - all from your terminal.
sbnb
https://github.com/sbnb-io/sbnb
Sbnb Linux is a revolutionary minimalist Linux distribution designed to boot bare-metal servers and enable remote connections through fast tunnels. It is ideal for environments ranging from home labs to distributed data centers. Sbnb Linux is simplified, automated, and resilient to power outages, supporting confidential computing to ensure secure operations in untrusted locations.
Reverse Proxy Deep Dive
Part 1: The Complexity of Connection Handling: https://startwithawhy.com/reverseproxy/2024/01/15/ReverseProxy-Deep-Dive.html
Part 2: Why HTTP Parsing at the Edge Is Harder Than It Looks: https://startwithawhy.com/reverseproxy/2025/07/20/ReverseProxy-Deep-Dive-Part2.html
Part 3: The Hidden Complexity of Service Discovery: https://startwithawhy.com/reverseproxy/2025/07/26/Reverseproxy-Deep-Dive-Part3.html
Part 4: Why Load Balancing at Scale is Hard: https://startwithawhy.com/reverseproxy/2025/08/08/ReverseProxy-Deep-Dive-Part4.html
Why "What Happened First?" Is One of the Hardest Questions in Large-Scale Systems
https://newsletter.scalablethread.com/p/why-what-happened-first-is-one-of
Understanding Why Exact Ordering of Events is Hard in Distributed Systems
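The core of the problem is that wall-clock timestamps can't reliably order events across machines, so distributed systems fall back on logical clocks. As a hedged illustration (not taken from the article), here is a minimal Lamport-clock sketch in Python; the node and event names are made up.

```python
# Illustrative only: a minimal Lamport logical clock, sketching why
# "what happened first?" needs causal ordering rather than wall clocks.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    clock: int = 0  # Lamport counter; advances on every local or observed event

    def local_event(self, label: str) -> int:
        self.clock += 1
        print(f"{self.name}: {label} @ L={self.clock}")
        return self.clock

    def send(self, label: str) -> int:
        # A send is a local event; its timestamp travels with the message.
        return self.local_event(f"send {label}")

    def receive(self, label: str, msg_ts: int) -> int:
        # Merge rule: take the max of local and message clocks, then tick.
        self.clock = max(self.clock, msg_ts) + 1
        print(f"{self.name}: recv {label} @ L={self.clock}")
        return self.clock


if __name__ == "__main__":
    a, b = Node("A"), Node("B")
    t1 = a.send("write x=1")      # A's event
    b.local_event("write x=2")    # concurrent with A's write
    b.receive("write x=1", t1)    # now causally after A's send
```

Lamport timestamps give a consistent order for causally related events, but concurrent events (like the two writes above) still can't be ordered without extra machinery such as vector clocks.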
Advanced Terraform: Patterns for Teams at Scale
https://morethanmonkeys.medium.com/advanced-terraform-patterns-for-teams-at-scale-f3728d6efc5a
A deep dive into Cloudflare’s September 12, 2025 dashboard and API outage
https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-12-dashboard-and-api-outage
pg_duckdb
https://github.com/duckdb/pg_duckdb
pg_duckdb integrates DuckDB's columnar-vectorized analytics engine into PostgreSQL, enabling high-performance analytics and data-intensive applications.
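As a rough sketch of what this looks like from a client (assumptions, not taken from the repo's docs: the extension is available on the server, and the `orders` table and `duckdb.force_execution` setting are illustrative), an analytical query can be pushed through the DuckDB engine from plain SQL:

```python
# A minimal sketch of querying Postgres with the pg_duckdb extension loaded.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=postgres")
conn.autocommit = True
with conn.cursor() as cur:
    # Install the extension once per database (it must be built and available).
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_duckdb;")

    # Assumed setting: ask pg_duckdb to route this session's queries
    # through DuckDB's vectorized engine where it can.
    cur.execute("SET duckdb.force_execution = true;")

    # A scan-heavy aggregation over a regular Postgres table -- the kind of
    # query a columnar, vectorized engine is built for.
    cur.execute("""
        SELECT date_trunc('day', created_at) AS day, count(*), sum(amount)
        FROM orders
        GROUP BY 1
        ORDER BY 1;
    """)
    for row in cur.fetchall():
        print(row)
conn.close()
```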
Postgres Internals Deep Dive: Process Architecture
https://www.enterprisedb.com/blog/postgres-internals-deep-dive-process-architecture
The Data Engineer’s guide to optimizing Kubernetes
https://medium.com/datamindedbe/the-data-engineers-guide-to-optimizing-kubernetes-effede0fcfd6
The dissection of pushing an OCI image to AWS ECR
https://medium.com/@cjohannsen1981/the-dissection-of-pushing-an-oci-image-to-aws-ecr-88c742ba0eff
Stop Building Platforms Nobody Uses: Pick the Right Kubernetes Abstraction with GitOps
https://itnext.io/stop-building-platforms-nobody-uses-pick-the-right-kubernetes-abstraction-with-gitops-64681357690f
Understand Developer Pain First — Then Pick the Right Abstraction Layer for Kubernetes Platform with GitOps
Scaling Faire’s CI horizontally with Buildkite, Kubernetes, and multiple pipelines
https://craft.faire.com/scaling-faires-ci-horizontally-with-buildkite-kubernetes-and-multiple-pipelines-b9266ba06e7e
Breaking apart monolithic continuous integration in our Kotlin monorepo
Autoscaling Kubernetes Pods Based on HTTP Traffic
https://autoscalingkubernetes.hashnode.dev/autoscaling-kubernetes-pods-based-on-http-traffic
CK-X
https://github.com/sailor-sh/CK-X
A powerful Kubernetes certification practice environment that provides a realistic, exam-like experience for Kubernetes exam preparation.
descheduler
https://github.com/kubernetes-sigs/descheduler
Scheduling in Kubernetes is the process of binding pending pods to nodes, and is performed by a component called kube-scheduler. The scheduler's decisions about whether and where a pod can be scheduled are guided by its configurable policy, which comprises a set of rules called predicates and priorities. Those decisions are based on the scheduler's view of the cluster at the point in time when a new pod appears for scheduling. Because Kubernetes clusters are highly dynamic and their state changes over time, there may be good reasons to move already-running pods to other nodes:
- Some nodes are under- or over-utilized.
- The original scheduling decision no longer holds true: taints or labels have been added to or removed from nodes, so pod/node affinity requirements are no longer satisfied.
- Some nodes failed and their pods moved to other nodes.
- New nodes are added to clusters.
Consequently, several pods may end up scheduled on less-desirable nodes in a cluster. Based on its policy, the descheduler finds pods that can be moved and evicts them. Note that in the current implementation the descheduler does not schedule replacements for evicted pods; it relies on the default scheduler for that.
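The mechanism it relies on is the standard Eviction API plus the default scheduler. As a toy sketch of that idea (not the descheduler itself, which is a Go controller driven by a DeschedulerPolicy), the snippet below evicts pods matching a hypothetical label and lets kube-scheduler place the replacements; the eviction body class name may differ across kubernetes client versions.

```python
# Toy illustration: evict pods via the Kubernetes Eviction API so the
# default scheduler places replacements based on the cluster's current state.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Find pods we (hypothetically) consider misplaced.
pods = core.list_namespaced_pod("default", label_selector="rebalance=true")

for pod in pods.items:
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace
        )
    )
    # Eviction (unlike a plain delete) respects PodDisruptionBudgets; the
    # owning ReplicaSet/Deployment then recreates the pod and kube-scheduler
    # picks a node.
    core.create_namespaced_pod_eviction(
        name=pod.metadata.name, namespace=pod.metadata.namespace, body=eviction
    )
    print(f"evicted {pod.metadata.namespace}/{pod.metadata.name}")
```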