Reddit DevOps
266 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
AWS vs Azure for DevOps transition (6 yrs IT experience) – which is better to start with?

I’m planning to transition into a DevOps / Cloud Engineer role and would like some guidance.

My background:
6 years total experience
4 yrs IT Helpdesk
2 yrs Windows Server & VMware administration (L2, not advanced tasks)

My plan was to first gain Cloud Engineer experience and then move into DevOps.
Initially I thought Amazon Web Services (AWS) would be the best option since it has a large market share. But it seems entry-level roles are very competitive and expectations are quite high.

Because of that, I’m also considering Microsoft Azure, especially since many companies use Microsoft environments.

For people already working in cloud or DevOps:

1. Which platform is easier to break into for a first cloud role?
2. How do job demand and competition compare between AWS and Azure?
3. What tools and responsibilities are common in Azure DevOps roles vs AWS-based DevOps roles?

From a career growth perspective, which would you recommend starting with?
Any insights from real-world experience would be really helpful.

https://redd.it/1rr64lo
@r_devops
Roles for those who might be "not good enough" to be DevOps?

2-page resume (not a full CV, as that's 11 pages):

https://imgur.com/a/0yPYHOM

1-page resume (what I usually use to apply for jobs):

https://imgur.com/YnxLDy1


I'm finding myself in a bit of a weird spot, having been laid off in January. My company listed me, even on my offer of employment letter, as a "DevOps Engineer", but I suspect they (an MSP) paid people in job-title inflation rather than real salary. Our "SREs" would do things like build a site-to-site VPN entirely with ClickOps across two cloud-platform web consoles, rather than follow my natural inclination, which is to do it all in Terraform. So in spite of the job title, I never had software engineers/developers to support, and didn't really touch containers or CI/CD until 1-2 years into the job.

My role was more Ansible-monkey + Packer-monkey than anything else (Cloud Engineer? Infrastructure Engineer?). At best, I can write the Terraform + Ansible code and tie it all together with a GitLab CI pipeline so that a junior engineer can adjust some variables, run the pipeline, and about 2 hours later be looking at a 10-node Splunk cluster deployed (EC2, ALB, Kinesis Firehose, S3, SQS), with all required Splunk TA apps installed and required logs ingesting from AWS (CloudWatch => Kinesis, S3 => SQS, etc.). Doing that manually used to need about 150+ allocated hours.
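That kind of hand-off can be sketched as a two-stage GitLab CI pipeline; all job, stage, and variable names below are made up for illustration, not the author's actual config:

```yaml
# .gitlab-ci.yml sketch: a junior engineer edits variables and runs the pipeline
variables:
  SPLUNK_NODE_COUNT: "10"
  ENVIRONMENT: "staging"

stages: [provision, configure]

terraform_apply:
  stage: provision
  script:
    - terraform init
    - terraform apply -auto-approve -var "node_count=${SPLUNK_NODE_COUNT}"

ansible_configure:
  stage: configure
  needs: [terraform_apply]   # run only after infrastructure exists
  script:
    - ansible-playbook -i inventories/${ENVIRONMENT} splunk_cluster.yml
```

The point of the pattern is that the variables block is the only surface a junior engineer needs to touch.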

But I don't have formal work experience with k8s. And ironically, I'm not well-practiced at writing Bash/Python/PowerShell, because most of my time was spent doing the exact opposite: converting cartoonishly long user-data scripts into Ansible plays (I swear someone tried to install Splunk using 13 Python scripts).

I also trip over basic Linux CLI questions (I can STIG various Linux distros without bricking them, but I can't tell you off the top of my head which CLI tools to reach for when "Linux is slow").


So yeah, I'm feeling a bit of imposter syndrome here, and wanted to see what roles might suit someone like me (more Ops than Dev) who might not be qualified to be a mid-level DevOps Engineer expected to hit the ground running on day 1, without a full slide backwards into, say, systems administration.

From what I can tell, Platform Engineer and SRE roles tend to have harsher programming requirements.

Cloud Engineer, Infrastructure Engineer, and Linux Administrator postings tend to be extremely low volume.

"Automation Engineer" searches tend to be polluted with results from the wrong industries (automotive or manufacturing). "Release Engineer" doesn't seem to return any results (it may be senior-only).



https://redd.it/1rrxfri
@r_devops
Ingress NGINX EOL this month — what runway are teams giving themselves to migrate?

Ingress NGINX reaches end of support this month, and I'm guessing there are still thousands of clusters running it in production.

Curious what runway teams are giving themselves to migrate off of it?

For lots of orgs I've worked with, Ingress NGINX has been the default for years. With upstream maintenance coming to a halt, many teams are evaluating alternatives.

Traefik
HAProxy Ingress
AWS ALB Controller (for EKS)
Gateway API

What's the sentiment around these right now? Are any of them reasonably close to drop-in replacements for existing clusters?
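On the Gateway API option specifically, it isn't a literal drop-in: each nginx `Ingress` object has to be rewritten as an `HTTPRoute` attached to a `Gateway`, and you still need a controller that implements the API. A minimal sketch, with placeholder names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gateway      # a Gateway typically owned by the platform team
  hostnames:
    - app.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: app-svc         # the same Service an Ingress would point at
          port: 80
```

So the migration cost is mostly manifest rewrites plus picking a Gateway controller, not application changes.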

Also wondering if some orgs will end up doing what we see with other projects that go EOL and basically run a supported fork or extended maintenance version while planning a slower migration.

https://redd.it/1rr49pn
@r_devops
Sonatype Nexus Repository CE

Hey folks, I'm trying to evaluate the "new" Sonatype Nexus Community Edition.
However, the download page at https://www.sonatype.com/products/nexus-community-edition-download requires me to enter all sorts of personal details (including a company name; what if I don't have one, lol).

Granted, I could enter random data, but I'm not sure whether the download link is then sent to the email address.

As far as you know, is there a known direct download link? Sonatype's website must be deliberately indexed like crap, because I can't find anything useful there.

https://redd.it/1ryalu1
@r_devops
How do you keep track of which repos depend on which in a large org?

I work in an infrastructure automation team at a large org (~hundreds of repos across GitLab). We build shared Docker images, reusable CI templates, Terraform modules, the usual stuff.

A challenge I've seen is: someone pushes a breaking change to a shared Docker image or a Terraform module, and then pipelines in other repos start failing. We don't have a clear picture of "if I change X, what else is affected." It's mostly "tribal knowledge". A few senior engineers know which repos depend on what, but that's it. New people are completely lost.

We've looked at GitLab's dependency scanning but that's focused on CVEs in external packages, not internal cross-repo stuff. We've also looked at Backstage but the idea of manually writing YAML for every dependency relationship across hundreds of repos feels like it defeats the purpose.
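One low-effort middle ground is to derive the graph from the repos themselves rather than maintain it by hand. As a sketch (the directory layout, registry hostname, and regexes are assumptions; a real version would use the GitLab API instead of local clones), scan each repo's `.gitlab-ci.yml` includes and Dockerfile `FROM` lines into an edge list, then invert it to answer "if I change X, what breaks?":

```python
import re
from collections import defaultdict
from pathlib import Path

# Hypothetical layout: every repo is cloned under a common root directory.
# Two kinds of internal edges are extracted:
#   - `project:` references in .gitlab-ci.yml includes (shared CI templates)
#   - `FROM registry.example.com/...` lines in Dockerfiles (shared base images)
INCLUDE_RE = re.compile(r"project:\s*['\"]?([\w./-]+)")
FROM_RE = re.compile(r"^FROM\s+(registry\.example\.com/[\w./:-]+)", re.M)

def scan_repo(repo_dir: Path) -> set[str]:
    deps = set()
    ci = repo_dir / ".gitlab-ci.yml"
    if ci.exists():
        deps.update(INCLUDE_RE.findall(ci.read_text()))
    for dockerfile in repo_dir.rglob("Dockerfile*"):
        # strip the image tag so edges are per-image, not per-version
        deps.update(img.rsplit(":", 1)[0] for img in FROM_RE.findall(dockerfile.read_text()))
    return deps

def build_graph(root: Path) -> dict[str, set[str]]:
    # repo name -> internal things it depends on
    return {repo.name: scan_repo(repo) for repo in root.iterdir() if repo.is_dir()}

def reverse_index(graph: dict[str, set[str]]) -> dict[str, set[str]]:
    # dependency -> repos that consume it ("blast radius" lookup)
    rev = defaultdict(set)
    for repo, deps in graph.items():
        for dep in deps:
            rev[dep].add(repo)
    return rev
```

Running something like this nightly and publishing the reverse index gets you most of the Backstage value without hand-writing YAML for every relationship.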

How do you handle this? Do you have some internal tooling, a spreadsheet, or do you just accept that stuff breaks and fix it after the fact?

Curious how other orgs deal with this at scale.

https://redd.it/1ry0edd
@r_devops
Added a lightweight AWS/Azure hygiene scan to our CI - sharing the 20 rules we check

We’ve been trying to keep our AWS and Azure environments a bit cleaner without adding heavy tooling, so we built a small read‑only scanner that runs in CI and evaluates a conservative set of hygiene rules. The focus is on high‑signal checks that don’t generate noise in IaC‑driven environments.

It’s packaged as a Docker image and a GitHub Action so it’s easy to drop into pipelines. It assumes a read‑only role and just reports findings - no write permissions.


https://github.com/cleancloud-io/cleancloud

Docker Hub: https://hub.docker.com/r/getcleancloud/cleancloud

docker run getcleancloud/cleancloud:latest scan

GitHub Marketplace: https://github.com/marketplace/actions/cleancloud-scan

- uses: cleancloud-io/scan-action@v1
  with:
    provider: aws
    all-regions: 'true'
    fail-on-confidence: HIGH
    fail-on-cost: '100'
    output: json
    output-file: scan-results.json

# 20 rules across AWS and Azure

Conservative, high‑signal, designed to avoid false positives in IaC environments.

# AWS (10 rules)

Unattached EBS volumes (HIGH)
Old EBS snapshots
CloudWatch log groups with infinite retention
Unattached Elastic IPs (HIGH)
Detached ENIs
Untagged resources
Old AMIs
Idle NAT Gateways
Idle RDS instances (HIGH)
Idle load balancers (HIGH)

# Azure (10 rules)

Unattached managed disks
Old snapshots
Unused public IPs (HIGH)
Empty load balancers (HIGH)
Empty App Gateways (HIGH)
Empty App Service Plans (HIGH)
Idle VNet Gateways
Stopped (not deallocated) VMs (HIGH)
Idle SQL databases (HIGH)
Untagged resources

Rules without a confidence marker are MEDIUM - they use time-based heuristics or multiple signals. We started by failing CI only on HIGH confidence, then tightened things as teams validated.

We're also adding multi‑account scanning (AWS Organizations + Azure Management Groups) in the next few days, since that’s where most of the real‑world waste tends to hide.

Curious how others are handling lightweight hygiene checks in CI and what rules you consider “must‑have” in your setups.

https://redd.it/1rxuyet
@r_devops
Looking for a rolling storage solution

Where I work we have a lot of data that's stored in some file shares in an on-prem set of devices. We are unfortunately repeatedly running into storage limits and because of the current price of everything, expansion might not be possible.

What I'm looking for is something that can look at all of these SAN devices, find files that have not been read or modified in X days, and archive that data to the cloud, similar to how S3 lifecycles can progressively move cold data to colder storage. I want our on-prem SANs to be hot and cloud storage to get progressively colder. And just as S3 does it, I want reads and writes to be transparent.
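For reference, the S3-side behaviour being used as the model looks like this in Terraform (bucket name and day thresholds are placeholders; this only covers the cloud tier, not the on-prem-to-cloud hop):

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "archive" {
  bucket = "example-archive-bucket"

  rule {
    id     = "tier-cold-data"
    status = "Enabled"

    # Objects untouched for 30 days move to Infrequent Access,
    # then to Glacier after 90 days.
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
    transition {
      days          = 90
      storage_class = "GLACIER"
    }
  }
}
```

Whatever tool handles the SAN-to-S3 archiving, a lifecycle rule like this can take over the "progressively colder" part once data lands in the bucket.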

Budgets are tight, but my time is not. I'm not afraid to learn and deploy some open source software that fulfills these requirements, but I don't know what that software is. If I have to buy something, I would prefer to be able to configure it with terraform.

Thanks in advance for your suggestions!

https://redd.it/1rxuc40
@r_devops
Has anyone actually used Port1355? Worth it or just hype?

Has anyone here actually used this? Is it worth trying?

I know I could just search or ask AI, but I’m more interested in hearing from real people who have used it and seen actual benefits.

Not just something that’s “nice to have,” but something genuinely useful.

https://port1355.dev/

https://redd.it/1ryrj1p
@r_devops
I calculated how much my CI failures actually cost

I calculated how much failed CI runs cost over the last month - the number was worse than I expected.

I've been tracking CI metrics on a monorepo pipeline that runs on self-hosted 2xlarge EC2 spot instances (we need the size for several of the jobs).

It's a build and test workflow with 20+ parallel jobs per run - Docker image builds, integration tests, system tests. Over about 1,300 runs the success rate was 26%. 231 failed, 428 cancelled, 341 succeeded. Average wall-clock time per run is 43 minutes, but the actual compute across all parallel jobs averages 10 hours 54 minutes. Total wasted compute across failed and cancelled runs: 208 days. So almost exactly half of all compute produced nothing.

That 43 min to 11 hour gap is what got me. Each run feels like 43 minutes but it's burning nearly 11 hours of EC2 time across all the parallel jobs. 15x multiplier.

On spot 2xlarge instances at ~$0.15/hr, 208 days of waste works out to around $750. On-demand would be 2-3x that. Not great, but honestly the EC2 bill is the small part.

The expensive part is developer time. Every failed run means someone has to notice it, dig through logs across 20+ parallel jobs, figure out if it's their code or a flaky test or infra, fix it or re-run, wait another 43 minutes, then context-switch back to what they were doing before. At a 26% success rate that's happening 3 out of every 4 runs. If you figure 10 min of developer time per failure at $100/hr loaded cost, the 659 failed+cancelled runs cost something like $11K in engineering time. The $750 EC2 bill barely registers.
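Plugging the post's own numbers into a quick script (the 10-minute triage time and $100/hr loaded cost are the figures quoted above, not measurements):

```python
# Back-of-envelope model for the waste described above.
HOURS_WASTED = 208 * 24        # 208 days of thrown-away compute, in hours
SPOT_RATE = 0.15               # $/hr for a 2xlarge spot instance
ec2_waste = HOURS_WASTED * SPOT_RATE

FAILED_OR_CANCELLED = 231 + 428  # 659 runs that produced nothing
TRIAGE_HOURS = 10 / 60           # ~10 min of developer attention per bad run
LOADED_RATE = 100                # $/hr fully loaded engineer cost
dev_waste = FAILED_OR_CANCELLED * TRIAGE_HOURS * LOADED_RATE

print(f"EC2 waste: ${ec2_waste:,.0f}")  # ~$749
print(f"Dev waste: ${dev_waste:,.0f}")  # ~$10,983
```

The developer-time figure is roughly 15x the compute figure, which is why the EC2 bill "barely registers".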

A few things surprised me:

The cancelled runs (428) actually outnumber the failed runs (231). They have concurrency groups set up, so when a dev pushes a new commit before the last build finishes the old run gets cancelled. Makes sense as a policy, but it means a huge chunk of compute gets thrown away mid-run. Also, at 26% success rate the CI isn't really a safety net anymore — it's a bottleneck. It's blocking shipping more than it's catching bugs. And nobody noticed because GitHub says "43 minutes per run" which sounds totally fine.

Curious what your pipeline success rate looks like. Has anyone else tracked the actual wasted compute time?

https://redd.it/1rxlfxd
@r_devops
New junior DevOps engineer - the best way to succeed

Hi guys, I started working as a junior DevOps engineer 9 days ago; before that I finished college and worked 1 year as a T1 system administrator.

Now, I have my own dedicated mentor/buddy, and the first few days were really awesome: he was happy to share information and help with everything. But in the last few days I've been getting some really weird feedback with a blaming vibe about how I don't know something. And I'm not asking silly things; for example, I check with him before running any plan or apply script in our CI/CD pipeline, because I don't want to destroy anything, and similar situations. Now he has already told our team lead about it, which makes me a bit worried/scared about how to proceed. I do believe it's smart not to be a hero, but if questions in the first few weeks (even months) get treated as "how come you don't know that" for a person who has never worked in this position, and then get reported to the TL, I'm really confused about what to ask and how to approach things.

Also, documentation almost doesn't exist. As seniors were leaving the company, documentation wasn't written, and now too many of them have left, and the few who are still here don't have time to write it because of their workload, which I can understand. One piece of feedback I also got was: why don't I ask questions in daily meetings when he is explaining something? Well, how should I ask if even in DMs he seems a bit unwilling to help? My bf is telling me that situations like this never got better for him in the past, so he says I should already be chasing another opportunity while passively staying in this one.

I don't know, I don't like quitting at all, and it's really a great opportunity, but I never had situation like this.

And yeah, college, courses, certs and even my own projects barely scratch the surface once you get into production; about the only thing helping me is knowing some commands around the terminal haha.

https://redd.it/1rxipdh
@r_devops
Chubo: An attempt at a Talos-like, API-driven OS for the Nomad/Consul/Vault stack

TL;DR: I’m building Chubo, an immutable, API-driven Linux distribution designed specifically for the Nomad / Consul / Vault stack. Think "Talos Linux," but for (the OSS version of) the HashiCorp ecosystem—no SSH-first workflows, no configuration drift, and declarative machine management. Currently in Alpha and looking for feedback from operators.

I’ve been building an experiment called Chubo:

[https://github.com/chubo-dev/chubo](https://github.com/chubo-dev/chubo)

The basic idea is simple: I love the Talos model—no SSH, machine lifecycle through an API, and zero node drift. But Talos is tightly tied to Kubernetes. If you want to run a Nomad / Consul / Vault stack instead, you usually end up back in the world of SSH, configuration management (Ansible/Chef/Puppet ...), and nodes that slowly drift into snowflakes over time. Chubo is my exploration of what an "appliance-model" OS looks like for the HashiCorp ecosystem.

The Current State:

* No SSH/Shell: Manage the OS through a gRPC API instead.
* Declarative: Generate, validate, and apply machine config with chuboctl.
* Native Tooling: It fetches helper bundles so you can talk to Nomad/Consul/Vault with their native CLIs.
* The Stack: I’m maintaining forks aimed at this model: openwonton (Nomad) and opengyoza (Consul).

The goal is to reduce node drift without depending on external config management for everything and bring a more appliance-like model to Nomad-based clusters.

I’m looking for feedback:

* Does this "operator model" make sense outside of K8s?
* What are the obvious gaps you see compared to "real-world" ops?
* Is removing SSH as the primary interface viable for you, or just annoying?

Note: This is Alpha and currently very QEMU-first. I also have a reference platform for Hetzner/Cloud here: [https://github.com/chubo-dev/reference-platform](https://github.com/chubo-dev/reference-platform)

Other references:

[https://github.com/openwonton/openwonton](https://github.com/openwonton/openwonton)

[https://github.com/opengyoza/opengyoza](https://github.com/opengyoza/opengyoza)

https://redd.it/1ryvqo2
@r_devops
I got tired of writing boilerplate config parsers in C, so I built a zero-dependency schema-to-struct generator (cfgsafe)

Hey everyone,

Like a lot of you, I find dealing with application configuration in C to be a massive pain. You usually end up choosing between:

1. Pulling in a heavy library.
2. Using a generic INI parser that forces you to use string lookups (`hash_get("db.port")`) everywhere.
3. Writing a bunch of manual, brittle `strtol` and validation boilerplate.

I wanted something that gives me **strongly-typed structs** and **guarantees that my data is valid** before my core application logic even runs.

So I built **cfgsafe**. It’s a pure C99 code generator and parser.

You define your configuration shape in a tiny `.schema` file:

schema ServerConfig {
    service_name: string {
        min_length: 3
    }

    section database {
        host: string { default: "localhost", env: "DB_HOST" }
        port: int { range: 1..65535 }
    }

    use_tls: bool { default: false }

    cert: path {
        required_if: use_tls == true
        exists: true
    }
}

Then you run my generator (`cfg-gen config.schema`). It spits out a **single-file STB-style C header** containing both your exact structs and the parsing implementation.

In your `main.c`, using it is completely native and completely safe:

ServerConfig_t cfg;
cfg_error_t err;

// Loads the INI, applies ENV variables, and runs your validation checks
cfg_status_t status = ServerConfig_load(&cfg, "config.ini", &err);

if (status == CFG_SUCCESS) {
    // 100% type-safe. No void pointers. No manual parsing.
    printf("Starting %s on %s:%d\n",
           cfg.service_name,
           cfg.database.host,
           (int)cfg.database.port);

    ServerConfig_free(&cfg);
} else {
    // Gives you granular errors: e.g. "Field 'database.port' out of range"
    fprintf(stderr, "Startup error (%s): %s\n", err.field, err.message);
}

# Why I think it's cool:

* **Zero Dependencies:** No external regex engines or JSON libraries needed. The generated STB header is all you need.
* **Complex Validation Baked In:** Built-in support for numeric ranges (`1..100`), regex patterns, array lengths, cross-field conditional logic (`required_if`), and even checking if file paths actually exist on the system *during* parsing!
* **First-Class Env Variables:** If `DB_HOST` is set in the environment, it seamlessly overrides the INI file.

I’d love to get feedback from other C developers. Is this something you'd use in your projects? Are there config features I missed?

**Repo:** [https://github.com/aikoschurmann/cfgsafe](https://github.com/aikoschurmann/cfgsafe) *(Docs and examples are in the README!)*

https://redd.it/1ryup8z
@r_devops
Managing state of applications

I recently got a new job and I'm importing every cloud resource into IaC. Then I will just change the Terraform variables and deploy everything to prod (they don't have a prod yet).

There are Postgres and Keycloak deployments. I also think I should manage the Postgres databases and users in code via Ansible, and the same with Keycloak. I'm thinking of reducing the developers' permissions in Postgres and Keycloak, so the only way they can create stuff is through PRs to the Ansible repo with my review.

I want to double-check whether this has any downsides or is good practice.
Any comments?
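For the Postgres half, a minimal Ansible sketch using the community.postgresql collection (host group, database, user, and vault variable names are all placeholders):

```yaml
# playbook.yml - databases and users managed declaratively, changed via PRs
- hosts: db_servers
  become: true
  tasks:
    - name: Ensure application database exists
      community.postgresql.postgresql_db:
        name: app_db
        state: present

    - name: Ensure application user exists
      community.postgresql.postgresql_user:
        db: app_db
        name: app_user
        password: "{{ vault_app_user_password }}"
        state: present

    - name: Grant only the privileges the app needs
      community.postgresql.postgresql_privs:
        db: app_db
        role: app_user
        type: database
        privs: CONNECT
```

With something like this in place, revoking the developers' direct DDL rights is what actually forces changes through the PR workflow.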



https://redd.it/1rz19ei
@r_devops
I Benchmarked Redis vs Valkey vs DragonflyDB vs KeyDB

Hi everyone

I just created a benchmark comparing Redis, Valkey, DragonflyDB, and KeyDB.

Honestly this one was pretty interesting, and some of the results were surprising enough that I reran the benchmark quite a few times to make sure they were real.
As requested on my previous benchmarks, I also uploaded the benchmark to GitHub.

|Benchmark|Redis 8.4.0|DragonflyDB v1.37.0|Valkey 9.0.3|KeyDB v6.3.4|
|:-|:-|:-|:-|:-|
|Small writes throughput (higher is better)|452,812 ops/s|494,248 ops/s|432,825 ops/s|385,182 ops/s|
|Hot reads throughput (higher is better)|460,361 ops/s|494,811 ops/s|445,592 ops/s|475,307 ops/s|
|Mixed workload throughput (higher is better)|444,026 ops/s|468,316 ops/s|428,907 ops/s|405,764 ops/s|
|Pipeline throughput (higher is better)|1,179,179 ops/s|951,274 ops/s|1,461,472 ops/s|647,779 ops/s|
|Hot reads p95 latency (lower is better)|0.607 ms|0.743 ms|1.191 ms|0.711 ms|
|Mixed workload p95 latency (lower is better)|0.623 ms|0.783 ms|1.271 ms|0.735 ms|
|Pub/Sub p95 latency (lower is better)|0.592 ms|0.583 ms|1.002 ms|0.557 ms|

Full benchmark + charts: here

GitHub

Happy to run more tests if there’s interest

https://redd.it/1rz2tx1
@r_devops
What cloud cost fixes actually survive sprint planning on your team?

I keep coming back to this because it feels like the real bottleneck is not detection.

Most teams can already spot some obvious waste:

gp2 to gp3

log retention cleanup

unattached EBS

idle dev resources

old snapshots nobody came back to

But once that has to compete with feature work, a lot of it seems to die quietly.

The pattern feels familiar:

everyone agrees it should be fixed

nobody really argues with the savings

a ticket gets created

then it loses to roadmap work and just sits there

So I’m curious how people here actually handle this in practice.

What kinds of cloud cost fixes tend to survive prioritization on your team?

And what kinds usually get acknowledged, ticketed, and then ignored for weeks?

I’ve been building around this problem, so I’m biased, but I’m starting to think the real gap is not finding waste. It’s turning it into work that actually has a chance of getting done.

https://redd.it/1rz607q
@r_devops
Does anyone work for Sky TV UK?

Hi All,

I have an interview scheduled at Sky's head office next Monday for the second round of an SRE engineer role. Does anyone have an idea of what it would be like?

https://redd.it/1rz7blw
@r_devops
Is it wise for me to work on this and migrate out of Jenkins to Bitbucket Pipelines?

I have an existing infra repository that uses terraform to build resources on AWS for various projects. It already have VPC and other networking set up and everything is working well.

I’m looking to migrate it out to opentofu and using bitbucket pipelines to do our CI/CD as opposed to Jsnkins which is our current CI/CD solution.

Is it wise for me to create another VPC on a new mono-repo or should I just leverage the existing VPC? for this?

I’m looking to shift all our staging environment to on-site and using NGINX and ALB to direct all traffic to the relevant on-site resources and only use AWS for prod services. Would love to have your advice on this
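For the CI/CD half, a common shape for this in Bitbucket Pipelines is plan-on-PR, apply-on-main. A sketch (the image reference and trigger layout are assumptions to verify against your setup, not a known-good config):

```yaml
# bitbucket-pipelines.yml - OpenTofu plan on PRs, apply on main
image: ghcr.io/opentofu/opentofu:latest  # check the tag; a wrapper image with a shell may be needed

pipelines:
  pull-requests:
    '**':
      - step:
          name: Plan
          script:
            - tofu init
            - tofu plan
  branches:
    main:
      - step:
          name: Apply
          script:
            - tofu init
            - tofu apply -auto-approve
```

This keeps the Jenkins-to-Bitbucket move orthogonal to the VPC question: the pipeline doesn't care whether the state it manages describes one VPC or two.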

https://redd.it/1rzg0no
@r_devops
Replacing MinIO with RustFS via simple binary swap (Zero-data migration guide)

Hi everyone, I’m from the RustFS team (u/rustfs_official).

If you’re managing MinIO clusters, you’ve probably seen the recent repo archiving. For the r/devops community, "migration" usually means a massive headache—egress costs, downtime, and the technical risk of moving petabytes of production data over the network.

We’ve been working on a binary replacement path to skip that entirely. Instead of a traditional move, you just update your Docker image or swap the binary. The engine is built to natively parse your existing bucket metadata, IAM policies, and lifecycle rules directly from the on-disk format.

Why this fits a DevOps workflow:

Actually "Drop-in": Designed to be swapped into your existing `docker-compose` or K8s manifests. It maintains S3 API parity, so your application-level endpoints don't need to change.
Rust-Native Performance: We built this for high-concurrency AI/ML workloads. Using Rust lets us eliminate the GC-related latency spikes often found in Go-based systems. RDMA and DPU support are on our roadmap to offload the storage path from the CPU.
Predictable Tail Latency: We’ve focused on a leaner footprint and more consistent performance than legacy clusters, especially under heavy IOPS.
Zero-Data Migration: No re-uploading or network transfer. RustFS reads the existing MinIO data layout natively, so you keep your data exactly where it is during the swap.
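As a sketch of what the swap looks like in practice: keep the volumes and ports, change only the image (the image reference and tag below are illustrative; check the RustFS docs for the real ones, and any server subcommand your MinIO deployment used):

```yaml
services:
  object-store:
    # was: image: minio/minio
    image: rustfs/rustfs:latest     # illustrative; verify against the migration guide
    volumes:
      - /mnt/data:/data             # same data directory; no data is moved
    ports:
      - "9000:9000"                 # same S3 endpoint for applications
```

Because the on-disk layout is read natively, the rollback path is equally small: revert the image line.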

We’re tracking the technical implementation and the step-by-step migration guide in this GitHub issue:

https://github.com/rustfs/rustfs/issues/2212

We are currently at v1.0.0-alpha.87 and pushing toward a stable Beta in April.

https://redd.it/1rz148h
@r_devops