Which small cybersecurity company deserves way more attention?
Hey everyone,
I'm curious to hear your thoughts — which lesser-known or small cybersecurity companies do you think are really underrated or deserve way more attention than they’re getting?
I’m not talking about the big names like CrowdStrike, Palo Alto, or SentinelOne, but rather smaller, niche players doing innovative or impactful work. Whether it’s a company with a cool product, a solid team, or just a fresh approach to solving real security challenges — I’d love to learn more.
Looking forward to your recommendations!
https://redd.it/1l82rgf
@r_devops
Why Are GitOps Tools So Popular When Helmfile + GitHub Actions Are Simpler?
I’ve been working with Kubernetes for about 8 years, and I’ve used Helmfile in production enough to feel comfortable with it. It’s simple, declarative, and works well with GitHub Actions or any CI system. It’s easy to reason about, and in many cases, it just works.
I’ve also prototyped ArgoCD and Flux, and honestly… I don’t get the appeal.
From my perspective:
* GitOps tools introduce a lot of complexity: CRDs, controllers, syncing logic, and additional moving parts that can be hard to debug.
* Debugging issues in GitOps setups can be non-intuitive, especially when something silently drifts or fails to sync.
* Helmfile + CI/CD is transparent and flexible: you know exactly what’s being applied and when.
What’s even more confusing is that I often see teams using CI tools alongside GitOps, not because they want to, but because they have to. For example:
* GitOps tools don’t handle templating or secrets management directly, so you end up needing tools like External Secrets, which isn’t always appropriate.
* It’s also surprisingly difficult to pass output values from your IaC tool (like Terraform or Pulumi) into your cluster via GitOps. Tools like Crossplane try to bridge that gap, but in practice, it often feels convoluted and heavy for what should be a simple handoff.
And while I’ll admit the ArgoCD dashboard is nice, you can get a similar experience using something like Headlamp, which doesn’t even require installing anything in your cluster.
Another thing I don’t quite get is the strong preference for pull-based over push-based workflows. People say pull is “more secure” or “more GitOps-y,” but:
* It’s not difficult to keep cluster credentials safe in a push-based system.
* You often end up triggering syncs manually or via CI anyway.
* Push-based workflows are simpler to reason about and easier to integrate with IaC tools.
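For concreteness, here is a minimal sketch of the push-based Helmfile + GitHub Actions setup described above. The workflow path, secret name, and environment name are hypothetical, and installing `helmfile` and `kubectl` on the runner is elided:

```yaml
# .github/workflows/deploy.yml -- illustrative only
name: deploy
on:
  push:
    branches: [main]
jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Write kubeconfig              # hypothetical secret name
        run: echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > kubeconfig.yaml
      - name: Apply releases declaratively  # assumes helmfile is on the runner
        run: helmfile --environment production apply
        env:
          KUBECONFIG: ${{ github.workspace }}/kubeconfig.yaml
```

The whole deployment is visible in one CI log, which is the transparency argument: what was applied, when, and by which commit.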
Yet GitOps seems to be the default recommendation everywhere: Reddit, blogs, conference talks, etc. It feels like the popularity is driven more by:
1. Vendor marketing: GitOps tools are often backed by companies with strong incentives to push them. Think Akuity (ArgoCD), Codefresh, Control Plane, and previously Weaveworks (Flux).
2. Social momentum: Once a few big players adopt something, it becomes the “best practice.”
3. Buzzword appeal: “GitOps” sounds cool and modern, even if the underlying mechanics aren’t new.
Curious to hear from others:
* Have you used both GitOps tools and simpler CI/CD setups?
* What made you choose one over the other?
* Do you think GitOps is overhyped, or am I missing something?
https://redd.it/1l85yu8
@r_devops
Should I add links to public github repo's i've contributed to on my resume?
Been sprucing up the ol' resume as I'm not too thrilled where things are going at my current job. It's a shame too, as I love working with the team I have.
Currently, I am employed at a GCP-centric consulting company. We are partnered with Google Cloud and have done many projects for them. Over the course of the last two years I had a big hand in 2 major projects, which were eventually published by Google and now sit in their official repositories. Out of the two, I authored one of them myself along with a data engineer, while on the other I was part of a smaller team in which I and two other engineers were responsible mainly for infrastructure (all Terraform).
To me, this is a big milestone in my career, and obviously I would like to point it out on my resume. I'm a bit conflicted as to whether to add links to these repositories somewhere on my resume or not. I'm unsure if 1) the AI or algorithm HR uses will flag links on my resume and weed it out, and 2) if it does pass, whether managers will even bother looking at them.
https://redd.it/1l81w0r
@r_devops
CNCF, Your Certification Exams Are a Privileged, Ableist Joke — And I'm Done Pretending Otherwise
I’m sick of it.
These so-called "industry standard" Kubernetes certifications (CKA, CKAD, CKS) have become a monument to privilege, not merit. You want to prove your skills in Kubernetes? Cool. But apparently, first you need to prove you own a luxury apartment, live alone in a soundproof bunker, and don’t blink too much.
Let me break this down for the CNCF and their sanctimonious proctors:
Not everyone has a dedicated home office.
Not everyone can afford to book a quiet coworking space or even a hotel for a whole night just to take your absurdly strict exam.
Not everyone lives in a country where stable internet is guaranteed, or where the "exam spyware" even runs properly.
And some of us are disabled, neurodivergent, or otherwise unable to sit still and silent in front of a single screen while being eyeball-tracked by an AI that treats a sneeze like a felony.
You know what happens when I try to take the exam from my living room — which, by the way, is also my office, bedroom, and kitchen?
I get flagged because someone walked past the door.
I get banned for “looking away” to stretch my neck.
I get stressed out to hell before the exam even starts, just trying to pass the ridiculous room scan.
And then if the proctor’s software crashes, guess what? No refund. No re-entry. No second chance. Just another $395 down the drain.
Oh, and let’s talk about ableism, shall we?
People with ADHD, autism, mobility constraints, chronic pain — you’ve built a system that excludes them by default. Can’t sit still? Can’t control your eye movement? Can’t guarantee your kid won’t cry in the next room?
Too bad. No cert for you. Try again with a different life.
This isn’t “security.” It’s elitism wrapped in bureaucracy.
You know who passes these exams easily? People in tech hubs, with quiet apartments, corporate backing, expensive equipment, and no roommates.
You know who gets flagged, banned, or priced out? Everyone else.
So here’s a wild idea:
Make it fair. Make it accessible. Make it human.
Offer test centers.
Offer accommodations.
Stop treating remote exam-takers like criminals.
And while you’re at it, stop pretending like this system represents “the future of cloud.”
It represents the past, just with more invasive surveillance.
Signed,
One very pissed-off cloud engineer
Who doesn’t need your cert to prove it
But wanted the badge anyway, before you made it a gatekeeping farce
https://redd.it/1l88uej
@r_devops
Monitoring showed green. Users were getting 502s. Turns out it was none of the usual suspects.
Ran into this with a client recently.
They were seeing random 502s and 503s. Totally unpredictable.
Code was clean. No memory leaks. CPU wasn’t spiking.
They were using Watchdog for monitoring and everything looked normal.
So the devs were getting blamed.
I dug into it and noticed memory usage was peaking during high-traffic periods.
But it would drop quickly: the spikes were long enough to cause issues, yet short enough to disappear before anyone saw them.
Turns out Watchdog was only sampling every 5 mins (and even slower for longer time ranges).
So none of the spikes were ever caught. Everything looked smooth on the graphs.
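The sampling effect is easy to reproduce. This is a hypothetical sketch (the interval values are illustrative, not Watchdog's actual internals): a 90-second memory spike sits invisibly between 5-minute samples, but shows up at a 15-second scrape interval.

```python
# Per-second memory usage: flat at 40%, with a 90-second spike to 98%.
SECONDS = 3600
usage = [40.0] * SECONDS
for t in range(1810, 1900):   # spike from t=1810s to t=1899s
    usage[t] = 98.0

coarse = usage[::300]  # one sample every 5 minutes
fine = usage[::15]     # one sample every 15 seconds

print(max(coarse))     # 40.0 -- the spike falls between samples
print(max(fine))       # 98.0 -- a finer interval catches it
```

Any monitor that samples less often than the spike lasts will draw a perfectly smooth graph over it.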
We swapped it out for Prometheus + Node Exporter and let it collect for a few hours.
There it was: full memory saturation during peak times.
We set up autoscaling based on memory utilization to handle peak traffic demands.
Errors gone. Devs finally off the hook.
Lesson: when your monitoring doesn’t show the pain, it’s not the code. It’s the visibility.
Anyway, just thought I’d share in case anyone’s been hit with mystery 5xxs and no clear root cause.
If you’re dealing with anything similar, I wrote up a quick checklist we used to debug this. DM me if you want a copy.
Also curious: have you ever chased a bug and it ended up being something completely different than what everyone thought?
Would love to read your war stories.
https://redd.it/1l86ynq
@r_devops
Thinking about “tamper-proof logs” for LLM apps - what would actually help you?
Hi!
I’ve been thinking about “tamper-proof logs for LLMs” these past few weeks. It's a new space with lots of early conversations, but no off-the-shelf tooling yet. Most teams I meet are still stitching together scripts, S3 buckets and manual audits.
So, I built a small prototype to see if this problem can be solved. Here's a quick summary of what we have:
1. encrypts all prompts (and responses) following a BYOK approach
2. hash-chains each entry and publishes a public fingerprint so auditors can prove nothing was altered
3. lets you decrypt a single log row on demand when someone (an auditor) says “show me that one.”
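The hash-chain idea in point 2 can be sketched in a few lines. This is a minimal illustration of the technique, not the prototype's actual code: each entry's hash covers its payload plus the previous entry's hash, so editing any row breaks every link after it.

```python
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(chain: list, payload: str) -> dict:
    """Append a log entry whose hash covers the payload plus the previous hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    entry_hash = hashlib.sha256((prev + payload).encode()).hexdigest()
    entry = {"payload": payload, "prev": prev, "hash": entry_hash}
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every link; any edited entry invalidates the chain."""
    prev = GENESIS
    for e in chain:
        if e["prev"] != prev:
            return False
        if hashlib.sha256((prev + e["payload"]).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

chain = []
append_entry(chain, "prompt: summarize report X")
append_entry(chain, "response: ...")
print(verify(chain))            # True
chain[0]["payload"] = "tampered"
print(verify(chain))            # False
```

Publishing the final hash as a public fingerprint is what lets an auditor confirm the whole chain without seeing the (encrypted) payloads.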
Why this matters
Regulators are catching up with AI-first products, and frameworks like HIPAA, FINRA, SOC 2, and the EU AI Act are raising the bar. Think healthcare chatbots leaking PII or fintech models misclassifying users. Evidence requests are only going to get tougher, and juggling spreadsheets + S3 is already painful.
My ask
What feature (or missing piece) would turn this prototype into something you’d actually use? Export, alerting, Python SDK? Or something else entirely? Please comment below!
I’d love to hear how you handle “tamper-proof” LLM logs today, what hurts most, and what would help.
Brutal honesty welcome. If you’d like to follow the journey and access the prototype, DM me and I’ll drop you a link to our small Slack.
Thank you!
https://redd.it/1l8bxl3
@r_devops
What's eating up most of your time as a DevOps engineer?
I've been in DevOps for several years and I'm curious if others are experiencing the same time drains I am. Feels like we're all constantly reinventing the wheel.
What repetitive tasks are killing your productivity?
For me, it's:
* Setting up Jenkins pipelines for the 100th time with slight variations
* Terraform configs that are 90% copy-paste from previous projects
* Debugging why the same deployment failed... again
* Writing Ansible playbooks for standard server configurations
* Answering "why is the build broken?" at 2 AM
Quick questions:
1. What repetitive tasks eat up most of your day?
2. How many hours/week do you spend on "boring but necessary" work?
3. If you could automate or delegate any part of your job, what would it be?
4. For developers: How long do you typically wait for DevOps to set up environments/pipelines?
Just trying to see if this is a universal experience or if some teams have figured out better ways to handle the mundane stuff.
https://redd.it/1l8dsax
@r_devops
Instrumentation Score - an open spec to measure instrumentation quality
Hi, Juraci here. I'm an active member of the OpenTelemetry community, part of the project's governance committee, and since January, co-founder at OllyGarden. But this isn't about OllyGarden.
This is about a problem I've seen for years: we pour tons of effort into instrumentation, but we've never had a standard way to measure if it's any good. We just rely on gut feeling.
To fix this, I've started working with others in the community on an open spec for an "Instrumentation Score." The idea is simple: a numerical score that objectively measures the quality of OTLP data against a set of rules.
Think of rules that would flag real-world issues, like:
* Traces missing `service.name`, making them impossible to assign to a team.
* High-cardinality metric labels that are secretly blowing up your time series database.
* Incomplete traces with holes in them because context propagation is broken somewhere.
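A rule like the first one could be as simple as "fraction of spans whose resource carries `service.name`". The span shape and scoring below are my own illustration, not the spec's actual schema:

```python
def score_spans(spans: list) -> float:
    """Hypothetical rule: score = fraction of spans with a service.name resource attribute."""
    if not spans:
        return 1.0  # nothing to penalize
    ok = sum(1 for s in spans if s.get("resource", {}).get("service.name"))
    return ok / len(spans)

spans = [
    {"name": "GET /users", "resource": {"service.name": "api"}},
    {"name": "SELECT users", "resource": {}},  # missing service.name
]
print(score_spans(spans))  # 0.5
```

A real spec would presumably weight rules by severity and aggregate across signal types, but per-rule pass rates are the natural building block.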
The early spec is now on GitHub at [https://github.com/instrumentation-score/](https://github.com/instrumentation-score/), and I believe this only works if it's a true community effort. The experience of the engineers here is what will make it genuinely useful.
What do you think? What are the biggest "bad telemetry" patterns you see, and what kinds of rules would you want to add to a spec like this?
https://redd.it/1l8jm3u
@r_devops
I’m co-founder at SigNoz - an open-source Datadog alternative with over 22k GitHub stars. Ask Me Anything! [AMA]
Hey r/devops!
I am Pranay, one of the co-founders of [SigNoz](https://github.com/SigNoz/signoz), an OpenTelemetry-native observability tool that provides APM, logs, traces, metrics, exceptions, alerts, etc. in a single tool.
A bit on how and why we started SigNoz:
Four years back, my co-founder Ankit and I identified a gap in observability tooling. There was a huge difference between what was available in open source vs. proprietary tools. We thought there should be much better tooling available in open source. There was none, so we started building one.
We applied with this idea to YCombinator and were selected.
Four years on, we now have a much more mature product, many users using the product every day, and a GitHub repo with 22K stars (a vanity metric, but at least it shows some interest).
Not here to sell anything, but I thought our journey might be interesting to some and might inspire the next set of people. Feel free to ask me anything about building and maintaining SigNoz, observability practices, etc. A few things in my mind that we can talk about:
- engineering and technical questions around SigNoz
- existing and upcoming features
- Building and maintaining an open-source project
- existing observability landscape, your pain points, etc.
- state of opentelemetry and its future
or anything related to observability in general. SigNoz is now being used by engineering teams at companies of all sizes, so I can definitely help with questions around your observability setup.
I will start answering questions from 9:30 am PT (11th June, Wednesday). Leaving it here now so that folks from other timezones can leave their questions. Looking forward to a great chat.
To prove that I am real and not an LLM bot :) : https://www.linkedin.com/posts/pranay01_if-youre-on-reddit-i-am-doing-a-reddit-activity-7338425383240773634-dz6V
https://redd.it/1l8jrc2
@r_devops
how do you stay efficient when working inside large, loosely connected codebases?
I spent most of this week trying to refactor a part of our app that fetches external reports, processes them, and displays insights across different user dashboards.
The logic is spread out:
– the fetch logic lives in a service file that wraps multiple third-party API calls
– parsing is done via utility functions buried two folders deep
– data transformation happens in a custom hook, with conditional mappings based on user role
– the UI layer applies another layer of formatting before rendering
None of this is wrong on its own, but there’s minimal documentation and almost no direct link between layers.
Though I used Blackbox to surface a few related usages and pattern matches, which actually helped, the real work was just reading line by line and mapping it all mentally.
The actual change was small: include an extra computed field and display it in two places. But every step required tracing back assumptions and confirming side effects.
In tightly scoped projects, I guess this would’ve taken 30 minutes. Here, it took almost two days.
what’s your actual workflow in this kind of environment?
do you write temporary trace logs? build visual maps? lean on tests or rewrite from scratch?
I’m trying to figure out how to be faster at handling this kind of loosely coupled structure without relying on luck or too much context switching
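On the temporary-trace-logs question: one language-agnostic trick (sketched here in Python; the function and field names are made up) is a throwaway decorator that logs every call and return value across the layers you're mapping, so you can watch a value flow through fetch → parse → transform without reading each file:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("trace")

def trace(fn):
    """Temporary trace log: record each call's arguments and return value."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("-> %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        result = fn(*args, **kwargs)
        log.debug("<- %s returned %r", fn.__name__, result)
        return result
    return wrapper

@trace  # slap this on each layer's entry point while investigating
def compute_field(report: dict) -> int:
    return report.get("clicks", 0) * 2

print(compute_field({"clicks": 21}))  # 42
```

Delete the decorators once the mental map is built; the point is cheap, removable visibility, not permanent instrumentation.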
https://redd.it/1l8julj
@r_devops
PSA: MS has an expired cert on onegetcdn.azureedge.net
As the title says, the MS cert expired a few hours ago, and pipelines with the Power Platform Tool Installer task may fail when trying to connect to this shared CDN service: unable to get NuGet.
We've raised a sev1 with MS; they're investigating and will hopefully resolve it soon…
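For anyone who wants to confirm this kind of outage themselves, a host's certificate expiry can be read with a few lines of stdlib Python. A sketch; note that with default verification an already-expired cert makes the handshake itself raise `ssl.SSLCertVerificationError`, which is the signal:

```python
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse the 'notAfter' field of ssl.getpeercert(),
    e.g. 'Jun 12 12:00:00 2025 GMT'."""
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Connect to host and return its certificate's expiry time.

    An expired cert makes wrap_socket raise before we ever get here.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return parse_not_after(cert["notAfter"])
```

Usage would be something like `cert_expiry("onegetcdn.azureedge.net")`, wrapped in a try/except for the verification error.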
https://redd.it/1l8madc
@r_devops
How to get started with observability as a developer?
Hi,
I am a backend developer looking to learn and implement observability.
What would be a good starting point for learning the domain language around observing applications?
How do observability and alerting fit into product architecture?
What are some good and robust open-source tools for observability?
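As one concrete starting point: before adopting any tool, the usual first step is emitting structured (JSON) logs that a log pipeline can index, since most observability stacks build on that. A minimal stdlib-only sketch:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line -- machine-parseable
    telemetry that a log pipeline can index and alert on."""
    def format(self, record):
        return json.dumps({
            "ts": round(record.created, 3),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed")  # emits a single JSON line
```

From there, metrics (e.g. Prometheus) and traces (e.g. OpenTelemetry) follow the same idea: structured signals with consistent field names.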
https://redd.it/1l8nt8v
@r_devops
Built a simple SSH jump tool (sshop) for managing many client/server combos
Hey all!
I built sshop, a lightweight CLI helper that lets you pick a client → server from a structured JSON config file and SSH into it instantly. I built it because of my own struggle managing many clients, each with dev/stage/prod environments.
Under the hood it uses fzf + jq for fast, interactive selection, and it allows adding, updating, and deleting servers via CLI flags.
I made it open source, and I'm curious whether others find it useful or have any feedback or suggestions.
Repo with more info can be found here: https://github.com/Skullsneeze/sshop
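For readers curious about the shape of such a tool, the core client → server resolution is tiny. A sketch with a hypothetical config schema (not sshop's actual format; the real tool drives the choice through an fzf picker rather than function arguments):

```python
import json

# Hypothetical nested client -> environment -> host mapping,
# similar in spirit to what a tool like sshop might read.
CONFIG = json.loads("""
{
  "acme":   {"dev": "dev.acme.example", "prod": "prod.acme.example"},
  "globex": {"stage": "stage.globex.example"}
}
""")

def ssh_command(client: str, env: str, user: str = "deploy") -> list[str]:
    """Resolve client/env to a host and build the argv for ssh."""
    host = CONFIG[client][env]
    return ["ssh", f"{user}@{host}"]
```

The value of the tool is mostly in the interactive picking and config management; the resolution itself stays this simple.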
https://redd.it/1l8qino
@r_devops
Built a tool to stop wasting hours debugging Kubernetes config issues
Spent way too many late nights debugging "mysterious" K8s issues that turned out to be:
- Typos in resource references
- Missing ConfigMaps/Secrets
- Broken service selectors
- Security misconfigurations
- Docker images that don't exist or have the wrong architecture
Built Kogaro to catch these before they cause incidents. It's like a linter for your running cluster.
Key insight: Most validation tools focus on policy compliance. Kogaro focuses on operational reality - what actually breaks in production.
Features:
- 60+ validation types for common failure patterns
- Docker image validation (registry existence, architecture compatibility, version)
- Structured error codes (KOGARO-XXX-YYY) for automated handling
- Prometheus metrics for monitoring trends
- Production-ready (HA, leader election, etc.)
Takes 5 minutes to deploy, immediately starts catching issues.
Latest release v0.4.2: https://github.com/topiaruss/kogaro
Demo: https://kogaro.dev
What's your most annoying "silent failure" pattern in K8s?
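To make one of those failure classes concrete: a broken service selector reduces to a failed label-subset check, since a Service selects a Pod only when every selector key/value appears in the Pod's labels. A sketch of that check (not Kogaro's actual implementation):

```python
def selector_matches(selector: dict, labels: dict) -> bool:
    """A Service selects a Pod when every selector key/value
    appears in the Pod's labels -- i.e. selector ⊆ labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# A typo'd selector silently matches nothing -- the classic silent failure:
selector = {"app": "web-frontned"}  # note the typo
pod_labels = {"app": "web-frontend", "tier": "web"}
assert not selector_matches(selector, pod_labels)
```

The silent part is that Kubernetes accepts the manifest without complaint; the Service just ends up with zero endpoints.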
https://redd.it/1l8qwyq
@r_devops
Anyone else learning Python just to stop copy-pasting random shell commands?
When I started working with cloud stuff, I kept running into long shell commands and YAML configs I didn't fully understand.
At some point I realized: if I learned Python properly, I could actually automate half of it, and understand what I was doing instead of blindly copy-pasting scripts from Stack Overflow.
So I’ve been focusing more on Python scripting for small cloud tasks:
→ launching test servers
→ formatting JSON from AWS CLI
→ even writing little cleanup bots for unused resources
Still super early in the journey, but honestly, using Python this way feels way more rewarding than just “finishing tutorials.”
Anyone else taking this path — learning Python because of cloud/infra work?
Curious how you’re applying it in real projects.
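As one example of the "formatting JSON from the AWS CLI" task: flattening nested CLI output into readable rows is a few lines of stdlib Python. The sample below is a simplified, illustrative shape of `aws ec2 describe-instances` output, not a full response:

```python
import json

# Trimmed-down sample of `aws ec2 describe-instances` output;
# the real response carries many more fields per instance.
RAW = """
{"Reservations": [{"Instances": [
  {"InstanceId": "i-0abc", "State": {"Name": "running"}, "InstanceType": "t3.micro"},
  {"InstanceId": "i-0def", "State": {"Name": "stopped"}, "InstanceType": "t3.small"}
]}]}
"""

def summarize(raw: str) -> list[str]:
    """Flatten the nested reservation/instance structure into one row per instance."""
    data = json.loads(raw)
    rows = []
    for reservation in data["Reservations"]:
        for inst in reservation["Instances"]:
            rows.append(
                f'{inst["InstanceId"]}  {inst["InstanceType"]}  {inst["State"]["Name"]}'
            )
    return rows
```

In practice the raw string would come from `subprocess.run(["aws", "ec2", "describe-instances"], ...)` instead of a literal.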
https://redd.it/1l8uhvk
@r_devops
8 YOE, all at the same company. Is my resume senior-worthy at a tech company?
Hey all,
I've been working full-time for over 8 years at the same Fortune 500 non-tech company (and interned at a different one before that), but I'm finally ready to look elsewhere because I feel underpaid relative to the value I can create. Here's my anonymized resume:
https://imgur.com/a/nd3T1MA
I've been in 4 different organizations within the company, but I can't tell whether I'll actually get looks from FAANG-adjacent companies or if I'm wasting my time going through the application process. The bar to meet expectations at my current company is so low that I worry it's made me soft/lazy/unattractive to more prestigious employers. I don't want to get into a senior or staff interview and make an ass of myself. What are your thoughts?
Thank you!
https://redd.it/1l8yyie
@r_devops
Change Log Creation
I added a step to my build process to generate a changelog from the commit messages, grouped by date, relative to the last tag. Now I'm facing an interesting decision and want some suggestions. Option 1: call the changelog build task when I generate the release (on GitHub) and only make it part of the release. Option 2: generate the changelog on build and commit it back to the repository as part of the build process. I'm not thrilled with either option. I want to make this as easy as possible, but it feels dirty to commit as part of the build. I could do this as a pre-commit hook as well; not sure if that's better, but it would require some setup on the dev machine. What are you folks doing in a similar scenario? This is part of a generic build agent/pipeline; I think I posted it here already.
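Whichever option wins, the rendering step can stay a pure function fed by something like `git log $(git describe --tags --abbrev=0)..HEAD --pretty='%as %s'` (assuming "since the last tag" is the intended range), which keeps it easy to test outside the pipeline. A sketch:

```python
from collections import defaultdict

def render_changelog(commits: list[tuple[str, str]]) -> str:
    """Group (date, subject) commit pairs by date and render a
    Markdown changelog, newest date first."""
    by_date = defaultdict(list)
    for date, message in commits:
        by_date[date].append(message)
    lines = ["# Changelog", ""]
    for date in sorted(by_date, reverse=True):  # ISO dates sort correctly as strings
        lines.append(f"## {date}")
        lines.extend(f"- {m}" for m in by_date[date])
        lines.append("")
    return "\n".join(lines)
```

Keeping the git invocation and the rendering separate also means the same function works whether it runs at release time (option 1) or on every build (option 2).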
https://redd.it/1l8z1q2
@r_devops
Airflow: how to reload webserver_config.py without restarting the webserver?
I tried making edits to the config file, but the changes don't get picked up. Using Airflow 2. Surely there must be a way to reload without restarting the pod?
https://redd.it/1l8yl06
@r_devops
Cloud DevOps mentorship/tutoring needed
Background
I am an MSc IT security student in Germany and a BTech computer science graduate from India, with multiple internships in full-stack web dev. I have completed some courses on Docker and the AWS Cloud Practitioner certification.
Expectations
I will complete the first year of my MSc in 3 more months, after which I need to land a job with a company to do my master's thesis alongside them. I want to do it specifically at the intersection of cloud DevOps and security.
Requirement
I am looking for an experienced cloud DevOps engineer (at least 1 year of experience) who can get me interview-ready for such roles. I only have 3 months to land a job, so the duration of the contract will also be 3 months. I specifically want to learn in depth about Kubernetes, observability, and infrastructure as code (Terraform).
Bonus
If someone can also teach me the security aspects of cloud DevOps and suggest a potential master's thesis topic in this field, that would be very beneficial for me.
Pay: up to 12 euros per hour
https://redd.it/1l93ej2
@r_devops
What do you use to automate self-healing scripts?
Hey everyone! Just asking this to see if I'm missing something or the hereditary blindness has already got me.
The thing is, I've been a DevOps engineer for about 5–6 years at two different companies, and in both of them my main task was creating auto-remediation/self-healing scripts that run automatically when a monitoring tool detects something like a spike in CPU, swap usage, or low disk space.
For that whole pipeline, I've been using a mix of Python/Go/Shell (sensible scripts), orchestrated by Rundeck/Jenkins/n8n/Tower as the executors, and Grafana/Datadog or similar tools for monitoring.
So my question is: is there anything dedicated to this? I mean, a tool that, when a monitoring metric hits a threshold, can automatically trigger something on a machine or group of machines?
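Whatever executor ends up running the scripts, the glue is usually a small dispatch layer mapping an alert name to a remediation action. A sketch; the payload shape loosely follows an Alertmanager-style webhook body, and the handlers are made up:

```python
# Hypothetical remediation handlers -- in reality these would shell out
# to the existing Python/Go/Shell scripts.
def clear_tmp(labels: dict) -> str:
    return f"cleared /tmp on {labels['instance']}"

def restart_service(labels: dict) -> str:
    return f"restarted {labels.get('service', 'unknown')} on {labels['instance']}"

# Alert name -> remediation action registry.
REMEDIATIONS = {
    "DiskSpaceLow": clear_tmp,
    "ServiceDown": restart_service,
}

def handle_webhook(payload: dict) -> list[str]:
    """Run the registered remediation for each firing alert, if any."""
    actions = []
    for alert in payload.get("alerts", []):
        name = alert["labels"].get("alertname")
        handler = REMEDIATIONS.get(name)
        if handler:
            actions.append(handler(alert["labels"]))
    return actions
```

A real setup would wrap this in an HTTP endpoint that the monitoring tool posts to, plus rate limiting so a flapping alert can't remediate in a loop.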
https://redd.it/1l956jb
@r_devops
Developer cheat sheet
I created this free cheat sheet for CLI commands.
I tend to prefer invoking commands in my IDE vs a GUI.
This is free.
If there is anything you want me to add, please let me know.
https://devcheatsheet.io
https://redd.it/1l95236
@r_devops