Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
OpenLIT: Self-hosted observability dashboards built on ClickHouse — now with full drag-and-drop custom dashboard creation

We just added custom dashboards to OpenLIT, our open-source engineering analytics tool.

- Create folders, drag & drop widgets
- Use any SDK to send data to ClickHouse
- No vendor lock-in
- Auto-refresh, filters, time intervals

📺 Tutorials: YouTube Playlist
📘 Docs: OpenLIT Dashboards

GitHub: https://github.com/openlit/openlit

Would love to hear what you think or how you’d use it!

https://redd.it/1lzvlbu
@r_devops
KubeDiagrams

**KubeDiagrams**, an open-source project (Apache 2.0 License) hosted on GitHub, is a tool to generate Kubernetes architecture diagrams from Kubernetes manifest files, kustomization files, Helm charts, helmfile descriptors, and actual cluster state. **KubeDiagrams** supports almost all Kubernetes built-in resources, any custom resources, namespace/label/annotation-based resource clustering, and declarative custom diagrams. **KubeDiagrams** is available as a Python package on PyPI, a container image on DockerHub, a kubectl plugin, a Nix flake, and a GitHub Action.

Try it on your own Kubernetes manifests, Helm charts, helmfiles, and actual cluster state!

https://redd.it/1lzvsb7
@r_devops
Kubernetes Homelab Rescue: Troubleshooting with AI (and the Lessons Learned)

Although the post is about my homelab, I have previously had similar kinds of issues happen at work. The troubleshooting steps would have been much the same, and aside from the freedom to simply paste logs and terminal output directly into Claude 4 for "assistance", I can easily see AI-assisted troubleshooting going down this same route at work.

The suggestions Claude gave for figuring out what was wrong started out sensibly but fairly quickly turned into suggestions that would have left me redeploying at least a portion of the cluster and possibly restoring data from backups.

I ended up going on a tangent, thinking about just how dangerous following troubleshooting suggestions from an AI can be if you don't have at least some knowledge of the possible consequences. Even Claude admitted (when asked afterwards in the conversation) that its suggestions quickly became destructive and that it never reset course even when new information and context were introduced.

Kubernetes Homelab Rescue: Troubleshooting with AI (and the Lessons Learned)

https://redd.it/1lzz0db
@r_devops
Paid courses to move from Full Stack to DevOps.

Hi, I am currently working as a Full Stack dev, but after years at the company it feels like I do a little bit of every single role: UI (React.js), backend (Node.js and Java), some pipelines, SonarQube, code scanners, etc.

I want to move fully to DevOps because I want a career shift and new knowledge.
(I did something similar before: I was a QA Automation Architect and moved to Full Stack Development.)

So I want to focus on DevOps and Security.

Can someone suggest courses? Paid courses are fine. What is the best path to move from one role to another?

Or what certifications should I take?

Yes, I could use AI for this knowledge, but I wonder if there is a structured path to take so I won't miss things that are must-haves for the role.

If you had a similar experience, how did you shift roles?

Thanks all for suggestions and tips.

https://redd.it/1m05m0u
@r_devops
IAM in DevOps

To all DevOps/SecOps engineers interested in IAM:

I’ve just published a blog post on integrating Keycloak as an IdP with GitLab via SAML and with Kubernetes via OpenID Connect. SAML and OIDC are two widely used protocols for secure authentication. It’s a technical guide that walks through setting up centralized authentication across your DevOps stack.
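For the Kubernetes side of that integration, the wiring typically comes down to a few kube-apiserver flags. A sketch only: the issuer URL, client ID, and claim names below are placeholders that depend on your Keycloak realm and client setup.

```
# kube-apiserver OIDC flags (sketch) -- values depend on your realm
--oidc-issuer-url=https://keycloak.example.com/realms/devops
--oidc-client-id=kubernetes
--oidc-username-claim=preferred_username
--oidc-groups-claim=groups
```

kubectl then needs a kubeconfig user that obtains an OIDC token from Keycloak, e.g. via the kubelogin (oidc-login) plugin.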

Check it out!

https://medium.com/@aymanegharrabou/integrating-keycloak-with-gitlab-saml-and-kubernetes-openid-connect-da036d3b8f3c

https://redd.it/1m06ysv
@r_devops
Karpenter - Protecting batch jobs from consolidation/disruption

An approach to ensuring Karpenter doesn't interrupt your long-running or critical batch jobs during node consolidation in an Amazon EKS cluster. Karpenter’s consolidation feature is designed to optimize cluster costs by terminating underutilized nodes—but if not configured carefully, it can inadvertently evict active pods, including those running important batch workloads.

To address this, use a `do_not_disrupt: "true"` annotation on your batch jobs. This simple yet effective technique tells Karpenter to avoid disrupting specific pods during consolidation, giving you granular control over which workloads can safely be interrupted and which must be preserved until completion. This is especially useful in data processing pipelines, ML training jobs, or any compute-intensive tasks where premature termination could lead to data loss, wasted compute time, or failed workflows.
https://youtu.be/ZoYKi9GS1rw
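For reference, in recent Karpenter releases the upstream annotation is `karpenter.sh/do-not-disrupt` on the pod template (older versions used `karpenter.sh/do-not-evict`). A minimal sketch with hypothetical names:

```yaml
# Batch Job whose pods Karpenter should not voluntarily disrupt (sketch)
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-etl
spec:
  template:
    metadata:
      annotations:
        karpenter.sh/do-not-disrupt: "true"   # blocks voluntary consolidation
    spec:
      restartPolicy: Never
      containers:
        - name: etl
          image: my-registry/etl:latest   # placeholder image
```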

https://redd.it/1m09umg
@r_devops
How do I highlight my work without sounding bitter in an exec email?

Hi everyone. I posted here a while back about a newly acquired global team trying to reverse-engineer a solution I built for my region (corporate). They were instructed by a senior executive to replicate my work and roll it out globally as one of their first projects. However, they couldn't do it so they contacted me to handover everything (as per my regional manager's approval) due to higher hierarchy politics. The general advice was to stay cooperative, which is essentially what I did.

I've now completed the full handover. My manager is about to send an email update to execs and asked me to write a draft email with everything I want to include. So I want to make sure the email strikes the right tone and not sound too bitter or boastful, but also not overly humble, since in the end I had to give them everything and walk them through it line-by-line because they couldn’t figure out how to implement on their own at first without the hand-holding. It took a lot of time away from my actual work too. Anyways, here's my draft email which I'm planning to send to my manager. I would appreciate any thoughts on things I should add or remove. Thank you.

> Following our earlier alignment with Team G, we've successfully completed the full technical handover of our engineered solution XYZ.
>
> Over the past weeks, we worked closely with them to provide everything needed to support global replication and scaling. This included:
>
> - Complete export and transfer of the entire engineered solution as per their request, including source code, application packages, automated workflows, schemas, dashboards, and assets
> - Comprehensive documentation detailing the architecture, data models, and deployment procedures
> - Direct access to the Region A development environments and source materials
> - A solution designed for streamlined deployment, requiring only minimal configuration (simple changes to IDs and endpoint references)
>
> With this, Team G is now well-equipped to roll out the solution across regions efficiently without the engineering overhead or need for rebuilding. Our team remains available for support while continuing to advance in other priorities.
>
> We're pleased to see our work serving as the foundation for broader improvements and look forward to the positive impact across all regions.

https://redd.it/1m0b0rv
@r_devops
DevOps learning - How do I continue from the spot I am at?

Hello, I recently took a DevOps course within my college curriculum.

Sadly it was also a very short DevOps course but it taught me all the essentials - Github actions & workflows, CI/CD, Docker, working in Linux environment.

I do feel like I have very weak knowledge when it comes to working with the largest cloud providers - AWS, Azure, GCP.

The CD process I learned was how to deploy to a Render server, Which honestly was pretty easy and painless.

What online resources would you advise so I can continue and deepen my DevOps knowledge from where I am? Thank you very much for reading.



https://redd.it/1m0aopk
@r_devops
Live challenge: building a data pipeline in under 15 minutes

hey folks, RB from Hevo here!

This Thursday, I’m going live with a challenge: build and deploy a fully automated data pipeline in under 15 minutes, without writing code. So if you're spending hours writing custom scripts or debugging broken syncs, you might want to check this out :)

What I’ll cover live:

- Ingesting from sources like S3, SQL Server, or internal APIs
- Streaming into destinations like Snowflake, Redshift, or BigQuery
- Auto-scaling, schema drift handling, and built-in alerting/monitoring
- Live Q&A where you can throw us the hard questions

When: Thursday, July 17 @ 1PM EST

You can sign up here: Reserve your spot here!

Happy to answer any qs!

https://redd.it/1m0d0wn
@r_devops
What Security & Integration Features Matter Most for Enterprise Teams?

Hi everyone,

we're a group of Master's students in Information Systems at the University of Münster (Germany) developing SqueelGPT, a SaaS that converts plain-English questions into production-ready SQL queries, with a focus on enterprises (API, IT admin dashboard).

Goal: Let non-technical team members generate ad-hoc reports without bothering your developers or DBAs
Current features: Multi-step query processing pipeline, schema analysis, sandboxed query validation

Questions for you:

- Would you prefer a chat interface or an API for translating English into SQL?
- What database security controls would be absolutely critical? (row-level security, query limits, audit logs)
- Which enterprise integrations are must-haves? (SAML, OIDC, Slack, user dashboard)
- How do you currently handle ad-hoc data requests from business teams?

We'd love to learn from your experiences managing enterprises at scale. We are looking for any insights we can get, but we also have a website with a waitlist if you are interested: https://squeelgpt.com/

Thanks for any insights!

https://redd.it/1m0co1e
@r_devops
Get $50 free credit on signup at Any Router! 🚀


Access Claude Code AI, no credit card needed.
Perfect for devs, learners, and hobbyists.
Sign up now: https://anyrouter.top/register?aff=7ilr
#AI #ClaudeCode

https://redd.it/1m0emyh
@r_devops
Fail the workflow based on conditions

Hey there,


Trying to tackle a scenario in which a third-party action fails for one of two reasons (call them X and Y), thereby failing the whole job.

Is there any way to check, in consecutive step(s), whether error X or Y happened, so we can deal with the failure appropriately?

PS: the third-party action doesn't set any output we can use; it simply returns exit code 127.
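One common pattern, sketched below with hypothetical action names and log paths: let the step fail without failing the job via `continue-on-error`, then branch on `steps.<id>.outcome` in a follow-up step and inspect whatever the action leaves behind (e.g. a log file) to tell X from Y.

```yaml
- name: Run third-party action
  id: vendor
  uses: some-org/some-action@v1   # placeholder for the real action
  continue-on-error: true

- name: Classify the failure
  id: classify
  if: steps.vendor.outcome == 'failure'
  run: |
    # The action sets no outputs, so grep whatever it leaves behind
    # (the log file path here is an assumption -- adjust to reality)
    if grep -q 'error X' vendor.log; then
      echo "cause=X" >> "$GITHUB_OUTPUT"
    elif grep -q 'error Y' vendor.log; then
      echo "cause=Y" >> "$GITHUB_OUTPUT"
    else
      echo "cause=unknown" >> "$GITHUB_OUTPUT"
    fi

- name: Handle X
  if: steps.classify.outputs.cause == 'X'
  run: echo "recovering from X"
```

Note that `outcome` is the step's result before `continue-on-error` is applied, which is exactly what you want to branch on here.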


Thanks.

https://redd.it/1m0ajpi
@r_devops
How are you deploying to Azure from Bitbucket without OpenID Connect support?

I'm curious to know how teams are handling deployments to Azure from Bitbucket, especially since Bitbucket doesn't currently support OIDC integration for Azure like GitHub or GitLab does.

- How are you managing Azure credentials securely in your pipelines?
- Are you relying on service principals with client secrets or certificates?
- Have you implemented any workarounds or third-party tools to simulate federated identity/OIDC flows?
- Are there any best practices or security considerations you'd recommend in this setup?
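Absent OIDC, the common baseline is a service principal with a client secret stored as secured repository variables. A sketch of what that looks like in `bitbucket-pipelines.yml` (the `AZ_*` variable names and the deploy command are placeholders for your setup):

```yaml
# bitbucket-pipelines.yml (sketch) -- AZ_* are secured repository variables
pipelines:
  branches:
    main:
      - step:
          name: Deploy to Azure
          image: mcr.microsoft.com/azure-cli
          script:
            - az login --service-principal -u "$AZ_CLIENT_ID" -p "$AZ_CLIENT_SECRET" --tenant "$AZ_TENANT_ID"
            - az webapp deploy --resource-group "$AZ_RG" --name "$AZ_APP" --src-path app.zip
```

The usual hardening advice applies: scope the service principal to the minimum role/resource group, rotate the secret on a schedule, and prefer certificates over client secrets where you can.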

Would love to hear how others are handling this.

https://redd.it/1m0a1w5
@r_devops
Tried AWS Kiro IDE: A Spec-First, AI-Powered IDE That Feels Surprisingly Practical

Unlike most AI tools that generate quick code from prompts, Kiro starts by generating structured specs, user stories, design docs, and database schemas, before writing any code. It also supports automation hooks and task breakdowns, which makes it feel more like a true engineering tool.

I’ve been exploring ways to bring AI into real DevOps workflows, and Kiro's structured approach feels a lot closer to production-grade engineering than the usual vibe coding.

Read it here: https://blog.prateekjain.dev/kiro-ide-by-aws-ai-coding-with-specs-and-structure-8ae696d43638?sk=f2024fa4dc080e105f73f21d57d1c81d

https://redd.it/1m0kflo
@r_devops
SRP and SoC (Separation of Concerns) in DevOps/GitOps

Puppet Best Practices does a great job explaining design patterns that still hold up, especially as config management shifts from convergence loops (Puppet, Chef) to reconciliation loops (Kubernetes).

In both models, success or failure often hinges on how well you apply SRP (Single Responsibility Principle) and SoC (Separation of Concerns).

I’ve seen GitOps repos crash and burn because config and code were tangled together (config artifacts tethered to code artifacts and vice versa), making both harder to test, reuse, or scale. In one such setup, whenever a small configuration change was needed, such as adding a new region, untested application code got pushed out along with it. A clean structure, where each module handles a single concern (e.g., a service, config file, or policy), is far more maintainable.

# Summary of Key Principles

- **Single Responsibility Principle (SRP):** Each module, class, or function should have one and only one reason to change. In Puppet, this means writing modules that perform a single, well-defined task, such as managing a service, user, or config file, without overreaching into unrelated areas.
- **Separation of Concerns (SoC):** Avoid bundling unrelated responsibilities into the same module; delegate distinct concerns to their own modules. For example, a module that manages a web server shouldn't also manage firewall rules or deploy application code. Those concerns belong elsewhere.

TL;DR:

- SRP: A module should have one reason to change.
- SoC: Don’t mix unrelated tasks in the same module; delegate.





https://redd.it/1m0m3b3
@r_devops
ELK a pain in the ass

Contextual Overview of the Task:

I’m a Software Engineer (not a DevOps specialist), and a few months ago I was assigned a task directly by my manager to set up log tracking for an internal Java-based application. The goal was to capture and display logs (specifically request and response logs involving bank communications) in a searchable, per-user way.

Initially, I explored using APIs for the task, but was explicitly told by my dev lead not to use any APIs. Upon researching alternatives, I discovered that Filebeat could be used to forward logs, and ELK (Elasticsearch, Logstash, and Kibana) could be used for parsing and visualizing them.

Project Structure:

The application in question acts as a central service for banking communications and has been deployed as 9 separate instances, each handling communication with a different bank. As a result, the logs the client expects come in multiple formats (XML, JSON, and others) along with the regular application logs.

To trace user-specific logs, I modified the application to tag each internal message with a userCode and timestamp. Later in the flow, when the request and response messages are generated, they include the requestId, allowing correlation and tracking.

Challenges Faced:

I initially attempted to set up a complete Dockerized ELK stack—something I had no prior experience with. This turned into a major hurdle. I struggled with container issues, incorrect configurations, and persistent failures for over 1.5 months. During this time, I received no help from the DevOps team, even after reaching out. I was essentially on my own trying to resolve something outside my core domain.

Eventually, I shifted to setting up everything locally on Windows, avoiding Docker entirely. I managed to get Filebeat pushing logs to Logstash, but I'm currently stuck with Logstash filters not parsing correctly, which in turn blocks data from reaching Elasticsearch.

Team Dynamics & Feedback:

Throughout this, I kept telling my dev lead about the issues I was facing and that I needed help, but he has been disengaged and uncommunicative. There has been no collaboration, and no constructive feedback reached the manager from my dev lead. Despite my handling multiple other responsibilities, most of which are now in QA or pre-production, this logging setup has become the one remaining task. Unfortunately, this side project, which I took on in addition to my primary duties, has been labeled "poor output" by my manager, without any recognition of the constraints or the lack of support.


Request for Help:

I’m now at a point where I genuinely want to complete this properly, but I need guidance—especially on fixing the Logstash filter and ensuring data flows properly into Elasticsearch. Any suggestions, working examples, or advice from someone with ELK experience would be really appreciated.
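Since the logs arrive in mixed formats, a common starting shape for the filter is to branch on what the message looks like and let the `json`/`xml`/`grok` plugins each handle their case. A sketch only: field names like `userCode`/`requestId` follow the post, the grok pattern is a guess at a typical Java log line, and the stdout/rubydebug output is there so parse failures are visible before Elasticsearch is involved.

```conf
# logstash.conf (sketch) -- adjust patterns and fields to your actual logs
input {
  beats { port => 5044 }
}

filter {
  if [message] =~ /^\s*</ {
    # XML request/response payloads
    xml { source => "message" target => "parsed" }
  } else if [message] =~ /^\s*{/ {
    # JSON payloads
    json { source => "message" target => "parsed" }
  } else {
    # Regular application log lines
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
    }
  }
  # Failed parses get a _grokparsefailure / _jsonparsefailure /
  # _xmlparsefailure tag -- filter on those tags in Kibana to debug
}

output {
  stdout { codec => rubydebug }                      # debug first
  elasticsearch { hosts => ["http://localhost:9200"] }
}
```

Testing the pipeline with `logstash -f logstash.conf --config.test_and_exit` catches syntax errors before you chase data-flow problems.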

Now I feel burned out and tired. After so much effort and with no support, I feel like giving up on my job; I don't feel properly valued here.

Any help would be much appreciated.

https://redd.it/1m0laue
@r_devops
Skills to learn

Hi all,

Looking for advice on what skills to learn to get into DevOps.

I’ve been in IT for over eight years. I’m currently in IT management and have been doing mostly IT Support (specialist, admin, management). I’ve always enjoyed working with users so I felt right at home in my role. But lately I’ve been feeling a bit stuck and want to get out of my shell and do something new. I’ve been looking at some AWS or Microsoft certs to learn more lingo and I’ve been thinking about building a home lab to run some tools.

What advice can you give me? Where should I start? What should I start learning? Sorry if this is not the right place to post.

https://redd.it/1m0k9j0
@r_devops
Problem to upload files to an Apache server with rsync

Hello. I am new to CI/CD. I wanted to automatically create an Apache server on EC2 in AWS using Terraform, and then deploy the code after the server has been created.

Everything works almost perfectly. The problem is that immediately after the command that starts the Apache server, I run the rsync command and get an error. I think it's because the /var/www/html folders haven't been created yet.

What would be the best DevOps approach? Add a sleep of about 10 seconds to give my server time to launch, or something else? Thanks for your help.


Terraform infrastructure:

name: "terraform-setup"

on:
  push:
    branches:
      - main

  workflow_dispatch:

jobs:
  infra:
    runs-on: ubuntu-latest
    steps:
      - name: Get the repo
        uses: actions/checkout@v4
      - name: "files"
        run: ls

      - name: Set up terraform
        uses: hashicorp/setup-terraform@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.KEY_ID }}
          aws-secret-access-key: ${{ secrets.ACCESS_KEY }}
          aws-region: us-east-1

      - name: Initialize Terraform
        run: |
          cd infrastructure
          terraform init

      - name: Terraform plan
        run: |
          cd infrastructure
          terraform plan

      - name: Terraform apply
        run: |
          cd infrastructure
          terraform apply -auto-approve

      - name: Save public dns
        run: |
          cd infrastructure
          terraform output -raw public_dns_instance
          terraform output public_dns_instance
          public_dns=$(terraform output -raw public_dns_instance)
          echo $public_dns
          cd ..
          mkdir -p tf_vars
          echo $public_dns > tf_vars/public_dns.txt
          cat tf_vars/public_dns.txt

      - name: Read file
        run: cat tf_vars/public_dns.txt

      - uses: actions/upload-artifact@v4
        with:
          name: tf_vars
          path: tf_vars



Deployment:



name: deploy code

on:
  workflow_run:
    workflows: ["terraform-setup"]
    types:
      - completed

permissions:
  actions: read
  contents: read

jobs:
  deployment:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - uses: actions/download-artifact@v4
        with:
          name: tf_vars
          github-token: ${{ github.token }}
          repository: ${{ github.repository }}
          run-id: ${{ github.event.workflow_run.id }}

      - name: View files
        run: ls

      - name: rsync deployments
        uses: burnett01/rsync-deployments@7.0.1
        with:
          switches: -avzr --delete --rsync-path="sudo rsync"
          path: app/
          remote_path: /var/www/html/
          remote_host: $(cat public_dns.txt)
          remote_user: ubuntu
          remote_key: ${{ secrets.PRIVATE_KEY_PAIR }}
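On the actual question: rather than a fixed sleep, the usual approach is to poll until Apache actually answers, then rsync. A bash sketch of such a helper, assuming `HOST` holds the EC2 public DNS read from the `tf_vars` artifact:

```shell
#!/usr/bin/env bash
# Hypothetical helper: poll a URL until it responds instead of sleeping
# blindly. Defaults: 24 attempts, 5 seconds apart (~2 minutes total).
wait_for_http() {
  local url="$1" attempts="${2:-24}" delay="${3:-5}"
  local i
  for ((i = 1; i <= attempts; i++)); do
    # -f: fail on HTTP errors; -s/-o: stay quiet
    if curl -sfo /dev/null "$url"; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Usage (in the deploy job, before the rsync step):
#   HOST="$(cat tf_vars/public_dns.txt)"
#   wait_for_http "http://${HOST}/" || exit 1
```

A cleaner long-term fix is to have the EC2 user_data (cloud-init) create /var/www/html and start Apache, so the instance only reports healthy once it is actually ready.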


https://redd.it/1m0nued
@r_devops
terraform tutorial 101 - modules

hi there!

I'm back with another post in my terraform tutorial 101 series.

It's about modules in terraform! If you want to know more, or if you have questions or suggestions for more terraform topics, let me know.
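For anyone who wants the gist before clicking through: a module is just a reusable unit with inputs and outputs that the root module consumes. A minimal sketch (module name and paths are hypothetical):

```hcl
# modules/s3_bucket/main.tf -- one reusable unit
variable "name" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.name
}

output "arn" {
  value = aws_s3_bucket.this.arn
}

# root main.tf -- consume the module with its own inputs
module "logs" {
  source = "./modules/s3_bucket"
  name   = "acme-logs"
}
```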

Thank you!

https://salad1n.dev/2025-07-15/terraform-modules-101

https://redd.it/1m0onme
@r_devops
Is it an exaggeration saying a product without unit-tests is not a product?

I joined this product; the guy said it was 80-90% complete. Plenty of problems, then I found out it didn't have unit tests.

To me that product is doomed to break so much in production that he won't have a functional product, and people will, at best, cancel the subscription. At worst, ask for their money back within a week.

My opinion is, he doesn't have a product there. It's "working" when you as the dev use it, but it can be broken easily (I've seen it), and I broke it myself when adding or fixing features (changes broke other things and I had no tests to know).

Is it an exaggeration to tell him "hey, you don't have a product; this has no tests, so you can't tell whether things are working, this'll break in production and nobody will want to use it"?
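For what it's worth, the safety net being argued for here can be tiny. A hypothetical sketch: a one-function regression test that makes a later "fix" which silently changes behaviour fail loudly instead.

```python
# Hypothetical example: pin down existing behaviour with a unit test,
# so a refactor that changes it is caught before production.
def apply_discount(price: float, percent: float) -> float:
    """Pretend this is an existing feature someone might 'fix' later."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_keeps_existing_behaviour():
    # These assertions fail the build the moment the behaviour drifts
    assert apply_discount(100.0, 15) == 85.0
    assert apply_discount(19.99, 0) == 19.99
```

Run under pytest (or any test runner), every feature you add gets checked against every behaviour you've pinned, which is exactly the feedback the post says is missing.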

EDIT: some info that might be crucial

The problem is, when he asked me to join he said he had a very high chance of launching what he already had and then re-doing the whole thing, because it was so broken, and he wanted me to do that (from scratch, but with the ideas already figured out).

https://redd.it/1m0xd6z
@r_devops
Feeling Lost in my Tech Internship - what do I do

Hey everyone,

I’m a rising college freshman interning at a small tech/DS startup, where I'm supposed to be working on infrastructure and DevOps-type tasks. The general guidance I’ve been given is to help “document the infrastructure” and “make it better,” but I’m struggling to figure out what to even do. I sat down today and tried documenting the S3 structure, only to find there’s already documentation on it. I don't know what to do.

I know next to nothing. I know basic Python and have learned a little AWS and Linux, but I have no idea what half the technologies even do. Honestly, I don't really know what documentation is supposed to look like.

Also, it seems to me there’s already documentation in place. I don’t want to just rewrite things for the sake of it, but at the same time, I want to contribute meaningfully and not just sit around waiting for someone to tell me exactly what to do. I’ve got admin access to a lot of systems (AWS, EC2, S3, IAM, internal deployment stuff, etc.), and I’m trying to be proactive but I’m hitting a wall.

There’s no one else really in my role.

If anyone’s been in a similar spot — especially if you’ve interned somewhere without a super structured program — I’d love to hear what worked for you.



https://redd.it/1m11tvu
@r_devops