Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
My company just did mandatory RTO, and I found out that it might be radius-based. I've never had an official cloud job, but here's my latest work experience. Can I make the jump?

My problem is that I've done all of this on-prem, and I don't have much infrastructure-as-code experience, although I understand it. I have also only worked in AWS and Azure on simpler projects.


This is my most recent resume entry
---
Architected and maintained DevOps automation frameworks supporting Unity-based XR application deployment, enabling scalable delivery across multiple internal platforms.

Maintained a production-grade re-signing environment and introduced a signing infrastructure for Unity-based applications, ensuring compatibility with internal distribution and MDM tooling.

Built extensible automation scripts and system tools in Python, Bash, and PowerShell to reduce manual operations across infrastructure, build, and release processes.

Developed internal web-based tooling to streamline deployment validation, asset tracking, and environment introspection for cross-functional development teams.

Introduced AI-assisted automation into engineering workflows—accelerating tasks such as documentation generation, technical analysis, and pipeline logic optimization.

Integrated observability and alerting systems for both infrastructure health and deployment quality, ensuring early detection of anomalies and reducing downtime.

Provided end-to-end support for CI/CD systems, including Jenkins orchestration and MDM platform integrations, while aligning with regulatory constraints (e.g., HIPAA, FDA, ISO 13485).

Collaborated across engineering, security, and business teams to turn functional requirements into production-ready tooling and infrastructure.

Mentored team members and led initiatives that elevated engineering standards, operational resilience, and developer experience.


https://redd.it/1l560lb
@r_devops
DevOps project (pipeline): need inputs

I recently built and deployed a Tetris game using automation tools to simulate how real-world companies manage software delivery. I'm a recent graduate with no professional experience yet, so I wanted to create a hands-on project that mimics a production-like environment. GitHub

First, I created servers on AWS and installed tools like Jenkins, Docker, and Terraform.
Then, I used Jenkins to automatically create a Kubernetes cluster (EKS) and deploy the game.
Then I created another pipeline which checks the code for bugs (SonarQube) and security issues (Trivy), builds a Docker image, and uploads it to DockerHub.
I used ArgoCD to automatically deploy the latest version of the app whenever the code or image was updated. When I wanted to upgrade the app (version 2.0), Jenkins detected the new code, built a new image, updated the deployment file, and ArgoCD pushed the change live, all without manual steps.
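For readers who want to picture the build stage, here is a rough standalone sketch of those steps as a shell script. The `DOCKERHUB_USER`/`tetris` image name, the `BUILD_NUMBER` variable, and the `deploy/deployment.yaml` path are hypothetical placeholders, not taken from the poster's repo:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Compose the image reference for a given build number.
image_ref() {
  local user="$1" build="$2"
  printf '%s/tetris:%s' "$user" "$build"
}

# Point the Kubernetes manifest at a new image tag so ArgoCD sees the change.
bump_manifest() {
  local manifest="$1" ref="$2"
  sed -i "s|image: .*|image: ${ref}|" "$manifest"
}

# The pipeline body: build, scan, push, then update the GitOps manifest.
ci() {
  local ref
  ref="$(image_ref "$DOCKERHUB_USER" "$BUILD_NUMBER")"
  docker build -t "$ref" .
  trivy image --exit-code 1 "$ref"    # fail the stage on known vulnerabilities
  docker push "$ref"
  bump_manifest deploy/deployment.yaml "$ref"  # commit + push lets ArgoCD sync
}
```

In the real setup each step would be a Jenkins stage; the point is that ArgoCD only watches the manifest, so "deploy" reduces to the final sed-and-commit.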

I haven't implemented monitoring in this project yet.

I’d really love your feedback on this pipeline. What limitations or flaws can you spot? What would you do differently if this were a real production setup? Feel free to roast it; I genuinely want to improve and learn from my mistakes before tackling my next one.

https://redd.it/1l59n46
@r_devops
Already in IT as support consultant but want to go the DevOps route

Hey all, I'm currently working as a support consultant for an ERP system. I want to slowly transition to cloud DevOps, although I don't have formal training in IT. The advantage is that I'm already in the IT department of my company. I'm planning to do a lot of self-study and, if possible, transition within the company I work for; that would be the easiest way. Alternatively, I could do a master's in CS. Do you think a master's would be helpful? Or would studying and practicing on my own and waiting for the right opportunity be enough?

https://redd.it/1l5ji9b
@r_devops
Help!

Hello Guys!

I recently landed a DevOps intern role, and there’ll be a few weeks of training before I actually start working.
Since I’m from a mechanical engineering background, they’re going to help me get used to the new environment. I also started an online DevOps course recently, and so far I’ve learned the basics of Linux, Vagrant, and Docker.

I was just wondering: what should I focus on or start learning next to be better prepared for the role and for the training? I'd love to hear some advice, along with any resources or specific places to learn these things. Thanks in advance!

https://redd.it/1l5lmyi
@r_devops
Strategically scaling up in AWS DevOps for remote roles

Hey folks,

I’ve been working in AWS DevOps for the past 2 years and am now planning the next phase of my career growth with a focus on remote opportunities.

I’m based in a lower income country and currently earning well below the global market average. My goal is to transition into remote roles that pay around $3,500 to $4,000 per month within the next 12 to 18 months.

I’ve already earned the AWS SAA certification. What certifications or skills would you recommend I pursue next to strengthen my profile for remote positions? I’m especially interested in areas like security, infrastructure as code (Terraform or CDK), Kubernetes, or cost optimization. I’m open to anything that adds real value in a cloud native DevOps environment.

I would also appreciate insights into the kinds of personal or open-source projects that have helped others break into higher-paying remote roles. I’m not looking for shortcuts, just clear and actionable direction.

Thanks in advance for sharing your experience or advice.

https://redd.it/1l5lz67
@r_devops
Why are DevOps and Cloud becoming inseparable? Can I just be a Cloud Engineer or do I need DevOps to grow?

I've been diving deep into cloud engineering lately (AWS/Azure), but I keep noticing this trend—every cloud job post or roadmap seems to include DevOps tools like CI/CD, Terraform, Docker, Jenkins, and even Kubernetes. It's like cloud and DevOps are slowly merging into one big role.

Why is this happening?

Is it still possible to just be a cloud engineer (architect, admin, or specialist) without going deep into DevOps? Or is DevOps becoming mandatory for career progression in cloud roles?

I don't mind learning DevOps if it's really needed—but I want to understand why they’re becoming so tightly coupled and whether there’s still room to specialize.

Appreciate honest opinions from people in the field. Are you seeing the same trend?

Thanks in advance!

https://redd.it/1l5ogcs
@r_devops
Versioning scheme for custom docker images based on upstream version

Hello.

I have created a custom Postgres image, based on the official Postgres image in Docker hub to include some extra software, but I have some doubts about how to best manage the version of my own image.

My requirements are the following:

- The image tag should contain a reference to the upstream version (e.g., Postgres 17) and a custom version of my own image.

- I want to keep my custom image in sync with upstream. For example, if a new Postgres version is released upstream, I want to automatically release a version of my own image with that image as its base. (I want to have some limits here, like only major and minor versions of Alpine-based images.)

Currently, I am following this version scheme: my-image:<postgres-upstream-version>-<custom build number>. So an example would be my-image:17.4-1.

Is this a good practice?

How can I handle new Postgres versions? I could have a scheduled GitHub Action that fetches all the tags from Docker Hub, compares them against the versions I have for my custom image in my Docker repository, and builds the missing tags.

What if I make a change in my custom image? Ideally I would need to build for all the combinations of Postgres versions. Again, I would need to query my Docker registry to get all versions and run my build pipeline for all of them. This could be heavy.

Another small problem is that since I am using the build number from GitHub Actions as my custom version, the numbers for each Postgres version would not be in sync.

Ex: I could have my-image:17-1 and my-image:18-6. To have independent versioning I would need to come up with my own versioning scheme and store that information somewhere (a JSON file in the repo?).


I feel I might be overthinking and overengineering this. What are the general good approaches for this?
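One way to keep the bookkeeping simple without a JSON state file is to treat the registry itself as the source of truth: list the tags you already publish, diff them against upstream, and derive the next per-version build number from what exists. A rough sketch of that diff logic follows; the tag-list-as-a-file input is a stand-in for real Docker Hub / registry API calls:

```shell
#!/usr/bin/env bash
set -euo pipefail

# missing_tags EXISTING_TAGS_FILE UPSTREAM_VERSION...
# Prints each upstream version that has no <version>-<n> tag published yet.
# In a scheduled job, the file would hold the tag list fetched from your
# registry, and the upstream versions would come from Docker Hub.
missing_tags() {
  local existing="$1"; shift
  local v
  for v in "$@"; do
    grep -q "^${v}-" "$existing" || printf '%s\n' "$v"
  done
}

# next_build EXISTING_TAGS_FILE UPSTREAM_VERSION
# Per-upstream-version counter, which sidesteps the problem of GitHub run
# numbers drifting out of sync across Postgres versions.
next_build() {
  local existing="$1" version="$2"
  local last
  last=$({ grep "^${version}-" "$existing" || true; } \
          | sed "s/^${version}-//" | sort -n | tail -n 1)
  printf '%s\n' "$(( ${last:-0} + 1 ))"
}
```

With this, the scheduled job builds `missing_tags` output and a push to your image repo rebuilds every published upstream version at `next_build` for each.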


Thank you.

https://redd.it/1l5pcq3
@r_devops
My last two posts got support and critique. So what's next? A new idea is brewing

So just wanted to share a small update and a thought that's been on my mind lately.

Over the past few weeks, I’ve been helping folks fix cloud/DevOps infra issues (mostly through DMs), and wow… I’ve learned a lot more than I expected.
Out of the 3 people I helped closely, only one of them paid, but I didn’t mind; it genuinely felt good fixing things and learning in the process.

Later, I spoke to a few seniors and they referred me internally to their companies. Hopefully, something clicks by next month 🤞

But here’s the thing:
After talking to so many people and solving real infra pain points, I’m convinced there’s huge scope in the backend/infrastructure/DevOps space right now, especially in this AI-first world where everyone is trying to scale fast but forgets that infra is the backbone.

So... last weekend I sent a DM to 8-10 folks who had reached out earlier, just asking them some questions and casually sharing what I was thinking. To my surprise, a few replied positively.

I didn’t reach out to more because, honestly, I can only manage 2-3 people at the moment and I don’t want to waste anyone’s time. But just knowing that folks are willing to collaborate gave me a lot of confidence to maybe take a first small step soon.

Still figuring it out... just wanted to thank everyone who gave honest feedback, even the ones who roasted me a bit but it helped 🙂

If you're building something similar or have ideas in this space, feel free to drop in. I’m always open to chat and learn.

https://redd.it/1l5obz4
@r_devops
Switch from DevOps to SDE

I currently work as a DevOps Consultant at AWS. The pay is good, but I’ve realised lately that a lot of what I’m doing isn’t DevOps-related: I’ve never worked with Linux and so far never got a project with K8s. I have built a lot of infrastructure with Terraform, built event-driven architectures on AWS, done a lot of backend work with Python, and built CI/CD pipelines. I’ve always had a deeper interest in coding than troubleshooting, and I was wondering whether it would be worth switching to SDE, either internally or externally.

Some things I’m grappling with:

* Would switching to SDE be a career **step sideways or backwards** in terms of scope, compensation, or growth path—even within FAANG?
* Long-term, is there more **upside and flexibility** in being an SDE versus staying in DevOps/SRE/platform?
* Is it common (or even possible) to switch internally within FAANG from DevOps to SDE, or would it require an external move?
* How do SDEs and DevOps compare when it comes to **technical depth** and **impact** on product?
* Anyone made a similar switch at a big tech company? Regrets? Wins?

Would love to hear from others who’ve made this kind of transition (or decided not to). Any advice on how to evaluate this properly—or how to make the move if I decide to go for it—would be hugely appreciated.

Thanks!

https://redd.it/1l5rrei
@r_devops
Haven't done this before, docker versions, environments, and devops

Greetings,

I just got my first GitHub build action working; it pushes images up to the Packages section of my repository. Now I'm trying to work out the rest of the process. I'm currently managing the Docker stacks on the internal network using Portainer, so I can trigger an update using a webhook. I'm going to set up Cloudflare so that I can trigger the Portainer updates via webhook from GitHub while still keeping things protected.

However, I'm a little stuck. At the moment, the Portainer setup can reach out to GitHub and get the images (I think, anyway; I haven't tested this yet). What's the best way to tag my Docker images when I build them so that my two Docker stacks in Portainer (dev and production, I guess) can tell which images to pull? The images are currently in the Packages section of my GitHub repo, so what's a good way to differentiate the environments? I'm using Docker Compose for structuring my stacks, btw.
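A common convention, sketched here as one possible answer rather than the answer: derive the tag from the branch that triggered the build, then pin each Portainer stack's compose file to a different tag. The branch names and registry path below are assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# tag_for_branch BRANCH
# main builds get :latest (pulled by the production stack), develop builds
# get :dev (pulled by the dev stack), and feature branches get their own tag.
tag_for_branch() {
  case "$1" in
    main|master) echo "latest" ;;
    develop|dev) echo "dev" ;;
    *)           echo "dev-${1//\//-}" ;;   # slashes are invalid in tags
  esac
}
```

Each compose file then pins its environment, e.g. `image: ghcr.io/you/app:latest` in the production stack and `image: ghcr.io/you/app:dev` in the dev stack, and the GitHub webhook only needs to hit the matching Portainer endpoint.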

https://redd.it/1l5twb7
@r_devops
Is DSA required for DevOps Roles ?

I am a CS student, currently in my final year, learning DevOps. I just want to know whether DSA is required for DevOps roles, or even asked about in interviews or technical rounds.

https://redd.it/1l5yzbh
@r_devops
Would love feedback on our Zero Drift browser security engine before we release it

I’ve been developing a browser-native security platform (patented) that tackles fingerprint spoofing, identity cloaking, session lockdown, and high-trust privacy in real time—with zero reliance on external APIs or cloud calls.

The project is called Zero DriftX7, and it’s designed for high-integrity, offline- and airgap-first environments. I’m building this for both advanced privacy users and organizations that need hardened browser tools without giving up control to third-party clouds.

Here’s what the early product suite includes (names are finalized, features in ongoing development):



CoGen / Zero DriftX7 Product Suite

DriftLockX7

Locks session activity to a live fingerprint snapshot and alerts or freezes interaction if drift (device or identity tampering) is detected.

Snapshot Engine

Browser-integrated capture and verification of the user’s session environment. No server pings. Fully local diff checker for spoofing attempts.

Remote Kill Switch

Instant, remote-triggered disablement of a browser instance or tab cluster—configurable to run offline.

Cloaked Decoy Mode

Creates high-fidelity ghost session environments for penetration testing, bot evasion, or behavioral masking.

Session Watchdog Engine

Constant validation loop running locally that self-terminates rogue script execution or extension mutation.

Trust Fingerprint

Unique locally-generated user signature to enforce trust zones between browser tabs, without calling external fingerprint services.

GeoTrust & IP Zone Control

Region-based enforcement policies (e.g., block actions outside your trusted country, even without VPN or proxy detection).

CSP Enforcer + Frame Guard

Hardens browser frame execution, enforcing fine-tuned Content Security Policies with zero third-party injection exposure.

Local Analytics & Activity Vault

Everything is stored client-side in encrypted blobs, viewable only via authenticated extension access. No remote telemetry.



This is all still under internal testing (no public repo or code yet), but I’d love to hear:
• Would you trust a browser-native privacy suite that runs entirely offline?
• What features matter most to you in browser-level threat defense?
• Are there attack surfaces you think we’re missing?

Any and all feedback welcome—this is early-stage and built by a DevSecOps engineer who’s tired of cloud bloat and telemetry leaks.

Thanks

https://redd.it/1l618ko
@r_devops
Need suggestion about my first Devops project

https://github.com/ad1822/cloudOps/blob/main/diagram_new.png

I’m learning Kubernetes, AWS, and TF, so I built this project purely for learning purposes.

Tech Stack:

CI/CD: GitHub Actions
Infra as Code: Terraform
GitOps: ArgoCD
Backend: Go (Gin)
Frontend: React
DB: AWS RDS
Image Storage: S3 + CDN
Hosting: AWS EKS (Kubernetes) with LoadBalancers for both frontend & backend

The app lets users upload images → images go to S3, links (with image name) are saved in RDS, and the React frontend renders them from the CDN.
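That upload path can be sketched with the AWS CLI; the bucket, CDN domain, and key layout below are placeholder assumptions, and the interesting part is the URL construction, since that string is what gets stored in RDS:

```shell
#!/usr/bin/env bash
set -euo pipefail

# cdn_url CDN_DOMAIN OBJECT_KEY -> the public URL the frontend renders from.
cdn_url() {
  printf 'https://%s/%s' "$1" "$2"
}

# upload FILE BUCKET CDN_DOMAIN
# Copies the image into S3 and prints the CDN URL to be saved in RDS.
upload() {
  local file="$1" bucket="$2" cdn="$3"
  local key
  key="images/$(basename "$file")"
  aws s3 cp "$file" "s3://${bucket}/${key}"
  cdn_url "$cdn" "$key"
}
```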

I’m a beginner, and this is my first project — the diagram might have a few mistakes, so feel free to drop suggestions or feedback. 🙌



https://redd.it/1l69048
@r_devops
DevOps Isn’t Just Pipelines—It’s Creating Environments Where Quality Can Emerge

In the DevOps world, we champion automation, CI/CD, and fast delivery. But what about the organizational conditions that make true quality sustainable?

My new post looks at the resistance to quality practices (tests, simple design, pair programming) and how it's often tied to:

* Short-term delivery pressure
* Team-level silos and lack of alignment
* Poor feedback loops

We need more than tools—we need cultures that enable trust, learning, and shared ownership.

Full post here: [https://www.eferro.net/2025/06/overcoming-resistance-and-creating-conditions-for-quality.html](https://www.eferro.net/2025/06/overcoming-resistance-and-creating-conditions-for-quality.html)

How are you addressing the “people and incentives” side of quality in your DevOps practices?

https://redd.it/1l69g0j
@r_devops
Open to take suggestions and review on my skills and projects for Internships

I'm open to suggestions on what other projects I can build for DevOps roles and internships, and on how to get internships or jobs and where to apply.
What else can I change, modify, or include?

Programming Languages: Java, Python, SQL, MySQL

Web Technologies: Spring Boot

DevOps & Cloud: Git, GitHub, Docker, Shell Scripting (Bash), Terraform, Azure, Jenkins
(Beginner), AWS (Foundational)

Operating Systems: Linux (Ubuntu, Red Hat)

Tools: VS Code, IntelliJ IDEA, Vim, Jupyter Notebook

GitHub: https://github.com/ariefshaik7


Projects:

Terraform Azure Jenkins Setup – GitHub May 2025
• Provisioned a Jenkins-ready Azure VM using modular Terraform with secure networking and NSGs.
• Automated Jenkins setup using a Bash script executed via Azure CustomScript extension.
• Designed reusable infrastructure modules for seamless CI/CD environment provisioning.
Azure Infrastructure with Terraform – GitHub May 2025
• Engineered scalable Azure infrastructure using modular and reusable Terraform codebase.
• Integrated remote backend for Terraform state management via Azure Storage for team collaboration.
• Supported multi-environment deployment using workspace-specific configurations and variable files.
Bash Scripts for Linux Automation – GitHub April 2025
• Built robust Bash scripts to automate system updates, cleanup, health checks, and resource backups.
• Developed CLI tools for cloud operations like Azure resource enumeration via Azure CLI.
• Enhanced consistency, efficiency, and maintainability across Linux server environments.
Todo Web Application – GitHub Feb - Mar 2025
• Developed a full-stack CRUD web app using Spring Boot, Thymeleaf, and MySQL.
• Containerized the application with Docker Compose for repeatable deployments.
• Implemented MVC architecture and validation for clean code and robust user input handling.

https://redd.it/1l6a0ib
@r_devops
I tried making DevOps easier and myself obsolete

# How everything started...

Life as a developer ain't easy. Don't get me wrong, I absolutely love a good challenge, and I get lots of energy from tackling complex problems all throughout the day. That may also be one of the reasons why I love the fact that our development teams at work, despite having a small dedicated DevOps team at hand, are advised to build their own deployment pipelines, terraform modules and such.

As time passed, I tried helping where I could and supported those who were missing some knowledge to properly handle their DevOps requirements, essentially taking load off of our small team of DevOps experts. They loved it, I loved it. It was or rather still is a win-win situation. After all, I did have prior DevOps experience due to previous employments and also my side-business (which, tbh., probably at least every second IT guy out there has).

Doing all of this, I noticed that most of the processes I faced were kind of repetitive and followed the same steps, or at least the same principles. Yet, since non-DevOps people were doing this work, some of the more complex stuff was prone to errors. Nothing inherently bad or anything; just the usual problems understanding the deeper functionality of the tooling required to complete a task. Thus, there was a need for support that I was more than happy to satisfy. Of course, the rise of AI helped a lot with this already. However, if you don't know what you are searching for, AI is not going to help you much either, so human knowledge was and still is the way to go.

# Making DevOps easier and myself essentially obsolete...

Seeing patterns and constantly noticing repetitive work made me think about potential opportunities for further process automation. Being a developer, I had the tools at hand needed to build an application. So I did, and not much later, Kublade was born. At its core, the application is a templating engine for Kubernetes manifests, which allows DevOps teams to offer a set of templates that development teams can then use to rapidly deploy new applications with minimal risk of errors.

Whilst the software used to be pretty basic and just a kind of crazy experiment back in the day (the first line of code was written at least 3 years ago), it has evolved into a very helpful companion in my daily DevOps journey. It may not be perfect and may require some setup, but I tend to save lots of time by not having to modify the same YAML structures by hand over and over again.
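To make the "templating engine for Kubernetes manifests" idea concrete, here is the pattern in miniature. This is a toy illustration, not Kublade's actual implementation, and the `{{...}}` placeholder syntax is invented for the example:

```shell
#!/usr/bin/env bash
set -euo pipefail

# render TEMPLATE_FILE NAME IMAGE REPLICAS
# Fills {{name}}, {{image}}, and {{replicas}} placeholders in a manifest
# template, so per-app YAML never has to be edited by hand.
render() {
  local template="$1" name="$2" image="$3" replicas="$4"
  sed -e "s|{{name}}|${name}|g" \
      -e "s|{{image}}|${image}|g" \
      -e "s|{{replicas}}|${replicas}|g" "$template"
}
```

The DevOps team owns the template; development teams only supply values, which is where the reduced risk of errors comes from.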

Now, did I make myself obsolete with this? Essentially, yes. Sadly, due to regulatory madness, I could not directly integrate the software with the clusters at work, but generating most of my manifests using templates allowed me to focus on the more interesting challenges. Also, making the software open-source allowed me to share it with the community, so others may enjoy it even more than I personally can as of now.

If you want to check it out or even contribute, you can do so jumping over to the homepage. Over there you can also find a documentation and API specification should you be interested in taking a closer look at what I've built.

# Why did I do it?

Writing a software like this is lots of work. So why did I do it? The short answer to that is as simple as they come: I'm a nerd and a sucker for process simplicity. So when I saw an opportunity, I had to jump on it. Also, it gave me a chance to experimentally explore new topics like AI chat integration, proper prompt building and in general just stuff that I don't have too many touchpoints with during my day job. Thus, I would encourage everyone who has an idea to go for it and see what happens (as long as the risks don't exceed the benefits, ofc.).

# Let's discuss...

First and foremost, thanks for reading through this huge post. Let me know what you think! Does DevOps need new tools like this? Is AI going to revolutionize DevOps as we know it? What's your experience with all of this? Looking forward to a lively discussion!

https://redd.it/1l6bzmx
@r_devops
Still editing PrometheusRules manually? Please, take care of your mental health.

Manually rewriting PrometheusRule YAMLs or recreating them from scratch just to change a label or "for:" duration is like rebuilding your house because you want to repaint the mailbox.

Between awesome-prometheus-alerts and monitoring Mixins, it's chaos.

But the kube-prometheus-stack already ships with dozens of production-grade alerts, so why not patch them in place?

I built kps-alert-editor.sh, a simple Bash script that lets you:

* Edit alert labels like team=devops
* Change `for:` durations (15m → 3m)
* Route alerts via Alertmanager without YAML suffering
* Keep a local changelog for tracking

It uses just kubectl + yq. No Helm, no chart rebuilding. Just run and patch.
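To show the patch-in-place idea without the actual script, here is a minimal stand-in that performs the `for:` edit on a local PrometheusRule dump with sed. kps-alert-editor itself uses yq, and the full roundtrip would be `kubectl get prometheusrule <name> -o yaml`, edit, `kubectl apply -f -`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# set_for_duration FILE ALERT_NAME DURATION
# Rewrites the "for:" duration of one alert in a PrometheusRule YAML dump.
# Naive assumption: "for:" appears shortly after the matching "alert:" line,
# as it does in kube-prometheus-stack rule dumps.
set_for_duration() {
  local file="$1" alert="$2" duration="$3"
  sed -i "/alert: ${alert}\$/,/for:/ s/for: .*/for: ${duration}/" "$file"
}
```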

Alertmanager routing with team label also explained with config example.

GitHub -> github.com/adrghph/kps-alert-editor.sh

bye!

https://redd.it/1l6bsfc
@r_devops
Life before CI/CD

Hello,

Can anyone explain what life was like before CI/CD pipelines?

I understand that developers and operations teams were quite separate.

So how does the DevOps culture make things faster? Is it that a developer doesn’t need to depend on the operations team to deploy their application, while the operations team focuses on SRE? Is my understanding correct?

https://redd.it/1l6djk5
@r_devops
New to DevOps

While I may have been taught some theoretical concepts of cloud and DevOps during my CS degree, I still only know the theoretical basics: mostly how AWS IAM and EC2 work, how Docker and Kubernetes are set up, and how Terraform works. But I think doing projects and an on-the-go learning approach always suits developers best.

Where and how do I start? What kind of contents did you follow to learn DevOps? What kind of projects can get you a good grasp on how DevOps is used in the industry?

Thanks :)

https://redd.it/1l6dinu
@r_devops
Writing my first script in Linux, any advice?

I have learnt the basic commands and have a little experience navigating Linux, but this is the first time I'm writing executable scripts. I want to know what mistakes you've made and corrected along the way; any advice is appreciated. I genuinely want to learn, so please let me know.
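Since the question is about first scripts, here is a skeleton with the guardrails most people wish they had adopted from day one (the greet function is just a placeholder body):

```shell
#!/usr/bin/env bash
# Fail fast: exit on errors, on unset variables, and on failures inside pipes.
set -euo pipefail

# Always quote expansions; an unquoted $var splitting on spaces is the
# classic first-script bug.
greet() {
  local name="${1:-world}"   # ${1:-default} keeps set -u happy with no args
  printf 'hello, %s\n' "$name"
}

greet "$@"
```

Running shellcheck on every script before it goes anywhere near a server catches most of the remaining mistakes.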

https://redd.it/1l6egcf
@r_devops