Reddit DevOps
266 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Question: ArgoCD for Dynamic Apps?

Hi,

I wanted to get some thoughts on an approach I'm considering. Say I have web apps with Helm charts for K8s deployment, and I want users to instantiate custom versions of these apps with their own configuration, e.g. branding, title, etc.

Does it make sense to store user configs in repos and then have ArgoCD sync that with the web app Helm charts via values.yaml? Whenever users change their custom configs, ArgoCD updates their deployments.
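For illustration, that approach could look like one ArgoCD Application per user instance, pointing the shared chart at that user's values file. This is only a sketch; the repo URLs, paths, and names below are hypothetical:

```yaml
# Hypothetical per-user Application: ArgoCD watches the repo and re-renders
# the shared chart with the user's values.yaml whenever it changes.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: webapp-user-alice        # one Application per user instance
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/charts/webapp.git   # shared Helm chart repo
    targetRevision: main
    path: charts/webapp
    helm:
      valueFiles:
        - values-alice.yaml      # user's branding/title overrides
  destination:
    server: https://kubernetes.default.svc
    namespace: user-alice
  syncPolicy:
    automated: {}                # auto-sync on config change
```

If the user configs live in separate repos from the chart, ArgoCD's multiple-sources feature can reference a values file from a second repo; and an ApplicationSet with a Git generator can stamp out one such Application per user config directory instead of maintaining them by hand.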

Are there other approaches/tools I should consider?

Thanks!



https://redd.it/1iw4e63
@r_devops
What should I do as a DevOps intern: prepare for MNCs' aptitude exams or for certifications?

I am a final-year engineering student from a not-so-good college. Currently, I’m doing an internship at an AI startup as a DevOps/SRE intern. I’m happy with the job and the company, but I want to explore and learn more, preferably outside my state.

I have completed the AZ-104 Azure Associate certification and am preparing for the CKA and other DevOps-related certifications. However, as a fresher, I’m confused about whether I should focus on certifications or prepare for aptitude and coding tests for big MNCs like TCS, Infosys, Wipro, and IBM.

I personally prefer working in startups because I’ve seen that they offer great learning and growth opportunities. But all my friends and brothers are in big MNCs, and they suggest aiming for MNCs for job security. Please share your experiences and guide me on what I should do.

https://redd.it/1iw4wjo
@r_devops
Production-Ready Coding: Best Practices for Developers

Hey all!
I wanted to share a quick list of my "rules of thumb" for production-ready coding.

Basically, when you want to move from a hobby pet project to a real production application - what is needed?

For me, the list is simple:

0. Code must be compilable :)

1. Code must be readable for others: no one-letter variables, good comments where appropriate, no overly long methods.
2. Code must be tested - not necessarily 100% coverage, but "good" coverage, with different types of tests available: unit, integration, and end-to-end.
3. Code must be documented, at least in the README.md. Better if you also have additional documentation describing the architecture, design decisions, and contribution process.
4. Code must be monitored. There should be at least error logs to standard output, and some way to track infrastructure metrics.
5. Code must be somewhat secure: user input should be sanitized, and something like the OWASP Top 10 should be checked.
6. Code should be deployable via a CI/CD tool.
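As a minimal sketch of point 4, here is one way to emit structured error logs to standard output; the field names are my own choice, not a standard:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON line, easy for log collectors to parse."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

def make_logger(name: str = "app") -> logging.Logger:
    handler = logging.StreamHandler(sys.stdout)   # stdout, not a local file
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.handlers = [handler]
    logger.setLevel(logging.INFO)
    return logger

if __name__ == "__main__":
    make_logger().error("payment failed")
```

Logging to stdout (rather than files) lets the container runtime or log shipper own collection, which matters once the app runs in Kubernetes or ECS.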

What else would you add to the list?

And just in case, as a bit of self-promotion, I added a video about this, describing those topics in a bit more detail - https://youtu.be/cdzrS-w_bJo It would be great if you could like & subscribe :)

https://redd.it/1iwcdur
@r_devops
Are DevOps Under Job Threat?

Hello everyone.
I'm currently tagged as a DevOps Engineer with the following experience:
Azure Web App and VMs, Azure DevOps.
I have 4.2 years of experience since I started my career in the IT industry.
I don't have any experience with K8s, Docker, monitoring, Jenkins, or other such tools.

I want to know how worried I should be about the impact of AI.
Should I change my domain from DevOps to data engineering or anything else?
Which DevOps area is most AI-proof, so that our jobs won't be affected much?

I'm really afraid and in panic mode right now, as people are getting laid off, and these CEOs and big companies are coming up with something new every week claiming AI will impact our jobs.
Please guys, HELP ME!!

https://redd.it/1iwd4yz
@r_devops
Pull request testing on Kubernetes: vCluster for isolation and costs control

This week’s post is the third and final in my series about running tests on Kubernetes for each pull request. In the first post, I described the app and how to test locally using Testcontainers and in a GitHub workflow. The second post focused on setting up the target environment and running end-to-end tests on Kubernetes.

I concluded the latter by mentioning a significant quandary. Creating a dedicated cluster for each workflow significantly impacts the time it takes to run: on GKE, it took between 5 and 7 minutes to spin up a new cluster. If you instead create a GKE instance upstream, you face two issues:

* Since the instance is always up, it raises costs. While they are reasonable, they may become a deciding factor if you are already struggling. In any case, we can leverage the built-in cloud autoscaler. Also, note that the costs mainly come from the workloads; the control plane costs are marginal.
* Worse, some changes affect the whole cluster, e.g., CRD version changes. CRDs are cluster-wide resources, so such changes need a dedicated cluster to avoid incompatibilities. From an engineering point of view, this requires identifying which PRs can run on a shared cluster and which need a dedicated one. Such complexity hinders delivery speed.

In this post, I’ll show how to benefit from the best of both worlds with vCluster: a single cluster with testing from each PR in complete isolation from others.
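To make the idea concrete, a per-PR workflow job might look roughly like this. The step names, install method, and naming scheme are illustrative; check the vCluster docs for the exact CLI flags:

```yaml
# Illustrative GitHub Actions job: one virtual cluster per pull request,
# all hosted inside a single shared "real" cluster.
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install vcluster CLI
        run: |
          curl -sfL -o /usr/local/bin/vcluster \
            https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64
          chmod +x /usr/local/bin/vcluster
      - name: Create an isolated virtual cluster for this PR
        run: vcluster create pr-${{ github.event.number }} --namespace pr-${{ github.event.number }} --connect=false
      - name: Deploy and test inside the vcluster
        run: vcluster connect pr-${{ github.event.number }} -- kubectl apply -f k8s/
      - name: Tear down
        if: always()
        run: vcluster delete pr-${{ github.event.number }}
```

Because each vcluster has its own API server, even cluster-wide objects like CRDs are scoped to the PR, which is exactly the isolation problem described above.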

Read more...

https://redd.it/1iwhrz2
@r_devops
Help - Best way to interview SRE/DevOps

Looking for advice from anyone with experience as a hiring manager or interviewer for an SRE team.

I usually prefer candidates with some HackerRank coding experience, strong Linux administration, Kubernetes expertise, and networking fundamentals. If anyone can share their best practices for evaluating these skills, that would be great.

I need to validate candidates for the following skills:

* Linux Administration (hands-on with Ubuntu)
* Networking Concepts (L2/L3, OSI layers)
* Kubernetes Administration (on-prem)
* Programming - Python/Go (developer-level preferred, but not mandatory)
* Observability Stack (Prometheus, Grafana, Loki, VictoriaMetrics)
* AWS Proficiency
* Ansible (comfortable using it for automation)

The ideal candidate would have 5 years of experience. Again, I am only looking for feedback and tips on the interview process; feel free to share your views.

https://redd.it/1iwjd7s
@r_devops
Looking for a DevSecOps Role - Remote


Hey folks! I'm looking for a DevSecOps role where I can leverage my skills in automation, security, CI/CD, and cloud infrastructure. Experienced in AWS, Kubernetes, Docker, Terraform, and security best practices. Also, have a strong background in SecOps, DevOps, and FinOps.

Open to remote opportunities! Feel free to connect or drop any leads. Cheers!

https://redd.it/1iwm5kh
@r_devops
Simplifying Infrastructure-as-Code with Our SaaS Solution

Imagine deploying powerful cloud infrastructure, like Google Cloud Storage or a full virtual machine, without ever needing to write a single line of code or wrestle with complex tools. Our Software-as-a-Service (SaaS) application takes the headache out of Infrastructure-as-Code (IaC) and puts it into the hands of anyone, regardless of experience. Whether you're a small business owner, a startup founder, or a developer looking to save time, we make Google Cloud Platform (GCP) deployments effortless, secure, and scalable.

What We Offer

Our SaaS is built for simplicity and power:

* No Expertise Needed: You don’t need to know Terraform, IaC, or even how GCP fully works. Just connect your GCP project, pick a service—like Google Cloud Storage—and hit "Deploy." We handle the rest.
* Ready-Made Building Blocks: We maintain a library of pre-built Terraform modules (think of them as blueprints for cloud services) in our own GitHub repository. These are battle-tested and ready to go.
* Personalized Deployment: Your infrastructure lives in your GCP project, not ours. We use your authorized credentials to set everything up exactly where you want it.
* Future-Proof Growth: Starting with services like Google Cloud Storage, we’re designed to easily add more GCP offerings as your needs evolve.

# How It Works: The Big Picture

Here’s what happens behind the scenes when you use our SaaS:

1. You Connect: Through a clean, intuitive interface, you link your GCP project to our app.
2. You Choose: Pick a service from our list, say, a secure storage bucket for your files.
3. We Deploy: Our system fetches the right Terraform module from our GitHub repo, customizes it for your project, and deploys it to GCP using your secure credentials. Done!

You get enterprise-grade infrastructure without the complexity.

# The Tech That Powers It

* Frontend: This is where you log in, connect your GCP account, and make selections.
* Backend: It securely handles your authentication, fetches the Terraform modules, and executes the deployment process.
* Terraform Magic: We store our predefined Terraform modules in a GitHub repository (saas-infra-modules). These are reusable scripts that define how services like Google Cloud Storage should be built in GCP. When you deploy, we tailor and apply them to your project.
* Scalability: Our architecture is modular. Adding support for new GCP services—like Compute Engine or BigQuery—is as simple as dropping new Terraform modules into our repo.

# Authentication: How We Keep It Secure and Simple

Let’s talk about how we connect to your GCP project—because security and trust are non-negotiable. We use a standard called OAuth 2.0, the same technology you’ve likely used to log into apps with your Google account. Here’s how it works and why it’s safe:

1. Your Permission: When you connect your GCP project, our app redirects you to a Google login page. You sign in with your Google account—the one tied to your GCP project—and grant us permission to manage resources on your behalf. This happens in a secure, Google-controlled environment, not ours.
2. Limited Access: Google generates an OAuth token (a kind of digital key) that we use to act only within your project and only for the tasks you approve—like deploying a storage bucket. This token has an expiration date and can be revoked by you at any time through your Google account settings.
3. No Stored Secrets: We don’t ask for your GCP passwords or private keys. The OAuth token is temporary and encrypted, ensuring your credentials stay yours alone.
4. Our Side: To fetch our Terraform modules from GitHub, we use a Personal Access Token (PAT)—but that’s our key, not yours. It’s locked down to read-only access for our repo, keeping everything compartmentalized.

Think of it like giving a trusted contractor a keycard to renovate one specific room in your house. They can’t wander into other rooms, and you can take the keycard back whenever you want. That’s how we authenticate and protect your project.

# Why This Matters to You

* Time Savings: Deploying infrastructure that might take hours or days (and a hired expert) now takes minutes.
* Cost Efficiency: No need to hire IaC specialists or spend weeks learning Terraform. Our SaaS is your shortcut.
* Control: Your infrastructure lives in your GCP account, under your billing and ownership—not some third-party sandbox.
* Security: With Google’s OAuth and our transparent process, you’re protected at every step.

# The Vision

Today, it’s Google Cloud Storage. Tomorrow, it’s Compute Engine, Kubernetes, or whatever GCP service you need. Our SaaS grows with you, simplifying the cloud so you can focus on your business—not the tech.

Ready to deploy your first service? Let’s connect your GCP project and get started—no coding required.


If you found this service helpful, how much would you be willing to pay to use it?

If you’re interested in this service, please reach out to join our waitlist! When we launch, you’ll get one month of free usage.

https://redd.it/1iwne3x
@r_devops
Best practices on storing user-uploaded files in containerized environment

I’m working on a job board and have recently containerized our Next.js/Node.js application using Docker (deployed on AWS ECS). One big technical hurdle is handling user-uploaded files (resumes) in a containerized setup.

Currently I'm writing these files to the container’s filesystem—definitely not ideal! What's a clean and simple approach to file storage that aligns with DevOps best practices? Specifically:

1. Persistent storage options: Which solutions work best for ephemeral containers? An NFS volume, EFS, or a cloud storage bucket (e.g., S3)?
2. Deployment pipeline integration: How do you usually handle storing or moving uploads during blue/green or rolling deployments?
3. Security considerations: Any recommended steps to ensure data integrity and secure transfer? (e.g., encryption in transit, SSE for S3, etc.)
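On the S3 route (questions 1 and 3), a minimal Terraform sketch with versioning, default server-side encryption, and public access blocked; the bucket name is a placeholder:

```hcl
# Hypothetical resume bucket: private, versioned, encrypted at rest.
resource "aws_s3_bucket" "resumes" {
  bucket = "example-jobboard-resumes" # placeholder name
}

resource "aws_s3_bucket_versioning" "resumes" {
  bucket = aws_s3_bucket.resumes.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "resumes" {
  bucket = aws_s3_bucket.resumes.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms" # or "AES256" for SSE-S3
    }
  }
}

resource "aws_s3_bucket_public_access_block" "resumes" {
  bucket                  = aws_s3_bucket.resumes.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

With uploads going straight to S3 (e.g. via presigned URLs over HTTPS), the containers stay stateless, which also answers question 2: blue/green and rolling deployments need no file migration at all because no files live on the tasks.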

Ty!

https://redd.it/1iwohig
@r_devops
Keycloak on EKS Failing to Mount AWS Secrets Manager Credentials

Hey folks,
I’m running Keycloak on an EKS (v1.27) cluster and having trouble mounting secrets from AWS Secrets Manager using the Secrets Store CSI Driver (v1.3.4). Both the Keycloak and PostgreSQL pods are stuck in a `CreateContainerConfigError` state with errors like:

Error: secret "keycloak-secrets" not found
csi-secrets-store-controller: file matching objectName [secret] not found in pod


Below are the relevant details of my setup:

# Environment

* **EKS version**: 1.27
* **Secrets Store CSI Driver**: 1.3.4
* **AWS Secrets Manager**: Verified the secrets exist
* **IAM Policies**: Node role and/or IRSA with `SecretsManagerReadWrite` policy

# SecretProviderClass

Here’s an excerpt (Terraform format) showing how I’m configuring my `SecretProviderClass`:

    resource "kubernetes_manifest" "keycloak_secret_provider" {
      manifest = {
        apiVersion = "secrets-store.csi.x-k8s.io/v1"
        kind       = "SecretProviderClass"
        metadata = {
          name      = "keycloak-secret-provider"
          namespace = "my-namespace"
        }
        spec = {
          provider = "aws"
          secretObjects = [{
            secretName = "keycloak-secrets"
            type       = "Opaque"
            data = [{
              key        = "postgres-password"
              objectName = "nonprod-secret-postgres_keycloak_auth"
            }]
          }]
        }
      }
    }


# Pod/Deployment Snippet

Here’s a condensed example of how my Keycloak Deployment references the `SecretProviderClass`:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: keycloak
      namespace: my-namespace
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: keycloak
      template:
        metadata:
          labels:
            app: keycloak
        spec:
          securityContext:
            fsGroup: 1000
          serviceAccountName: keycloak-sa # (Has IRSA or node role with Secrets Manager perms)
          containers:
            - name: keycloak
              image: quay.io/keycloak/keycloak:21.1
              volumeMounts:
                - name: secrets-store
                  mountPath: /mnt/secrets
                  readOnly: true
              # other container configs ...
          volumes:
            - name: secrets-store
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: keycloak-secret-provider


# What’s Happening

1. Pods fail to start with `CreateContainerConfigError`.
2. Logs/Events complain that `secret "keycloak-secrets" not found`.
3. `csi-secrets-store-controller` logs say `file matching objectName [secret] not found in pod`.

# Troubleshooting So Far

* **AWS Secrets Manager**: Confirmed the secret `nonprod-secret-postgres_keycloak_auth1` exists.
* **IAM Policies**: Verified the node role (or service account with IRSA) has `secretsmanager:GetSecretValue` and other necessary permissions.
* **Terraform**: No drift reported; everything else is applying cleanly.
* **Namespace Check**: Both the `SecretProviderClass` and Keycloak pods are in the same namespace (`my-namespace`).
* **Multiple Pod Restarts**: No change in error status.

# Potential Issues / Questions

1. **Permission Gaps?** Is there a hidden or additional permission needed for the node (or service account) beyond `SecretsManagerReadWrite`?
2. **Secret Sync vs. Ephemeral Mount?** Am I accidentally referencing a Kubernetes Secret (`keycloak-secrets`) that isn’t being created because I only set up ephemeral volume mounting?
* If I need a native K8s Secret, do I have to enable `syncSecret.enabled: true` in the SecretProviderClass?
3. **Name Mismatch?** Could there be a subtle naming or label mismatch in my code (`keycloak-secret-provider` vs. `keycloak_secrets`), or a missing `metadata.name` or `namespace`?
4. **Volume Permissions?** Does `fsGroup: 1000` cause any issues with how the CSI driver writes secret files?
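On question 2, a common cause of exactly this error is that `secretObjects` only materializes a Kubernetes Secret when secret syncing is enabled on the driver *and* some pod actually mounts the CSI volume. A sketch of what the raw manifest would need (the `parameters.objects` list, which the Terraform excerpt above doesn't show, is required by the AWS provider; field names per the Secrets Store CSI driver docs):

```yaml
# Two things are needed for "keycloak-secrets" to appear as a K8s Secret:
#   1. the driver installed with the Helm value syncSecret.enabled=true
#   2. at least one running pod mounting a volume that references this class
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: keycloak-secret-provider
  namespace: my-namespace
spec:
  provider: aws
  parameters:
    objects: |                     # AWS provider requires this list
      - objectName: "nonprod-secret-postgres_keycloak_auth"
        objectType: "secretsmanager"
  secretObjects:
    - secretName: keycloak-secrets
      type: Opaque
      data:
        - key: postgres-password
          objectName: "nonprod-secret-postgres_keycloak_auth"
```

If the Deployment references `keycloak-secrets` via `envFrom`/`secretKeyRef` before any pod has mounted the volume, you get the chicken-and-egg `CreateContainerConfigError` described above.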

# Additional Info

* **Logs**: I’ve checked the CSI driver logs in `kube-system` (or wherever it’s installed). They only say “file not found” which hints it can’t read or place the files in `/mnt/secrets`.
* **Secrets Manager Tests**: I can successfully `aws secretsmanager get-secret-value` from my workstation using the same IAM role to confirm the secret is accessible.
* **Terraform**: My `kubernetes_manifest` might need more explicit fields. But so far, I haven’t spotted an obvious misconfiguration.

# Key Things I’d Love Feedback On

* Has anyone run into this “file matching objectName not found” error with Secrets Store CSI on EKS?
* Is there a detail or annotation required to mount AWS secrets as ephemeral files under `/mnt/secrets`?
* Am I missing a step in the process of syncing the AWS Secret to a native K8s Secret if that’s what my app is expecting?

Any insights, especially from folks who have Keycloak + AWS Secrets Manager working in EKS, would be hugely appreciated. Thank you! I feel like I am between a rock and a hard place and have been going in circles with this.

https://redd.it/1iwnxmj
@r_devops
US cloud providers and Europe

Hi !
So I live in Europe, and we all know about current events in the US. A lot of companies are talking about US cloud providers (and that they should leave them).
Many of them cite the GDPR (personal data protection in the EU, known as RGPD in French) and the fact that the US can access data stored on US providers' servers at will (even when hosted in the EU).
What do you think about this? Does Europe need to worry?

https://redd.it/1iwqs76
@r_devops
Looking for a Devops/Data Engineer Job

An individual with 1 year 9 months of industry experience at an MNC, looking for a job to learn and grow more.



https://redd.it/1iwvf6r
@r_devops
Best practices for managing schema updates during deployments (both rollout and rollback)

Hello there,

while walking my devops learning path, I started wondering about the industry best practices for the following use case:

1. app container gets updated from v1 to v2
2. database schema needs to be upgraded (new table, new columns)
3. (I suppose) the app has all the migration SQL commands to run on startup once it detects that the schema needs to change
4. app is online, great
5. OUCH! Something went wrong. Let's roll back... two scenarios:
1. data has been added to the DB in the meantime; we need to save that data and merge it later
2. let's ignore the new data and just revert ASAP

What do you think about these two scenarios? Should the app be responsible for everything, or is it a separate process that isn't automatable?
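One widely used answer to step 2 is the expand/contract pattern: the v2 migration makes only backward-compatible ("expand") changes, so v1 still runs against the new schema and rollback needs no schema change and loses no data. A toy sketch with SQLite (the table and column names are made up):

```python
import sqlite3

def migrate_v1_to_v2(conn: sqlite3.Connection) -> None:
    """Expand step: additive, backward-compatible changes only.
    v1 code keeps working because the new column is nullable and
    the new table is simply ignored by the old app."""
    conn.execute("ALTER TABLE users ADD COLUMN nickname TEXT")  # nullable
    conn.execute("CREATE TABLE IF NOT EXISTS audit_log ("
                 "id INTEGER PRIMARY KEY, entry TEXT)")
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")  # v1
    conn.execute("INSERT INTO users (name) VALUES ('alice')")

    migrate_v1_to_v2(conn)

    # A v1-style query still works after the v2 migration, so rolling the app
    # back to v1 requires no schema rollback and keeps rows written under v2.
    print(conn.execute("SELECT id, name FROM users").fetchall())
```

Destructive ("contract") changes, like dropping the old column, are deferred to a later release once no v1 instances remain; that largely dissolves the choice between the two rollback scenarios.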

Thanks for any explanation.

https://redd.it/1iwy2su
@r_devops
Help Shape the Future of Incident Management! Seeking Insights from Engineering Teams

Ever found yourself wishing your incident response process was less "pulling hair out" and more "smooth sailing"? Well, here’s your chance to help make that happen! We’ve put together a survey because we’re dying to know how you handle the chaos when everything hits the fan.

From alert avalanches to post-mortem ghost towns, tell us what ticks you off and what tools save your bacon. It’s short, sweet, and your chance to rant (constructively!) about the tools and trials of your trade.

👉 Dive into the survey here: Incident Response 2025 Survey

Spare us 10 minutes (it's a coffee break well spent!) and who knows? Your insights might just lead to fewer late-night incident calls and more time for actual life. Let’s face it, we could all use a bit more of that.

https://redd.it/1iwywq8
@r_devops
One-time payment vs. subscription 🔥 what actually makes more money?

I built a habit-tracking app and launched it six months ago. Initially, I made it a one-time purchase for $9.99. Sales were okay, but nothing crazy. Recently, I switched to a $3.99/month subscription model, and suddenly my revenue is way higher... even with fewer purchases.

But now I’m getting tons of complaints from users who bought it before and feel “cheated.” Some are leaving 1-star reviews, and I feel like I burned my early adopters.

Did I screw up? Should I have offered lifetime access at a higher price? If you’ve switched models before, what worked best for you?

https://redd.it/1iwyszm
@r_devops
Why pay $150 per parallel e2e test, am I missing something?

Sharding Playwright across a few runners isn't particularly tricky. So I'm confused how Sauce Labs and BrowserStack can charge $150 per parallel test in their virtual cloud. That's not even on real devices.

Is there something I'm missing that makes this appealing? Maybe it's only relevant for bigger test suites for reasons I haven't encountered yet.
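For reference, the built-in sharding alluded to above is just a CI matrix over Playwright's `--shard` flag; a minimal GitHub Actions sketch:

```yaml
# Four parallel runners, each executing a quarter of the suite via
# Playwright's built-in --shard flag (no third-party grid involved).
jobs:
  e2e:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npx playwright test --shard=${{ matrix.shard }}/4
```

What the paid grids add on top is mainly the browser/OS matrix, real mobile devices, and hosted session recordings, which may or may not justify the price for a given suite.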

https://redd.it/1ix065e
@r_devops
Just tried a new profiler: what would you optimize first?

I was looking for better ways to debug performance bottlenecks and came across a new profiling tool that just dropped on GitHub. Decided to test it out on one of our services, and the results were... eye-opening.

The flame graph it generated (screenshot attached) revealed:

- A DB operation consuming way more resources than expected... we thought it was optimized, but apparently not.

- Some unexpected runtime garbage-collection overhead that wasn’t on our radar at all.

For those who’ve worked with flame graphs before, where would you start optimizing? Do I tackle the DB queries first or look at memory management?

Screenshot is attached here: https://drive.google.com/file/d/1QZJHtEyRxDr2LfIW8VIDVD6sZwokCneo/view?usp=sharing

https://redd.it/1ix0wfc
@r_devops
GitHub Actions, Pulumi GCP, Artifact Registry and Docker - Cannot perform an interactive login from a non TTY device

Hi everyone! [I'm cross-posting](https://stackoverflow.com/questions/79463461/github-actions-pulumi-gcp-artifact-registry-and-docker-cannot-perform-an-int) from Stack Overflow.



I'm using Pulumi in GitHub Actions to deploy to GCP's Artifact Registry with Workload Identity Federation. When it reaches Pulumi's code to push to Artifact Registry, I receive:


```
docker:image:Image temporal-worker-dev {"Client":{"Platform":{"Name":"Docker Engine - Community"},"Version":"26.1.3","ApiVersion":"1.45","DefaultAPIVersion":"1.45","GitCommit":"b72abbb","GoVersion":"go1.21.10","Os":"linux","Arch":"amd64","BuildTime":"Thu May 16 08:33:35 2024","Context":"default"},"Server":{"Platform":{"Name":"Docker Engine - Community"},"Components":[{"Name":"Engine","Version":"26.1.3","Details":{"ApiVersion":"1.45","Arch":"amd64","BuildTime":"Thu May 16 08:33:35 2024","Experimental":"false","GitCommit":"8e96db1","GoVersion":"go1.21.10","KernelVersion":"6.8.0-1021-azure","MinAPIVersion":"1.24","Os":"linux"}},{"Name":"containerd","Version":"1.7.25","Details":{"GitCommit":"bcc810d6b9066471b0b6fa75f557a15a1cbf31bb"}},{"Name":"runc","Version":"1.2.4","Details":{"GitCommit":"v1.2.4-0-g6c52b3f"}},{"Name":"docker-init","Version":"0.19.0","Details":{"GitCommit":"de40ad0"}}],"Version":"26.1.3","ApiVersion":"1.45","MinAPIVersion":"1.24","GitCommit":"8e96db1","GoVersion":"go1.21.10","Os":"linux","A
docker:image:Image temporal-worker-dev error: Error: Cannot perform an interactive login from a non TTY device
docker:image:Image temporal-worker-dev docker login failed
docker:image:Image remix-app-dev error: Error: Cannot perform an interactive login from a non TTY device
docker:image:Image remix-app-dev docker login failed
pulumi:pulumi:Stack alertdown-infra-dev running error: an unhandled error occurred: program failed:
docker:image:Image remix-app-dev **failed** 1 error
docker:image:Image temporal-worker-dev **failed** 1 error
pulumi:pulumi:Stack alertdown-infra-dev **failed** 1 error
Diagnostics:
docker:image:Image (remix-app-dev):
error: Error: Cannot perform an interactive login from a non TTY device
docker:image:Image (temporal-worker-dev):
error: Error: Cannot perform an interactive login from a non TTY device
pulumi:pulumi:Stack (alertdown-infra-dev):
error: an unhandled error occurred: program failed:
waiting for RPCs: docker login failed with error: exit status 1
```

I have two docker containers, and this is my yaml:

```
name: Deploy to Staging
on:
  push:
    branches:
      - main
permissions:
  actions: read
  contents: read
  id-token: write
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v2
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'pnpm'
      - name: Install dependencies
        run: pnpm install --frozen-lockfile
      - name: Build affected apps
        run: pnpm exec nx affected -t build

  deploy:
    runs-on: ubuntu-latest
    environment: staging
    needs: [ci]
    steps:
      - uses: actions/checkout@v4
      - name: Create .env file
        run: |
          cat << EOF > libs/infrastructure/src/pulumi/.env
          PULUMI_MAIN_SERVICE_ACCOUNT_STAGING="${{ secrets.PULUMI_MAIN_SERVICE_ACCOUNT_STAGING }}"
          PULUMI_WORKLOAD_IDENTITY_PROVIDER_ID_STAGING="${{ secrets.PULUMI_WORKLOAD_IDENTITY_PROVIDER_ID_STAGING }}"
          PULUMI_DOPPLER_REMIX_PROJECT="remix-app"
          PULUMI_DOPPLER_REMIX_STAGING_TOKEN="${{ secrets.PULUMI_DOPPLER_REMIX_STAGING_TOKEN }}"
          PULUMI_DOPPLER_REMIX_STAGING_BRANCH_NAME="stg"
          PULUMI_DOPPLER_TEMPORAL_PROJECT="temporal-worker"
          PULUMI_DOPPLER_TEMPORAL_STAGING_TOKEN="${{ secrets.PULUMI_DOPPLER_TEMPORAL_STAGING_TOKEN }}"
          PULUMI_DOPPLER_TEMPORAL_STAGING_BRANCH_NAME="stg"