Devops job market
Just curious how the DevOps job market is compared to software engineering. Is it as bad as software engineering these days?
https://redd.it/1mok4we
@r_devops
Retraining into DevOps/cloud with no prior experience—Is “DevOps Beginners to Advanced with Projects” a solid starting point?
>Hey everyone, I’m looking to switch into a DevOps or cloud role for a better work–home balance and have zero background in IT or ops. I’ve found the Udemy course “DevOps Beginners to Advanced with Projects” (by Imran Teli). It’s a bestseller with a 4.6 rating, updated August 2025, with over 54 hours of lessons—tools include Linux, scripting, AWS, Jenkins, GitHub Actions, Ansible, Docker, Kubernetes, Terraform, etc.
>
>The hands-on, project-based format seems promising, but I wonder whether it’s too broad. Have any of you taken this course (or something similar)? Does it give a solid foundation? What additional resources or next steps would you recommend to truly understand the why behind the tools, and start applying them effectively in real-world scenarios?
>
>Appreciate any advice—even on hands-on labs, free resources, certification paths, or community groups would be really helpful.
https://redd.it/1mokol5
@r_devops
Are LangGraph + Temporal a good combo for automating KYC/AML workflows to cut compliance overhead?
I’m designing a compliance-heavy SaaS platform (real estate transactions) where every user role—seller, investor, wholesaler, title officer—has to pass full KYC/KYB, sanctions/PEP screening, and milestone-based rescreening before they can act.
The goal:
* Automate onboarding checks, sanctions rescreens, and deal milestone gating
* Log everything immutably for audit readiness (no manual report compilation)
* Trigger alerts/escalations if compliance requirements aren’t met
* Reduce the human compliance team’s workload by ~70% so they only handle exceptions
I’m considering using LangGraph to orchestrate AI agents for decisioning, document validation, and notifications, combined with Temporal to run deterministic workflows for onboarding, milestone checks, and partner webhooks (title/escrow updates).
Question to the community:
* Has anyone paired LangGraph (or similar LLM graph orchestration) with Temporal for production-grade compliance operations?
* Any pitfalls in using Temporal for long-lived KYC/AML processes (14-day onboarding timeouts, daily sanctions cron, etc.)?
* Does this combo make sense for reducing manual workload in a high-trust, regulated environment, or would you recommend another orchestration stack?
Looking for insights from anyone who’s run similar patterns in fintech, proptech, or other regulated SaaS.
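One pattern worth sketching for anyone weighing this combo: Temporal requires workflow code to be deterministic, so the LangGraph/LLM calls belong inside activities, while the workflow only branches on their recorded results. Below is a minimal stdlib illustration of that boundary — the names (`onboarding_workflow`, `screen_activity`, `AuditLog`) are illustrative, not from any SDK; in real temporalio code these would be a `@workflow.defn` class and `@activity.defn` functions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditLog:
    """Append-only event log; stands in for immutable audit storage."""
    entries: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.entries.append(event)

def onboarding_workflow(user_id: str,
                        screen_activity: Callable[[str], str],
                        audit: AuditLog) -> str:
    """Deterministic shell: all branching is on the activity *result*.
    The LLM/KYC-vendor call lives inside screen_activity, which Temporal
    would retry and persist in workflow history."""
    audit.record(f"onboarding_started:{user_id}")
    verdict = screen_activity(user_id)  # non-deterministic work isolated here
    if verdict == "clear":
        audit.record(f"kyc_passed:{user_id}")
        return "approved"
    audit.record(f"escalated:{user_id}")
    return "needs_review"
```

Long-lived timers (the 14-day onboarding timeout) and the daily sanctions cron would hang off the same deterministic shell via Temporal timers and schedules rather than living in the agent layer.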
https://redd.it/1mokg0f
@r_devops
Trading Support Engineer looking to transition into SRE/DevOps after a layoff. What are my chances?
I am currently weighing my options as I recently got laid off and I see no future in the support engineering role.
It really sucks to be in this position, as I know that having different titles on my resume can hurt my chances because I'm not on a sensible trajectory or something.
My experience:
In the past I worked as a Quality Analyst for Facebook (2 years) under contract with Wipro, a testing engineer for Facebook (2 years, also under Wipro), and a quality assurance engineer for a year at a lesser-known company. In my current role as a Support Engineer with 4 years of experience, I manage incidents and failovers, handle config management, troubleshoot Kubernetes services, do monitoring and alerting, approve releases, and do rollbacks. I support a low-latency trading platform at a hedge fund and often have to investigate networking problems using Grafana and look at logs from all types of services.
Transition into Devops/SRE:
As I did my research, I came across DevOps as the path to take when transitioning to SRE roles, but I don't have experience in the following: cloud, Linux, Terraform, deployments. I have basic experience with Python and SQL from data analytics projects, and I use Grafana and ELK, but I don't actually build the dashboards. I know how to use ArgoCD and have used Jenkins before, although I've forgotten most of it. I have exposure to most tools on a superficial level.
My plan:
I am considering doing the Cloud Computing and DevOps Certification Program from Purdue and Simplilearn to get experience in these areas. I think it will give me the guidance and structure I need and the hands-on experience I'm lacking, as it's project-heavy. After finishing, I would take some AWS certs relevant to the roles I'm applying for.
My questions:
- Has anyone heard of or taken this certification?
- Is this line of work affected by the tech layoffs?
- What are my chances of entering a well-known company with my experience and the certifications?
- Is support engineering -> DevOps or SRE a good transition path, or are these not related?
- Any advice anyone can give me as I navigate my options in DevOps and SRE?
Side note: I know my work is reactive and DevOps/SRE is proactive. But I think it helps that I deal with live issues in production environments where the goal is to reduce downtime?
https://redd.it/1mos4j2
@r_devops
Ask for Career Shift Advice
Can I transition from business to DevOps? I'm 27 years old, so I think I'm too old to start learning something heavy like DevOps from scratch without knowing any programming language. What would you recommend I start with first?
https://redd.it/1moxsir
@r_devops
VSCode extensions
Which extensions helped you the most while using k8s, TF, Fastlane, GitLab, etc.?
https://redd.it/1moysyx
@r_devops
Prep for AWS DevOps role - associate level?
Hi all,
So I have been a sysadmin for the past 2½ years and am now looking for a DevOps role (preferably AWS). Finally, after applying for 100+ jobs, I was shortlisted for one, and the interview is scheduled for the 24th of August (10 days to prep). I don't want to blow it, so how would you recommend I prepare?
About me:
2.6 years' experience as an API Gateway sysadmin
Good in Linux, Python, Docker, GitHub Actions, API gateway (although that doesn't matter in this job, I guess)
Moderate in Ansible, AWS (EC2, ECS, IAM, networking basics)
Below par in Kubernetes in general, AWS EKS, S3, CodeDeploy, other services, IaC
A few things: most of the DevOps tools I've learnt myself, and I don't have experience using them at a prod/enterprise level. Even AWS I've learned mostly through the free tier.
I believe EKS/Kubernetes is something I can't slack on, so should I try running a dummy EKS cluster? (It's not in the free tier, so I haven't tried it yet.)
So how would I get interview-ready? Any tips/resources would be helpful.
Thanks in advance.
P.S. I've been genuine in my resume and during the first screening about my background and knowledge.
https://redd.it/1mozfb4
@r_devops
EBPF tools moving fast, but docs still a mess
Been playing around with eBPF lately for some observability stuff. The tools are getting really good, but finding clear info on kernel changes or verifier errors is still painful.
How are you all keeping up? Blogs? Just trial and error?
https://redd.it/1mp015h
@r_devops
Company doesn't pay for training - should I leave ?
I work in the UK as a Junior DevOps Engineer on 40k per year. I have been with my company a year now.
I have managed to touch a wide range of the DevOps tool stack and I feel quite confident in my skills.
I've been looking for new roles to hopefully move into the mid level. And although I know experience is better than certs, every single recruiter I have spoken to has highlighted my lack of certificates.
The problem is that my company doesn't pay for them. They refuse to buy any online courses, and they even refuse to provide us with a sandpit account or learning resources on AWS.
I don't earn a lot of money, but I feel like saving a bit and trying to get the AWS SAA under my belt with my own money.
Does anyone know any ways I can make this cheaper, or have better recommendations on what I should do?
https://redd.it/1moy0q1
@r_devops
Understanding SAP
I’ve got a web shop project to manage that creates SAP orders, and I need to get comfortable with the way SAP operates. Every company has its own implementation, so I imagine there is no plug-and-play strategy, but the docs I got are shit, so I’m hoping there is some common ground. I have started going through BAPI tutorials, since it’s the outer communication endpoint, and maybe I’ll be able to understand the docs a little more.
I’ll appreciate any advice 🙏
https://redd.it/1mp1xag
@r_devops
Scaling open-source Jenkins vs. adopting CloudBees: What's the real tipping point?
Looking for some real-world takes on a Jenkins scaling dilemma.
I work for a company of ~1,500 employees. Our self-managed Jenkins is hitting ~450 concurrent jobs, and we expect that number to keep climbing. We're at a crossroads: keep throwing more hardware at it, or seriously consider CloudBees, which offers horizontal scaling along with other enterprise features.
I'm trying to figure out the real tipping point.
For CloudBees customers: What pain point finally made you adopt CloudBees? Did it truly solve your scaling problems, and was it worth the cost?
For Jenkins admins: How have you scaled past this point? Is there a practical limit to just beefing up the hardware?
Genuinely curious to hear your experiences so I can make an informed decision. Thanks!
https://redd.it/1mp3bmz
@r_devops
Pro tip - avoid working at small no-names at all costs
Out of my 3-year career, 2 years were spent at a small, unknown eCommerce SaaS (it doesn't matter that they have/had interesting clients), and my job hunt is basically just:
* It doesn't matter that I have the skills that I have.
* It doesn't matter what I've done or achieved.
* It doesn't matter if I'm an exact match to the job description.
* Nothing about me, my work history, etc. matters.
* Because I didn't spend enough time at a bigger/more impactful company, and so I couldn't possibly be a viable person to hire.
I had 3 separate calls today all mentioning this directly. Back to square one, again (I'm crashing out if you can't tell).
https://redd.it/1mp5q6s
@r_devops
I built a LeetCode-style site for real-world Linux & DevOps debugging challenges
While preparing for my Meta Production Engineer interview, I realized there’s no good place to practice these Linux operations problems:
* Linux troubleshooting
* Bash scripting & automation
* Performance bottlenecks
* Networking misconfigurations
* Debugging weird production issues
So I built [sttrace.com](https://sttrace.com/). It's a LeetCode-like platform, but for real-world software engineering ops problems.
Right now it only has 6 questions, but I will add more soon. Let me know what you think.
🔗 [sttrace.com](https://sttrace.com/)
**PS:** Apologies if the website feels slow; it's currently hosted on my homelab.
https://redd.it/1mp6ott
@r_devops
Need Help with Elasticsearch, Redis, and Weighted Round Robin for Product Search System (Newbie Here!)
Hi everyone, I'm working on a search system for an e-commerce platform and need some advice. I'm a bit new to this, so please bear with me if I don't explain things perfectly. I'll try to break it down and would love your feedback on whether my approach makes sense or if I should do something different. Here's the setup:
# What I'm Trying to Do
I want to use **Elasticsearch** (for searching products) and **Redis** (for caching results to make searches faster) in my system. I also want to use **Weighted Round Robin (WRR)** to prioritize how products are shown. The idea is to balance **sponsored products** (paid promotions) and **non-sponsored products** (regular listings) so that both get fair visibility.
* **Per page**, I want to show **70 products**, with **15 of them being sponsored** (from different indices in Elasticsearch) and the rest non-sponsored.
* I want to split the sponsored and non-sponsored products into **separate WRR pools** to control how they’re displayed.
# My Weight Calculation for WRR
To decide which products get shown more often, I'm calculating a **weight** based on:
* **Product reviews** (positive feedback from customers)
* **Total product sales** (how many units sold)
* **Seller feedback** (how reliable the seller is)
Here's the formula I'm planning to use:
`Weight = 0.5 * (1 + log(productPositiveFeedback)) + 0.3 * (1 + log(totalProductSell)) + 0.2 * (1 + log(sellerFeedback))`
To make sure big sellers don’t dominate completely, I want to **cap the weight** in a way that balances things for new sellers. For example:
* If the calculated weight is above **10**, it gets counted as **11** (e.g., actual weight of 20 becomes 11).
* If it’s above **100**, it becomes **101** (e.g., actual weight of 960 becomes 101).
* So, a weight of **910** would count as **100**, and so on.
This way, I hope to give newer sellers a chance to compete with big sellers. **Question 1: Does this weight calculation and capping approach sound okay? Or is there a better way to balance things?**
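For what it's worth, the formula is easy to prototype. The capping examples in the post read slightly inconsistently, so the sketch below uses one interpretation (values above a tier threshold collapse to just past it) and floors inputs at 1 so `log(0)` can't occur — treat it as a starting point, not a recommendation.

```python
import math

def _bounded_log(x: float) -> float:
    """1 + ln(x), flooring x at 1 so log is always defined."""
    return 1 + math.log(max(x, 1))

def raw_weight(positive_feedback: int, total_sold: int, seller_feedback: int) -> float:
    # The post's formula, with weights 0.5 / 0.3 / 0.2
    return (0.5 * _bounded_log(positive_feedback)
            + 0.3 * _bounded_log(total_sold)
            + 0.2 * _bounded_log(seller_feedback))

def capped_weight(w: float) -> float:
    # One reading of the tiered cap: values past a threshold collapse
    # to just above it, so big sellers can't dominate the rotation.
    if w > 100:
        return 101
    if w > 10:
        return 11
    return w
```

Because the logs already compress large values hard, it is worth checking with real data whether the extra tier cap is even needed.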
# My Search Process
Here’s how I’m planning to handle searches:
1. When someone searches (e.g., "GTA 5"), the system first checks **Redis** for results.
2. If it’s not in Redis, it queries **Elasticsearch**, stores the results in Redis, and shows them on the UI.
3. This way, future searches for the same term are faster because they come from Redis.
**Question 2: Is this Redis + Elasticsearch approach good? How many products should I store in Redis per search to keep things efficient?** I don’t want to overload Redis with too much data.
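The flow in steps 1–3 is the classic cache-aside pattern. A toy sketch follows — a plain dict stands in for Redis `GET`/`SETEX`, and `MAX_CACHED_HITS` is an assumed knob that answers the "how many products per search" question: cache only the first page or two, not the full result set.

```python
import json
import time

CACHE_TTL_S = 300       # short TTL keeps stale rankings from lingering
MAX_CACHED_HITS = 200   # cap per term so cache memory stays bounded

_cache: dict[str, tuple[float, str]] = {}  # stand-in for Redis GET/SETEX

def search(term: str, es_query) -> list:
    """Cache-aside: check the cache, fall back to Elasticsearch,
    then cache only the top N results for the term."""
    entry = _cache.get(term)
    if entry is not None and time.time() - entry[0] < CACHE_TTL_S:
        return json.loads(entry[1])          # cache hit
    results = es_query(term)                 # cache miss: query Elasticsearch
    _cache[term] = (time.time(), json.dumps(results[:MAX_CACHED_HITS]))
    return results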
# Handling Categories
My products are also organized by **categories** (e.g., electronics, games, etc.). **Question 3: Will my weight calculation mess up how products are shown within categories?** Like, will it prioritize certain products across all categories in a weird way?
# Search Term Overlap Issue
I noticed that if someone searches for **"GTA 5"** and I store those results in Redis, a search for just **"GTA"** might pull up a lot of the same GTA 5 products. Since both searches have similar data, **Question 4: Could this cause problems with how products are prioritized?** Like, is one search getting higher priority than it should?
# Where to Implement WRR
Finally, I’m unsure where to handle the **Weighted Round Robin logic**. Should I do it in **Elasticsearch** (when fetching results) or in **Redis** (when caching or serving results)? **Question 5: Which is better for WRR, and why?**
# Note for Readers
I’m pretty new to building systems like this, so I might not have explained everything perfectly. I’ve read about Elasticsearch, Redis, and WRR, but putting it all together is a bit overwhelming. I’d really appreciate it if you could explain things in a simple way or point out any big mistakes I’m making. If you need more details, let me know!
Thanks in advance for any help! 🙏
https://redd.it/1mpbkba
@r_devops
Tools to generate CycloneDX 1.6 SBOM from GitHub/Azure DevOps repository dependencies (Django backend)
I’m working on a backend application in Django where I’ll receive a repository (either from Azure DevOps or GitHub) and need to generate an SBOM (Software Bill of Materials) based on the **CycloneDX 1.6** standard.
The goal is to analyze the dependencies of that repository (language/framework agnostic if possible, but primarily Python/Django for now) and output an SBOM in JSON format that complies with CycloneDX 1.6.
I’m aware that GitHub has some APIs that could help, but Azure DevOps does not seem to have an equivalent for SBOM generation, so I might need to clone the repo and run the analysis locally.
**Questions:**
* What tools or libraries would you recommend for generating a CycloneDX 1.6 SBOM from a given repository’s dependencies?
* Are there CLI tools or Python packages that can parse dependency manifests (e.g., `requirements.txt`, `pom.xml`, `package.json`, etc.) and produce a valid SBOM?
* Any recommendations for handling both GitHub and Azure DevOps sources in a unified way?
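Dedicated tools (cyclonedx-py for Python environments, or language-agnostic scanners such as syft and trivy, which can emit CycloneDX JSON) are the usual answer to the first two questions. The target format itself is small enough to sketch, though — below is a hypothetical generator for pinned `requirements.txt` lines, shown only to illustrate the CycloneDX 1.6 JSON shape, not as a replacement for a real SBOM tool:

```python
import json
import re
import uuid

def sbom_from_requirements(text: str) -> dict:
    """Minimal CycloneDX 1.6 skeleton from pinned requirements lines.
    Real tools also handle extras, markers, transitive deps and hashes;
    this only demonstrates the output structure."""
    components = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()
        m = re.match(r"^([A-Za-z0-9_.-]+)==([A-Za-z0-9_.+-]+)$", line)
        if not m:
            continue  # skip unpinned or complex specifiers
        name, version = m.groups()
        components.append({
            "type": "library",
            "name": name,
            "version": version,
            "purl": f"pkg:pypi/{name.lower()}@{version}",
        })
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.6",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "components": components,
    }
```

For the unified GitHub/Azure DevOps question, cloning the repo and running a scanner locally (as the post suggests) keeps one code path for both sources.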
https://redd.it/1mpdbnd
@r_devops
Built a tiny GitHub Action to gate LLM outputs in CI (schema/regex/cost, no API keys)
I made a lightweight Action that fails PRs when recorded LLM outputs break contracts.
No live model calls in CI — runs on fixtures.
- Deterministic checks: JSON schema, regex, list/set equality, numeric bounds, file diff
- Snapshots + regression compare
- Cost budget gate
- PR comment + HTML report
Marketplace: https://github.com/marketplace/actions/promptproof-eval
Demo: https://github.com/geminimir/promptproof-demo-project
Sample report: https://geminimir.github.io/promptproof-action/reports/before.html
Blunt feedback welcome: onboarding rough spots? missing checks? is the report clear enough to make it a required check?
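To make the idea concrete, this is roughly the shape of a deterministic fixture check (JSON parse, regex, cost budget) in plain Python. This is not the action's actual API; the field names `output` and `cost_usd` are invented for the sketch:

```python
import json
import re


def check_fixture(record: dict, *, pattern: str, max_cost_usd: float) -> list[str]:
    """Deterministically validate a recorded LLM output; no live model call."""
    failures = []
    # Contract 1: the recorded output must parse as JSON with a 'summary' string.
    try:
        payload = json.loads(record["output"])
    except (KeyError, json.JSONDecodeError):
        return ["output is not valid JSON"]
    if not isinstance(payload.get("summary"), str):
        failures.append("missing 'summary' string field")
    # Contract 2: regex check on the summary text.
    elif not re.search(pattern, payload["summary"]):
        failures.append(f"summary does not match /{pattern}/")
    # Contract 3: cost budget gate on the recorded spend.
    if record.get("cost_usd", 0.0) > max_cost_usd:
        failures.append(f"cost {record['cost_usd']} exceeds budget {max_cost_usd}")
    return failures


if __name__ == "__main__":
    fixture = {"output": json.dumps({"summary": "Deploy succeeded"}), "cost_usd": 0.004}
    # An empty list means the PR gate passes; any entry fails the check.
    print(check_fixture(fixture, pattern=r"Deploy", max_cost_usd=0.01))
```

Because every check runs against a stored fixture, the result is reproducible in CI without API keys, which is what makes it usable as a required status check.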
https://redd.it/1mpefnm
@r_devops
Migration jitters
Currently planning a migration from ROSA to EKS. I went over the AWS Cloud Practitioner fundamentals a while ago, but then got an automation pipeline to handle and was busy with that for months due to several blockers.
I've made a document about what's required, but I feel very out of place because I'm inexperienced with EBA (my team is new to it too, but they have AWS experience I don't).
Are there any tips or advice that could help - apart from practicing kubectl (I started that today)?
https://redd.it/1mpg4hg
@r_devops
Is migrating to Jenkins a good idea now?
My company has a new requirement to move away from GitHub and self-host our code on-premise
GitLab license isn't in the budget, so we're looking for other self-hosted CI/CD solutions
After a lot of research, to my surprise, Jenkins seems to fit all our requirements: Kubernetes runners, Configuration as Code, and declarative pipelines
After spinning up a playground with the latest version, I was also surprised by the modern UI (kind of)
I've never worked with Jenkins before, but I've been given enough time to learn the ropes and set everything up using best practices
So, my questions are:
- Do you have any success stories with a modern Jenkins setup? Are you genuinely happy with it?
- Any tips or gotchas I should be aware of to make this implementation a success and not a plugin-mess?
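For readers unfamiliar with the features named above, a hedged sketch of what a modern declarative pipeline on Kubernetes agents tends to look like (the pod template path, make targets, and report glob are invented for illustration, and the pod template itself would typically live alongside JCasC config):

```groovy
pipeline {
    agent {
        kubernetes {
            // Ephemeral agent pod, defined in a YAML file in the repo
            yamlFile 'ci/agent-pod.yaml'
        }
    }
    options {
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
    post {
        always {
            junit testResults: 'reports/**/*.xml', allowEmptyResults: true
        }
    }
}
```

Keeping the controller itself stateless (JCasC for global config, pipelines in-repo like this, agents as throwaway pods) is the usual way to avoid the plugin-mess failure mode.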
https://redd.it/1mpimma
@r_devops
Automating cold-data cleanup in RDS to avoid replica bloat and reduce cost
A client of ours was running an AWS RDS MySQL environment that had grown to 1.5 TB with 78 replicas. The strange part was that 99% of the data was years old and never queried.
They had tried Percona’s pt-archiver before, but it became too complex to run across hundreds of tables and databases when they did not even know each table’s real access pattern.
1. Query pattern analysis – We used slow query logs and performance schema to map which datasets were actually being used, making sure we only touched data that had been cold for months or years.
2. Safe archival – Truly cold datasets were moved to S3 in compressed form to meet compliance requirements and keep them retrievable if ever needed.
3. Targeted purging – After archival, data was dropped only when automated dependency checks confirmed no active queries, joins, or application processes relied on it.
4. Index cleanup – Removed unused indexes consuming gigabytes of storage, reducing both backup size and query planning overhead.
5. Result impact – Storage dropped from 1.5 TB to 130 GB, replicas fell from 78 to 31, CPU load dropped sharply, and the RDS instance size was safely downgraded.
6. Ongoing prevention – We now run an hourly automated cleanup job that removes small batches of unused data, preventing the database from ever swelling to that size again.
No downtime. No application errors. Just a week of work that saved hundreds of thousands annually and made ongoing operations far easier.
We’re interested in seeing how this type of cleanup performs in different RDS setups; let me know if you’ve tackled something similar, or DM me if you’d like to test it with us.
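The small-batch deletion pattern from step 6 can be sketched in a few lines. SQLite stands in for MySQL here (so the sketch is runnable standalone), and the `events` table and `created_at` column are invented; the point is committing between small batches so replicas and other writers never fall far behind:

```python
import sqlite3


def purge_cold_rows(conn: sqlite3.Connection, cutoff: str, batch_size: int = 1000) -> int:
    """Delete rows older than `cutoff` in small batches, committing between
    batches to keep transactions short and replication lag bounded."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM events WHERE id IN ("
            "  SELECT id FROM events WHERE created_at < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()
        if cur.rowcount == 0:
            break  # no cold rows left
        total += cur.rowcount
    return total


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
    conn.executemany(
        "INSERT INTO events (id, created_at) VALUES (?, ?)",
        [(i, "2019-01-01" if i < 2500 else "2025-06-01") for i in range(3000)],
    )
    print(purge_cold_rows(conn, cutoff="2024-01-01", batch_size=500))
```

Running this as an hourly job (as described above) keeps each batch tiny; on real MySQL you would add a short sleep between batches and watch replica lag before proceeding.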
https://redd.it/1mphn1g
@r_devops
ParrotOS on AWS: quick deployment for pentesting, your setup?
Hey everyone, I recently followed a guide to deploy ParrotOS on AWS: configure the instance, tighten security, and you’re ready for pen-testing or privacy work in just a few minutes.
I’m curious how others approach this:
Do you prefer spinning up ParrotOS (or similar distros) in the cloud vs running locally?
What setup tweaks do you always make, security, performance, tooling?
Any go-to configurations or tips for making this type of deployment smoother or more secure for real-world use?
(Mentioned the guide I used—just in case anyone’s interested: https://medium.com/@techlatest.net/how-to-setup-parrotos-linux-environment-on-aws-amazon-web-services-e38e964b2895)
https://redd.it/1mpkam8
@r_devops