Reddit DevOps
Azure API - too many requests issue.

I am trying to fetch the cost for each subscription and keep only the ones under a certain limit, like under $5. Could you guys please take a look and suggest how I can optimize this? I have already fetched the subscription IDs into a separate txt file and am importing them in this script. I took some help from Copilot as well.

import requests
import pandas as pd
import time
import random
import ssl
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from datetime import datetime

# Azure credentials
TENANT_ID = "x"
CLIENT_ID = "x"
CLIENT_SECRET = "x"

# File containing subscription IDs
SUBSCRIPTIONS_FILE = "subscriptions.txt"

# Exclude specific subscriptions (defined but not applied below yet)
EXCLUDED_NAMES = ["visual studio", "suscripción de visual studio", "mpn", "pay-as-you-go"]

# Azure endpoints
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

# Force TLS 1.2+ to prevent SSL errors
# NOTE: this context is created but never attached to the session below
ssl_context = ssl.create_default_context()
ssl_context.set_ciphers('DEFAULT:@SECLEVEL=1')

# Configure requests session with retries
session = requests.Session()
retries = Retry(
    total=3,
    backoff_factor=5,  # increase delay between retries
    status_forcelist=[429, 500, 502, 503, 504]  # retry on rate limits and server errors
)
session.mount("https://", HTTPAdapter(max_retries=retries))


# Get an access token
def get_access_token():
    data = {
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://management.azure.com/.default"
    }
    response = session.post(TOKEN_URL, data=data)
    response.raise_for_status()
    return response.json()["access_token"]


# Read subscription IDs from file
def read_subscription_ids():
    with open(SUBSCRIPTIONS_FILE, "r") as file:
        return [line.strip() for line in file if line.strip()]


# Get cost details for multiple subscriptions in batches
def get_costs_for_subscriptions(subscription_ids, token):
    results = []
    failed_subscriptions = []

    BATCH_SIZE = 5  # batch size to avoid Azure rate limits
    for i in range(0, len(subscription_ids), BATCH_SIZE):
        batch = subscription_ids[i:i + BATCH_SIZE]

        for sub_id in batch:
            cost_url = f"https://management.azure.com/subscriptions/{sub_id}/providers/Microsoft.CostManagement/query?api-version=2023-03-01"
            headers = {"Authorization": f"Bearer {token}"}

            cost_query = {
                "type": "ActualCost",
                "timeframe": "Custom",
                "timePeriod": {
                    "from": "2025-02-01T00:00:00Z",
                    "to": "2025-02-28T23:59:59Z"
                },
                "dataset": {
                    "granularity": "None",
                    "aggregation": {
                        "totalCost": {
                            "name": "PreTaxCost",
                            "function": "Sum"
                        }
                    }
                }
            }

            for attempt in range(3):  # retry at most 3 times
                try:
                    response = session.post(cost_url, headers=headers, json=cost_query)

                    if response.status_code == 429:
                        wait = 5 ** attempt + random.uniform(1, 3)  # exponential backoff
                        print(f"🔁 429 Too Many Requests for {sub_id}. Retrying in {wait:.2f}s...")
                        time.sleep(wait)
                        continue  # retry request
                    elif response.status_code == 400:
                        print(f"400 Bad Request for {sub_id}. Skipping...")
                        failed_subscriptions.append({"Subscription ID": sub_id,
                                                     "Error": "400 Bad Request"})
                        break  # stop retrying on 400 errors

                    response.raise_for_status()
                    data = response.json()
                    rows = data.get("properties", {}).get("rows", [])

                    if rows:
                        cost = rows[0][0]
                        if cost < 5:
                            print(f"{sub_id} has low spend: ${cost}")
                            results.append({"Subscription ID": sub_id, "Monthly Spend ($)": cost})
                    break  # exit retry loop if successful
                except requests.exceptions.SSLError as e:
                    print(f"⚠️ SSL error on {sub_id}: {e}. Retrying in 5s...")
                    time.sleep(5)
                except requests.exceptions.RequestException as e:
                    print(f"Failed to fetch cost for {sub_id}: {e}")
                    failed_subscriptions.append({"Subscription ID": sub_id, "Error": str(e)})
                    break  # stop retrying

            time.sleep(2)  # slow down requests to prevent rate limiting

    return results, failed_subscriptions


# Main execution
if __name__ == "__main__":
    print("🔄 Fetching Azure costs for February (subscriptions under $5)...")

    token = get_access_token()
    subscriptions = read_subscription_ids()

    results, failed_subscriptions = get_costs_for_subscriptions(subscriptions, token)

    # Export results to Excel
    if results:
        df = pd.DataFrame(results)
        filename = f"low_cost_subscriptions_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
        df.to_excel(filename, index=False)
        print(f"\nExported low-cost subscriptions to: {filename}")

    if failed_subscriptions:
        df_fail = pd.DataFrame(failed_subscriptions)
        fail_filename = f"failed_subscriptions_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
        df_fail.to_excel(fail_filename, index=False)
        print(f"\n⚠️ Exported failed subscriptions to: {fail_filename}")
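One common optimization for the 429 problem: Azure's throttling responses generally include a `Retry-After` header, so you can sleep exactly as long as the service asks instead of guessing a backoff. A minimal sketch, not tied to the script above; `send` is a hypothetical stand-in for whatever makes the cost-query request:

```python
import random
import time


def request_with_retry(send, max_attempts=5):
    """Retry `send()` on 429, honoring the Retry-After header when present.

    `send` is any zero-argument callable returning a requests-style response
    (a hypothetical stand-in for the cost-query POST).
    """
    for attempt in range(max_attempts):
        response = send()
        if response.status_code != 429:
            return response
        # Azure usually sends Retry-After (in seconds) with 429s; fall back
        # to exponential backoff with jitter if the header is missing.
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else 2 ** attempt + random.uniform(0, 1)
        time.sleep(wait)
    return response  # last response after exhausting attempts
```

This keeps the wait as short as the service allows, rather than always waiting a fixed multiple of 5 seconds.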





https://redd.it/1jgh69t
@r_devops
Anyone build their own personal CI/CD pipeline before?

Hello fellow devops engineers, has anyone ever tried to develop a basic self-hosted CI/CD pipeline before?

https://redd.it/1jgjpmw
@r_devops
I don't know where to get started

I'm a mid-level DevOps engineer with average Java backend experience, and I've just been assigned to a .NET project at my new company. Since my background is in Java, I honestly have no idea what's going on. The project's documentation isn't clear, and even though my teammates might help, I don’t want to come across as someone who needs to be spoon-fed, especially since I'm new to the team. They gave me a high-level overview of the project, but I'm still confused—I don’t even know which file to build or how to run things locally. Any advice?

https://redd.it/1jgh1xt
@r_devops
Is there a better way to build react production projects as a mono repo?

An interesting repo landed in my lap today; it is not meant for a containerized solution but for something native.

The repo is just a bunch of really small, plugin-ish React projects, all configured with Vite. There are 20 such small plugins in total, and the final artifact to generate is all of the projects' production-ready distribution dirs bundled as a final tarball.

CI/CD: GitLab CI, pushing the generated artifacts to Artifactory.

Repo structure is as follows:

repo_root/
  plugins/
    example-1-plugin/
    ...
    example-20-plugin/


I made a simple Makefile

PLUGINS := example-1 example-2 ... example-20

all: $(PLUGINS)

$(PLUGINS):
	npm install --prefix=plugins/$@-plugin/
	npm run build --prefix=plugins/$@-plugin/


This will recursively build the projects, with the caveat that it keeps installing vite locally for each and every plugin.

To avoid redundantly pulling vite every time, I used npm link on the already-installed node_modules to symlink the existing vite / vite-react-swc / tailwind packages.

$(PLUGINS):
	npm install --prefix=plugins/$@-plugin/ && \
	npm link --prefix=plugins/$@-plugin && \
	npm link --prefix=plugins/$@-plugin vite vite-react-swc && \
	npm run build --prefix=plugins/$@-plugin/


which reduced the build times for me.

Granted, this is not by a long shot a good repo structure, and I couldn't really call it a monorepo either, but this is what was handed to me to work with and it got the job done.

Any recommendations or comments on things I could improve, watch out for, or refactor when working with such an npm/node scenario?
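If restructuring is on the table, npm's built-in workspaces (npm 7+) are the usual answer to this exact setup: one root package.json declares the plugins as workspaces, shared devDependencies like vite are hoisted into a single root node_modules, and one install serves all plugins. A sketch, assuming each plugin keeps its own package.json:

```json
{
  "name": "repo-root",
  "private": true,
  "workspaces": ["plugins/*"]
}
```

With that in place, a single `npm install` at the repo root replaces the 20 per-plugin installs, and `npm run build --workspaces` builds every plugin, without the npm link dance.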

https://redd.it/1jgnn7u
@r_devops
GitHub Actions Supply Chain Attack: A Targeted Attack on Coinbase Expanded to the Widespread tj-actions/changed-files Incident

The original compromise of the tj-actions/changed-files GitHub Action reported last week was initially intended to target Coinbase specifically. After Coinbase mitigated it, the attacker launched the widespread attack.
https://unit42.paloaltonetworks.com/github-actions-supply-chain-attack/

https://redd.it/1jgob6a
@r_devops
Got a new role in DevOps but need advice since my background is sysadmin

Just received an offer for a full-time DevOps engineer role, but my background is in Linux/sysadmin work for the past 4 years. I will say that I was very stagnant in my previous position: instead of learning and developing, it was constant firefighting, and due to the unstable nature of the job market I was reluctant to look for a new job.

A recruiter reached out to me with this opportunity, and even though my experience was limited (working knowledge of Jenkins/Datadog, but nothing related to Docker or AWS), I went ahead anyway and impressed them enough in the interview process that they gave me an offer. I really want to succeed in this position and just need advice on where to upskill and which new tools to focus on so I can hit the ground running and keep up.

https://redd.it/1jgpd17
@r_devops
No-code platform for easy editing, responsiveness, and Figma integration

Hey everyone! How’s it going?

I’m a UX Designer, and I’m facing a problem that I believe you might be able to help me with. I design interfaces for an education network, and since we have multiple products, each with its own website, our development team struggled to implement basic updates and improvements. Simple requests, like changing images, text, or buttons, would take days to be completed.

Because of this, management decided to move our websites to a no-code or more user-friendly platform (I was against this decision) and chose WIX as the solution. The issue is that WIX has terrible integration with Figma. Every time I try to import a project, it breaks and comes with a lot of bugs. My only option is to design in Figma and then manually rebuild everything on the platform, which creates a huge amount of extra work. On top of that, the projects become heavy, and I have to fine-tune every little detail using prebuilt elements and templates, which significantly limits customization.

Another major issue is mobile responsiveness. WIX requires manual adjustments on almost every screen, and even then, the final result is far from optimized, which negatively impacts the user experience. Additionally, the platform is incredibly slow for basic tasks like aligning elements and adjusting spacing, making the editing process even more frustrating.

Do you know of any platform similar to WIX that integrates well with Figma, is easy to edit for someone with little coding knowledge, and offers better mobile responsiveness?

https://redd.it/1jgrabw
@r_devops
"devops"->"DevOps" on Linkedin gave 100,000+ more results

I've been looking for a new job for a few weeks now and decided to look for devops roles on LinkedIn. I typed in "devops" and got like a few thousand results... felt pretty down.

I've been working with the LinkedIn API, and by complete accident I capitalized it, "devops"->"DevOps", and HOLY SHIT - 110,000+ JOBS APPEARED OUT OF NOWHERE! 🤯
This piece of crap website is case-sensitive; no wonder I saw so few results in the UI.

https://ibb.co/9BvWDPK vs. https://ibb.co/fYdLJWgC
Anyway, my side project is a devops market analysis tool. I made a UI for it and the results there match. I've got a few other stats too, and I'm going to keep it updated: prepare.sh/trends/devops

https://redd.it/1jgx2mt
@r_devops
Experience with AWS reseller DoIt

People who migrated their AWS organization accounts to the FinOps service DoiT: what was your experience of switching your org over?

Did any of your AWS services break as a consequence of the migration?

In particular, did any existing SSO solution break? (I heard this has happened to some customers.)

https://redd.it/1jgwvbb
@r_devops
I built Envs.AI - a free tool to manage environment variables across your tech stack

Hey everyone,

I wanted to share a tool I built to solve a common headache for developers and DevOps teams - managing environment variables across different environments and platforms.

**What is Envs.AI?** It's a free SaaS that provides a central, secure place to store all your environment variables. You can easily integrate it with Jenkins, Python projects, and other parts of your tech stack.

**Why I built it:** I got tired of scattered .env files, sharing secrets through Slack, and the inevitable "works on my machine" problems that come from mismatched environment setups.

**Features:**

* Store all env variables in one secure location
* Simple integration with CI/CD pipelines
* API access for different languages and frameworks
* Team collaboration tools
* 100% free to use

Would love to hear your thoughts, feedback, or feature requests! What pain points do you have with managing env variables?

[Envs.AI](https://Envs.AI)

https://redd.it/1jgzy5h
@r_devops
Built a fun MERN Chat App on EKS! Roast My DevOps Setup!

Just finished a fun project: a MERN chat app on EKS, fully automated with Terraform & GitLab CI/CD. Think "chat roulette" but for my sanity. 😅

Diagram: https://imgur.com/a/CkP0VBI

My Stack:

Infra: Terraform (S3 state, obvs)
Net: Fancy VPC with all the subnets & gateways.
K8s: EKS + Helm Charts (rollbacks ftw!)
CI/CD: GitLab, baby! (Docker, ECR, deploy!)
Load Balancer: NLB + AWS LB Controller.
Logging: Not in this project yet

I'm eager to learn from your experiences and insights! Thanks in advance for your feedback :)

https://redd.it/1jh2egn
@r_devops
Why is my backend app running slow?

It's a pretty simple Java application, a personal project of mine: the frontend (Angular) is hosted on Vercel, the backend (Spring Boot) on Koyeb, and MySQL on Aiven cloud.

Here is the link to my frontend: gadget-shop-frontend.vercel.app/index
and my backend: gadgetshop-backend.koyeb.app/api/all-products

The APIs are: api/all-products, api/all-categories, api/product/1, api/product/2, api/categoty/1, api/categories.

I also have an extra facade layer and DTOs. On localhost it was really fast, but after deploying to the cloud every API call takes almost 7-8 seconds. So if there is someone experienced here, I am asking for help; I am looking for an expert's opinion.

https://redd.it/1jh83h3
@r_devops
How to deploy Helm charts on AKS GoCD cluster?

I created and deployed GoCD on my AKS cluster. I can make a new pipeline with the Pipeline Wizard and point it at a GitHub repo. But what is the way to deploy the Helm charts of my MERN stack?


https://redd.it/1jh7n61
@r_devops
Docker private registry not working

My Docker private registry is running in a registry container on RHEL. All images are being pulled, tagged, and pushed to the registry. On another VM I have a K8s controller running the CRI-O runtime. I made the changes below in /etc/crio/crio.conf.d/10-crio.conf and restarted the crio service on the controller. Still, my K8s controller is pulling images from docker.io. Please suggest!

[crio.image]
signature_policy = "/etc/crio/policy.json"
registries = [
  "192.168.1.12:5000",
]

[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/libexec/crio/crun"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
allowed_annotations = [
  "io.containers.trace-syscall",
]

[crio.runtime.runtimes.runc]
runtime_path = "/usr/libexec/crio/runc"
runtime_root = "/run/runc"
monitor_path = "/usr/libexec/crio/conmon"
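One thing worth checking (a hedged suggestion, not from the original post): recent CRI-O versions ignore the deprecated `registries` key in crio.conf and read registry configuration from /etc/containers/registries.conf instead, and unqualified image names (e.g. `nginx` rather than `192.168.1.12:5000/nginx`) keep resolving to docker.io unless the search list says otherwise. A sketch of that file:

```toml
# /etc/containers/registries.conf
# Registries searched for unqualified image names, in order
unqualified-search-registries = ["192.168.1.12:5000"]

[[registry]]
location = "192.168.1.12:5000"
# Plain-HTTP registry on the LAN
insecure = true
```

Fully qualifying image names in the pod specs avoids the search-list question entirely.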

https://redd.it/1jhaqev
@r_devops
k8s Log Rotation - Best Practice

By default it seems that Kubernetes relies on the kubelet to rotate container log files, and the kubelet can only be configured to rotate based on file size, not time.

I would like to create a solution that rotates logs based on time rather than file size. This comes in especially handy if you want your log files to remain available for a set amount of time, regardless of how much log the producers generate.

Before proceeding any further, I would like to gain a better understanding of the usual best practice for log rotation on k8s. Is it customary to use something other than the kubelet? How does the kubelet behave when you introduce something like logrotate on every node (via a DaemonSet)?
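For reference, the kubelet's rotation knobs live in the KubeletConfiguration file, and they are indeed size- and count-based only (values here are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Rotate when a container log file reaches this size...
containerLogMaxSize: 50Mi
# ...and keep at most this many rotated files per container
containerLogMaxFiles: 5
```

Anything time-based has to come from outside the kubelet, which is why the logrotate-via-DaemonSet question matters: two rotators touching the same files can confuse each other, so the usual pattern is to ship logs off the node (e.g. with a log agent) rather than fight the kubelet's rotation.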

Please share your ideas and experience!

https://redd.it/1jhblqj
@r_devops
Database Performance Tuning Training/Resources

Recently I've had to get more and more involved in database tuning and it occurred to me that I really haven't got a clue what I'm doing.

I mean sure, I can tell that a full table scan is bad and ideally want to avoid key lookups but I feel like I struggle.

I do realize that what I lack is probably experience but I also feel that I lack a grasp on the fundamentals.

So are there any courses or books you recommend and why?

I should say that at work we have a mix of SQL Server and Postgres, heavily skewed towards the former.

https://redd.it/1jhcz6a
@r_devops
anyone here prepare for a citadel interview?

Lateral hire coming in with 8 years of support experience at Goldman Sachs; the position is Site Reliability Engineer at Citadel, with CoderPad rounds coming up. Can someone please recommend what to study? Anyone have experience with this stuff? Should he study LeetCode? Thank you.

https://redd.it/1jhew7b
@r_devops
Doing freelance DevOps

I’m exploring freelancing in DevOps and wondering how viable it is.

Most roles seem full-time or contract-based, but I see people offering services like simple CI/CD setup, cloud automation, and Kubernetes deployments.

How hard is it to find freelance clients in this field?
What are the challenges?


Do you think freelancing in DevOps is sustainable, or is it better suited for consulting?

Would love to hear your insights!



https://redd.it/1jhg9b7
@r_devops
Rsync on TempleOS or ksync

Every time I attempt to rsync my Bible notes on TempleOS, I find that it only syncs half my notes. Anyone try ksyncing with Temple?

https://redd.it/1jhfk4g
@r_devops