Containers with Azure Functions
Hello, lately I have started a new project that has a few apps hosted on Azure Functions, but not as containers. I want to start deploying the apps as containers in Azure Functions.
The base image is pretty big: the base Azure Functions image for Node is around 2 GB. I used dive to look inside, and I found some unused runtimes installed and some older Azure Functions bundle versions that I can delete.
With that cleanup and the slim variant, I can get the base image down to 1 GB.
I was wondering if you have any tips and tricks for keeping containerized Azure Functions images small.
cheers
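[Editor's note] A common pattern for this is to start from the slim Node tag and prune in a single layer. A hypothetical sketch; the image tag and the extension-bundle path are assumptions, so verify them against your own image (e.g. with dive) before deleting anything:

```dockerfile
# Hypothetical tag: check mcr.microsoft.com for the slim tag matching your runtime/Node versions.
FROM mcr.microsoft.com/azure-functions/node:4-node20-slim

# Prune in a single RUN so the deletions actually shrink the layer.
# The old extension-bundle versions below are an assumption; inspect your image first.
RUN rm -rf /FuncExtensionBundles/Microsoft.Azure.Functions.ExtensionBundle/1.* \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV AzureWebJobsScriptRoot=/home/site/wwwroot

# Copy the manifest first so the dependency layer caches, and skip dev dependencies.
COPY package*.json /home/site/wwwroot/
RUN cd /home/site/wwwroot && npm ci --omit=dev
COPY . /home/site/wwwroot
```

Installing production dependencies only (`npm ci --omit=dev`) and keeping deletions in the same RUN as any installs are the two changes that usually matter most for final image size.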
https://redd.it/1kk4zb8
@r_devops
15 Years of DevOps, yet manual schema migrations still a thing
Hey All,
My name is Rotem, co-founder of atlasgo.io
One of the most surprising things I have learned since starting the company 4 years ago is that manual database schema changes are still a thing. Way more common than I had thought.
We commonly see this in customer calls: the team has CI/CD pipelines for app delivery, maybe even IaC for cloud stuff, but for the database, devs/DBAs still connect directly to prod to apply changes.
This came as a surprise to me since tools for automating schema changes have existed since at least 2006.
Our DevRel Engineer u/noarogo published a piece about it today:
https://atlasgo.io/blog/2025/05/11/auto-vs-manual
What's your experience? Do you still see this practice?
If you see it, what's your explanation for this gap?
https://redd.it/1kk8x91
@r_devops
Getting an env file to a DigitalOcean droplet
Hello, I currently have a Next.js app that I'm deploying to DigitalOcean droplets using GitHub Actions, but I'm kind of confused about how to get my .env file to the droplet. Would I manually add it to the cloned repo on the droplet, or scp my env file to the droplet, or some other way? I'm a bit new to this.
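[Editor's note] One common approach is to keep the file contents in a GitHub Actions secret and write it out during deploy, rather than committing it to the repo. A hypothetical workflow step; the secret names and the target path are placeholders:

```yaml
# Assumes secrets ENV_FILE (the full .env contents), DROPLET_SSH_KEY,
# DROPLET_USER and DROPLET_HOST are defined in the repository settings.
- name: Copy .env to droplet
  run: |
    echo "${{ secrets.ENV_FILE }}" > .env
    echo "${{ secrets.DROPLET_SSH_KEY }}" > key && chmod 600 key
    scp -i key -o StrictHostKeyChecking=no .env \
      ${{ secrets.DROPLET_USER }}@${{ secrets.DROPLET_HOST }}:/srv/app/.env
```

This keeps secrets out of git history; never commit the .env file itself, even to a private repo.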
https://redd.it/1kk7sw8
@r_devops
What tool are you using for easy provisioning?
Hi, I am experimenting with a self-managed Kubernetes cluster. Kubernetes is cool and all, but the underlying servers the pods run on still need to be provisioned and managed. I understand that Terraform can create/manage infra resources such as networks, storage, VMs, etc., but for provisioning, other tools such as Ansible are used. I am looking for an easy-to-use tool, preferably with a web UI, to provision my servers.
https://redd.it/1kkaqdq
@r_devops
Starting DevOps role, but no prior experience
I’m starting as a DevOps Azure Team Lead. I have a long history leading agile teams, but no real hands-on experience in DevOps. I’m pretty sure I’m gonna be just fine, but I would like to thrive.
Hence my question: what’s the fast track to learning the DevOps craft? With so many tutorials out there, I’d like to know which one to choose or start from. Any recommendations?
https://redd.it/1kkcy4y
@r_devops
I made a DevOps tool in Golang
I made a DevOps tool with Golang. It's like Ansible, but I feel it's better as far as speed and customization. I can't say much more than what's already in the README file. I just thought it might be of use to someone, or maybe there's some feedback on something I can't see. If anyone gets time, let me know what you think.
https://github.com/mephistolist/godev
https://redd.it/1kkdkhj
@r_devops
Starting my selfhosting journey - k8s or docker?
Hello all, I feel ready to start practicing and suffering with my homelab in order to improve my skills on common DevOps topics and to try a bunch of r/selfhosted projects. Now I'm simply wondering: Portainer or Kubernetes? I have a single mini-PC node running Ubuntu Server + Docker/Podman + minikube. Initially there are no network drives; everything will reside on the machine's local disk, so I need a pretty easy setup, and I don't care much about fault tolerance and disaster recovery.
Comparing the two architectures, I would say the Kubernetes one is more reliable and more interesting, but sometimes Helm charts aren't updated, or they are a bit messy to investigate or manage. On the other hand, storage and networking would probably be much easier (a single ingress with multiple paths, one for each service).
Running everything on pure Docker with a management system like Portainer would probably be easier to manage, but I don't know if that would really help me grow my skills, or if the pure Docker approach is a little bit "aged".
What's your take on this? Any suggestions or insights?
Many thanks!
https://redd.it/1kka335
@r_devops
Simple way to analyse a .dll file
Hey,
We need a task in a pipeline with a script which extracts the properties from the DLL file and checks whether the file has a signature. Do you have any examples in PowerShell or something else?
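[Editor's note] Assuming this is a Windows .dll, PowerShell can read both the embedded version properties and the Authenticode signature with built-in cmdlets. A rough sketch; the path is a placeholder:

```powershell
$path = "C:\build\MyLibrary.dll"   # placeholder path

# File/product version properties embedded in the PE version resource.
$info = (Get-Item $path).VersionInfo
$info | Select-Object FileVersion, ProductVersion, CompanyName

# Authenticode signature check (Status is Valid, NotSigned, HashMismatch, ...).
$sig = Get-AuthenticodeSignature -FilePath $path
if ($sig.Status -eq 'Valid') {
    Write-Output "Signed by $($sig.SignerCertificate.Subject)"
} else {
    Write-Output "Signature status: $($sig.Status)"
}
```

Both cmdlets are part of stock Windows PowerShell, so this runs in an Azure Pipelines `powershell` task without extra modules.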
https://redd.it/1kkbzmd
@r_devops
Simple, self-hosted GitHub Actions runners
I needed more RAM for my GitHub Actions runners and I couldn't really find an offering that I could link to a private repository (they all need organization accounts?).
Anyways, I have a pretty powerful desktop for dev work already so I figured why not put the runner on my local desktop. It turns out the GHA runner is not containerized by default and, more importantly, it is stateful so you have to rewrite the way your actions work to get them to play nicely with the default self-hosted configuration.
To make it easier, I made a Docker image that deploys a self-hosted runner very similar to the GitHub one, check it out! https://github.com/kevmo314/docker-gha-runner
https://redd.it/1kkj6p7
@r_devops
Perplexity for DevOps
Hey !
We’ve been building Anyshift, the Perplexity for DevOps. It answers questions like:
* “Are we deployed across multiple regions or AZs?”
* “What changed in my DynamoDB prod between April 8–11?”
* “Which accounts have stale or unused access keys?”
and gives detailed answers with verified sources (AWS URLs, git commits, etc.)
Behind the scenes, it queries a live graph of your code and cloud with no hallucinations, just real answers backed by real data from:
* GitHub (Terraform & IaC)
* Live AWS resources
* Datadog
Why we built it:
Terraform plans are often opaque. One small change (like a CIDR block or SG rule) can trigger unexpected consequences. We wanted visibility into those dependencies, including unmanaged or clickops resources.
Under the hood :
* We use Neo4j graph updated via event-driven pipelines
* We provide factual answers with links to source data
* It can be used as a Slackbot or web UI
The setup takes ~5 mins (GitHub app or AWS read-only on a dev account to test it quickly).
And it's free for teams up to 3 users :) https://app.anyshift.io
Would love your feedback — especially around Terraform drift, shadow IT, or blast radius use cases.
Thanks a lot :)))
Roxane
https://redd.it/1kknbpf
@r_devops
IaCConf: the first community-driven virtual conference focused entirely on infrastructure as code
If you're working with Terraform, OpenTofu, Crossplane, or others, check out IaCConf.
IaCConf is 100% online and free, and it starts at 11:00 am EDT, May 15, 2025.
The conference is for every skill level, and here are some of the topics that will be covered:
Getting started with IaC
Managing IaC at scale
IaC + Platform Engineering
AI in IaC
Full agenda and free registration on the site.
https://redd.it/1kkndmb
@r_devops
Kubernetes Scaling: Replication Controller vs ReplicaSet vs Deployment - What’s the Difference?
Hey folks! Before diving into my latest post on Horizontal vs Vertical Pod Autoscaling (HPA vs VPA), I’d actually recommend brushing up on the foundations of scaling in Kubernetes.
I published a beginner-friendly guide that breaks down the evolution of Kubernetes controllers, from ReplicationControllers to ReplicaSets and finally Deployments, all with YAML examples and practical context.
Thought of sharing a TL;DR version here:
ReplicationController (RC):
1. Ensures a fixed number of pods are running.
2. Legacy component - simple, but limited.
ReplicaSet (RS):
1. Replaces RC with better label selectors.
2. Rarely used standalone; mostly managed by Deployments.
Deployment:
1. Manages ReplicaSets for you.
2. Supports rolling updates, rollbacks, and autoscaling.
3. The go-to method for real-world app management in K8s.
Each step brings more power and flexibility, a must-know before you explore HPA and VPA.
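[Editor's note] To make the TL;DR concrete, here's a minimal Deployment manifest; the name and image are placeholders. The Deployment creates and manages the ReplicaSet for you, and the rolling-update settings are what RC/RS alone never gave you:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the managed ReplicaSet keeps 3 pods running
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate       # rollouts and rollbacks come for free
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27     # placeholder image
        ports:
        - containerPort: 80
```

Changing the image and re-applying triggers a rolling update; `kubectl rollout undo deployment/web` rolls it back.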
Would love to hear your thoughts, what part confused you the most when you were learning this, or what finally made it click? Drop a comment, and let’s chat!
Check out the full article with YAML snippets and key commands here:
First, Why You Should Skip RC and Start with Deployments in Kubernetes
Next, Want to Optimize Kubernetes Performance? Here’s How HPA & VPA Help
If you found it helpful, don’t forget to follow me on Medium and enable email notifications to stay in the loop. We wrapped up a solid 30Blogs in the #60Days60Blogs ReadList series of Docker and K8S and there's so much more coming your way.
And hey, if you enjoyed the read, leave a Clap (or 50) in Medium to show some love!
https://redd.it/1kkp3h8
@r_devops
📌 Case Study Changing GitHub Repository in AWS Amplify — Step-by-Step Guide
Hey folks,
I recently ran into a situation at work where I needed to change the GitHub repository connected to an existing AWS Amplify app. Unfortunately, there's no native UI support for this, and documentation is scattered. So I documented the exact steps I followed, including CLI commands and permission flow.
💡 Key Highlights:
Temporary app creation to trigger GitHub auth
GitHub App permission scoping
Using AWS CLI to update repository link
Final reconnection through Amplify Console
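[Editor's note] For the CLI step, the relevant call is `aws amplify update-app`, which accepts a new repository and a token; the app ID and repo URL below are placeholders, and you should check the Amplify CLI reference for whether your GitHub connection type wants `--access-token` or `--oauth-token`:

```shell
# Placeholders: substitute your own app ID, repository URL and token.
aws amplify update-app \
  --app-id d1a2b3c4example \
  --repository https://github.com/my-org/new-repo \
  --access-token "$GITHUB_TOKEN"
```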
🧠 If you're hitting a wall trying to rewire Amplify to a different repo without breaking your pipeline, this might save you time.
🔗 Full walkthrough with screenshots (Notion):
https://www.notion.so/Case-Study-Changing-GitHub-Repository-in-AWS-Amplify-A-Step-by-Step-Guide-1f18ee8a4d46803884f7cb50b8e8c35d
Would love feedback or to hear how others have approached this!
https://redd.it/1kkpceu
@r_devops
Need suggestions
I am an IT professional with 5+ years of experience. I first worked as a Windows admin, then as a cloud (Azure) admin, and I want to shift my career towards DevOps. How can I get hands-on experience with YAML-based pipelines or IaC? My project in the company does not use them; even containerization is not there.
Also, I have been stuck in the same company for 5 years and want to change. Please help.
https://redd.it/1kkn0co
@r_devops
What is usually done in Kubernetes when deploying a Python app (FastAPI)?
Hi everyone,
I'm coming from the Spring Boot world. There, we typically deploy to Kubernetes using a UBI-based Docker image. The Spring Boot app is a self-contained .jar file that runs inside the container, and deployment to a Kubernetes pod is straightforward.
Now I'm working with a FastAPI-based Python server, and I’d like to deploy it as a self-contained app in a Docker image.
What’s the standard approach in the Python world?
Is it considered good practice to make the FastAPI app self-contained in the image?
What should I do or configure for that?
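[Editor's note] The usual pattern is close to the Spring Boot one: bake the app and its pinned dependencies into the image and run an ASGI server (uvicorn) as the entrypoint. A minimal sketch, assuming the FastAPI app object lives in main.py and dependencies are pinned in requirements.txt:

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first so this layer caches across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as a non-root user, a common hardening step for Kubernetes workloads.
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Yes, self-contained images are considered good practice here; there is no virtualenv needed inside the container, since the image itself provides the isolation.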
https://redd.it/1kkswy6
@r_devops
Learning and Practice: iximiuz Labs vs Sad Servers?
I am keen to learn and practice technologies, particularly Linux troubleshooting, Docker, Kubernetes, Terraform, etc. I came across two websites with good collections: iximiuz Labs and Sad Servers.
But I need to choose one of these to get a paid subscription. Which one should I go with?
https://redd.it/1kktlz8
@r_devops
Is 2025 CKA harder than it was before? (Rant)
I waited to post this for a few months.
For context, I started my Kubernetes journey fresh in September 2024, having minimal experience (only with docker and docker-compose, but no orchestration, but I have sys admin/devops experience). I went through whole KodeKloud course, I did all 70+ killercoda scenarios and scored 80% on my killer.sh attempt. I probably spent 120+ hours studying and practicing for this exam.
I took the updated exam on the 1st of March 2025, so I knew about the updates and went over the additional material as well. I took multiple KodeKloud mock exams, with mixed results. But I read a lot about how killer.sh is much harder than the real CKA exam, so when I scored 80% on my practice attempt I was pretty confident going into the exam (maybe I was just lucky that the killer.sh questions suited me).
When I started the exam, oh boy: flagged the 1st, flagged the 2nd, flagged the 3rd... I think the first question I actually started solving was the 7th or 8th. I could have written down exactly what I struggled with, but it felt much harder than killer.sh. I think I can navigate the K8s docs pretty well, but I had some Gateway API questions where the docs felt nonexistent. And why require Helm but not allow the Helm docs? I remember I had to install and configure a CNI, but why wouldn't you allow the docs/GitHub for it? Does every Certified Kubernetes Administrator know this off the top of their head, even when there's an update? I know there were some things, such as resource limits on the nodes, that I could have studied better.
So after 2 hours, I scored 45% (probably better than scoring 60-65%, as I would be more angry at myself but also more confident for the retake).
So I wanted to ask those who did the exam before and retook it after the February update: was the exam harder? Or am I just stupid?
By the end of this month I want to start revising again and do the retake in July/August. Do you guys have any resources other than KodeKloud, killercoda, and killer.sh? I'm buying a Hetzner VPS and going to host something in K8s to get more real-life experience.
End of my rant.
Edit: I'm not time traveller, fixed
https://redd.it/1kkv3ua
@r_devops
MacBook or Mac Mini for DevOps?
Basically the title says it. I'm currently working as a DevOps Engineer and looking for a laptop/desktop, something stable and smooth for personal use. I want to know whether going for a MacBook Air or Mac Mini is worth it and long-lasting. I'd also appreciate it if anyone has suggestions other than these, with specs :)
https://redd.it/1kkycog
@r_devops
The first time I ran terraform destroy in the wrong workspace… was also the last 😅
Early Terraform days were rough. I didn’t really understand workspaces, so everything lived in default. One day, I switched projects and, thinking I was being “clean,” I ran terraform destroy.
Turns out I was still in the shared dev workspace. Goodbye, networking. Goodbye, EC2. Goodbye, 2 hours of my life restoring what I’d nuked.
Now I’m strict about:
Naming workspaces clearly
Adding safeguards in CLI scripts
Using terraform plan like it’s gospel
And never trusting myself at 5 PM on a Friday
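[Editor's note] The "safeguards in CLI scripts" point can be as small as a wrapper that refuses to destroy outside an explicitly allowed workspace. A hypothetical sketch (the function name and the ALLOWED_WORKSPACE convention are made up for illustration):

```shell
# Hypothetical guard: wrap `terraform destroy` so it only runs when the
# current workspace matches ALLOWED_WORKSPACE (default "sandbox").
safe_destroy() {
  allowed="${ALLOWED_WORKSPACE:-sandbox}"
  current="$(terraform workspace show)" || return 1
  if [ "$current" != "$allowed" ]; then
    echo "Refusing destroy: current workspace is '$current', expected '$allowed'" >&2
    return 1
  fi
  terraform destroy "$@"
}
```

Source it in your shell profile and use safe_destroy instead of terraform destroy; a typed confirmation prompt ("type the workspace name to proceed") makes a good second gate.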
Funny how one command can teach you the entire philosophy of infrastructure discipline.
Anyone else learned Terraform the hard way?
https://redd.it/1kkzo2h
@r_devops
Discussion: Model level scaling for triton inference server
Hey folks, hope you’re all doing great!
I ran into an interesting scaling challenge today and wanted to get some thoughts. We’re currently running an ASG (g5.xlarge) setup hosting Triton Inference Server, using S3 as the model repository.
The issue is that when we want to scale up a specific model (due to increased load), we end up scaling the entire ASG, even though the demand is only for that one model. Obviously, that’s not very efficient.
So I’m exploring whether it’s feasible to move this setup to Kubernetes and use KEDA (Kubernetes Event-driven Autoscaling) to autoscale based on Triton server metrics — ideally in a way that allows scaling at a model level instead of scaling the whole deployment.
Has anyone here tried something similar with KEDA + Triton? Is there a way to tap into per-model metrics exposed by Triton (maybe via Prometheus) and use that as a KEDA trigger?
Appreciate any input or guidance!
https://redd.it/1kl1ctu
@r_devops
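On the per-model KEDA question: Triton's Prometheus metrics (e.g. the `nv_inference_request_success` counter) carry a `model` label, but Kubernetes scales Deployments, not models, so model-level scaling usually means splitting the repository into one Deployment per hot model, each with its own ScaledObject driven by a model-scoped PromQL query. A hedged, untested sketch; all names, namespaces, and the threshold are placeholders to adapt:

```yaml
# Sketch (untested): KEDA ScaledObject scaling one per-model Triton Deployment
# from Triton's Prometheus metrics. Assumes one Deployment per model and that
# Triton's /metrics endpoint is already scraped by Prometheus.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: triton-bert-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: triton-bert               # Deployment serving only this model
  minReplicaCount: 1
  maxReplicaCount: 8
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # adjust to your cluster
        # Triton labels inference metrics per model, so the query can target one:
        query: sum(rate(nv_inference_request_success{model="bert"}[2m]))
        threshold: "50"             # requests/s per replica; tune against your latency SLO
```

Apply one such object per model; KEDA then creates an HPA that scales only that model's Deployment on the queried value, leaving the other models untouched.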
Help/suggestions needed USA - DevOps Engineer Interview
Hello All,
I recently applied to a company.
Below is the job description. I am familiar with many of the concepts, but somehow I am worried about the interview. I got a screening call and am awaiting a response.
Can anyone please help with suggestions on where to focus, expected questions, and any other tips?
Thanks in advance.
**Required Skills:**
* **3+ years work experience in a DevOps or similar role**
* **Fluency in one or more scripting languages such as Python or Ruby**
* **In-depth, hands-on experience with Linux, networking, server, and cloud architectures**
* **Experience in configuration management technologies such as Chef, Puppet or Ansible**
* **Experience with AWS or another cloud PaaS provider**
* **Understanding of fundamental network technologies like DNS, Load Balancing, SSL, TCP/IP, SQL, HTTP**
* **Solid understanding of configuration, deployment, management and maintenance of large cloud-hosted systems; including auto-scaling, monitoring, performance tuning, troubleshooting, and disaster recovery**
* **Proficiency with source control, continuous integration, and testing pipelines**
* **Championing a culture and work environment that promotes diversity and inclusion**
* **Participate in the team’s on-call rotation to address complex problems in real-time and keep services operational and highly available**
**Preferred Skills:**
* **Experience with Containers and orchestration services like Kubernetes, Docker etc.**
* **Familiarity with Go**
* **Understanding of cloud security and best practices**
https://redd.it/1kl3lgg
@r_devops