Have you built QA/Testing pipelines?
In my experience I've built CI/CD pipelines for Dev, Staging, and Prod environments, but I never really built a pipeline that did automated testing. It makes sense not to have it in the prod pipeline, but I'm curious whether you guys have built such pipelines. If yes, what can you share about it? How did it integrate with your CI/CD overall?
Edit: I only have 1.5 years of experience in DevOps and it was my first full-time job
https://redd.it/1k6ijz2
@r_devops
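A test stage usually slots in between build and deploy in the non-prod pipelines: run the suite, and refuse to promote if it fails. A minimal sketch of such a gate as a shell script (the stage names and the `true`/`echo` commands are placeholders; substitute your stack's real test runner, e.g. `npm test` or `pytest`):

```shell
#!/usr/bin/env bash
# Minimal CI test gate: run each stage, stop the pipeline on the first failure.
set -euo pipefail

run_stage() {
  local name="$1"; shift
  echo "--- stage: $name ---"
  "$@" || { echo "stage '$name' failed; aborting pipeline" >&2; exit 1; }
}

run_stage "unit-tests" true                    # placeholder for your test runner
run_stage "build"      echo "build artifacts here"
echo "all stages passed; safe to promote to Staging"
```

The same script can run locally and in CI, which keeps the gate honest: if it fails on a laptop, it will fail in the pipeline too.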
Tired of setting up the same pipelines? I'm building a CLI that deploys projects with natural language.
Starting a new service usually means hours of boilerplate: creating GitHub repos, setting up tests, Docker images, CD pipelines… What if you could just describe what you want?
I’m building 88tool, a terminal CLI that uses AI agents and LangChain to plan and execute full deployment pipelines.
It supports Go, Python, Java, etc., and connects to GitHub, AWS, Vercel, and more.
It’s not just generating code — it runs it.
Would love to hear from fellow devs who struggle with CI/CD fatigue.
https://datatricks.medium.com/building-in-public-from-terminal-to-deployment-with-ai-driven-ci-cd-fca220a63c58
https://redd.it/1k6kflk
@r_devops
pfSense IPsec tunnel AWS issue
I know I can connect the two VPCs via a peering connection or a transit gateway, but I need to get familiar with pfSense.
Current setup.
vpc1 (172.31.0.0/16)
pfsense1 (172.31.0.100) with public IP address
test1-ec2 (172.31.0.101) no public IP address
vpc2 (10.0.0.0/16)
pfsense2 (10.0.0.100) with public IP address
test2-ec2 (10.0.0.101) no public IP address
1. Set up an IPsec tunnel (IKEv1) between the two pfSense instances. Both phase 1 and phase 2 establish.
2. Both pfSense instances can ping each other (ICMP) from their private IP addresses, so 172.31.0.100 can ping 10.0.0.100 without a problem.
3. The route table attached to the subnet in vpc1 routes traffic for 10.0.0.0/16 to the pfsense1 ENI, while the vpc2 route table routes traffic for 172.31.0.0/16 to the pfsense2 ENI.
4. Configured Firewall -> Rules -> IPsec with source and destination respectively: for pfsense1, source 172.31.0.0/16 to destination 10.0.0.0/16, any port and gateway. Vice versa for pfsense2.
5. Firewall -> NAT -> Outbound set to automatic outbound NAT rule generation (IPsec passthrough included).
6. The security groups attached to both EC2 instances have ICMP enabled from 0.0.0.0/0.
However, test1-ec2 cannot ping test2-ec2 or pfsense2 (and vice versa); `traceroute` gives me nothing but `* * *`.
https://redd.it/1k6k5vg
@r_devops
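One thing the checklist above doesn't mention, and a frequent cause of exactly this symptom pattern (tunnel up, pfSense-to-pfSense ping works, instance-to-instance traffic silently dies), is EC2's source/destination check. An ENI that forwards traffic for IPs other than its own, as each pfSense ENI does here, must have that check disabled or AWS drops the forwarded packets. A small sketch that prints the AWS CLI commands for hypothetical ENI IDs (look up the real ones in the EC2 console, then run the printed commands):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical ENI IDs for the two pfSense instances -- replace with the real
# ones (see `aws ec2 describe-network-interfaces`).
PFSENSE_ENIS=("eni-0aaa1111bbb22222c" "eni-0ddd3333eee44444f")

for eni in "${PFSENSE_ENIS[@]}"; do
  # This prints the command rather than running it; drop the echo to execute.
  echo aws ec2 modify-network-interface-attribute \
    --network-interface-id "$eni" --no-source-dest-check
done
```

If the check was the problem, ping between the test instances should start working as soon as it is disabled on both pfSense ENIs; no pfSense-side change is needed.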
How do you learn a new setup and then impart the knowledge to others in the team?
This is a slightly different kind of question.
We're using EKS with KEDA to run agents in our Azure DevOps pipelines. This entire setup is deployed using Azure DevOps pipelines (executed via Azure agents) along with Helm, ArgoCD, and Terragrunt.
The challenge is that this setup and pipeline were created by someone who is no longer part of the team. I’ve now been assigned the task of understanding how everything works and then sharing that knowledge with the rest of the team. We have created a user story for this task :D
The issue is that none of us has much experience with Kubernetes, Helm, ArgoCD, or Terragrunt. So my question is: how would you approach a situation like this? If someone could break down their process for handling such scenarios, that would be really helpful.
My main concern is figuring out the most effective and efficient way to learn the setup on my own and then transfer the knowledge to my teammates once I’ve understood the setup myself.
Thanks
https://redd.it/1k6ozjy
@r_devops
how to pass env variables to docker container when using github actions
how to pass env variables to a Docker container when using GitHub Actions to build the image and run the container on a Linux virtual machine
currently I am doing this:

```shell
docker run -d --name movieapiapp_container \
  -p 6000:80 \
  -e ConnectionStrings__DefaultConnection="${{ secrets.DB_CONNECTION_STRING }}" \
  -e Jwt__Key="${{ secrets.JWT_SECRET_KEY }}" \
  -e Jwt__Issuer="web.url" \
  -e Jwt__Audience="web.url" \
  -e ApiKeyOmDb="${{ secrets.OMDB_API_KEY }}" \
  -e GEMINI_API_KEY="${{ secrets.GEMINI_API_KEY }}" \
  -e Google__Client_Id="${{ secrets.GOOGLE_CLIENT_ID }}" \
  -e Google__Client_Secret="${{ secrets.GOOGLE_CLIENT_SECRET }}" \
  -e ASPNETCORE_URLS=https://+:80 \
```

is this correct or is there any better way to pass these env variables?
https://redd.it/1k6q5m3
@r_devops
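One common alternative, not from the post itself, is to have the workflow write the variables to an env file on the runner and pass a single `--env-file` flag, which keeps the `docker run` line short and the secret values out of `ps` output. A sketch with placeholder values (in the real workflow each value would come from a `${{ secrets.* }}` expression; variable names mirror the command above):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Write the runtime configuration to a locked-down env file.
ENV_FILE="$(mktemp)"
chmod 600 "$ENV_FILE"
cat > "$ENV_FILE" <<'EOF'
ConnectionStrings__DefaultConnection=placeholder-connection-string
Jwt__Key=placeholder-jwt-key
ASPNETCORE_URLS=http://+:80
EOF

# A single flag then replaces the long chain of -e options
# (echoed here rather than run, since this sketch has no image to start):
echo docker run -d --name movieapiapp_container -p 6000:80 --env-file "$ENV_FILE"
```

Since the file holds secrets, delete it after the container starts; Docker reads it once at `docker run` time.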
First AWS cert to go for ?
I’m a software development engineer with 3 years of backend experience and I’m looking to transition into cloud computing, specifically with AWS. Which AWS certification would be the most suitable to start with?
https://redd.it/1k6t5q4
@r_devops
What happened to the DevOps Paradox podcast?
The DevOps Paradox podcast is my favorite and they haven't done a show since February.
Does anyone know why??
https://redd.it/1k6ujiv
@r_devops
Exploring Serverless Stack Architecture – How Do You Manage Environments & Security?
Hey folks,
I’m experimenting with a serverless stack on AWS using S3 + CloudFront for static hosting, API Gateway + Lambda for backend, DynamoDB for data, and Cognito for auth.
It’s been great for learning, and I’m thinking ahead about how to scale and manage this more professionally.
Curious to hear from others:
* How do you structure environments (dev/staging/prod)? Separate accounts, or manage via IaC/tagging?
* Best practices for securing this kind of stack — IAM roles, access boundaries, etc.?
* Any underrated tools or AWS services that help you keep things maintainable and cost-effective?
Appreciate any insight — always looking to learn from real-world setups. Happy to share my setup later once it’s more polished.
https://redd.it/1k6sux8
@r_devops
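On the first bullet, one widely used pattern is a separate AWS account per environment, selected at deploy time via CLI profiles. A minimal sketch of a deploy wrapper (the profile names are hypothetical; the actual deploy command, e.g. `sam deploy` or `terraform apply`, is left as a comment):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map each environment to its own AWS account profile (hypothetical names).
profile_for() {
  case "$1" in
    dev)     echo "myapp-dev" ;;
    staging) echo "myapp-staging" ;;
    prod)    echo "myapp-prod" ;;
    *)       echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

ENV="${1:-dev}"
AWS_PROFILE="$(profile_for "$ENV")"
export AWS_PROFILE
echo "deploying to $ENV via profile $AWS_PROFILE"
# sam deploy / terraform apply would run here, scoped to that account
```

Separate accounts give a hard blast-radius boundary that tagging alone can't: a leaked dev credential simply has no path to prod resources.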
Best Practices for Horizontally Scaling a Dockerized Backend on a VM
I need advice on scaling a Dockerized backend application hosted on a Google Compute Engine (GCE) VM.
# Current Setup:
* Backend runs in Docker containers on a single GCE VM.
* Nginx is installed on the **same VM** to route requests to the backend.
* Monitoring via Prometheus/Grafana shows backend CPU usage spiking to **200%**, indicating severe resource contention.
# Proposed Solution and Questions:
1. **Horizontal Scaling Within the Same VM**:
* Is adding more backend containers to the same VM a viable approach? Since the VM’s CPU is already saturated, won’t this exacerbate resource contention?
* If traffic grows further, would scaling require adding more VMs regardless?
2. **Nginx Placement**:
* Should Nginx be decoupled from the backend VM to avoid resource competition (e.g., moving it to a dedicated VM or managed load balancer)?
3. **Alternative Strategies**:
* How would you architect this system for scalability?
https://redd.it/1k6x7tp
@r_devops
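On question 1: the mechanics are easy to sketch with Compose, though the worry in the question is well founded: extra replicas only help if each container was capped below the VM's total CPU; a VM that is already saturated needs more (or bigger) VMs. A hypothetical fragment (image and service names are placeholders):

```yaml
services:
  api:
    image: myorg/backend:latest      # placeholder image
    deploy:
      resources:
        limits:
          cpus: "1.0"                # cap each replica so one can't starve the rest
  nginx:
    image: nginx:stable
    ports:
      - "80:80"                      # Nginx proxies to the api replicas
    depends_on:
      - api

# run three replicas on the same VM:
#   docker compose up -d --scale api=3
```

For question 2, the usual next step is indeed moving the proxy off the contended VM, to a dedicated VM or a managed load balancer, before fanning out to multiple backend VMs.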
Procore Technologies
I have cleared my rounds at Procore Technologies. If any of you are working at the company or have worked there previously, please let me know about the work culture.
https://redd.it/1k6x1r8
@r_devops
Manager said “that doesn’t make any sense!”
…to which I reply: “well neither does me driving into the office every day to do a job I can literally do from anywhere with an Internet connection but here I am”
https://redd.it/1k70np7
@r_devops
Have only worked with Jenkins, Git, Docker and Linux as a DevOps Engineer – what skills should I learn to get hired? Can't find jobs on Naukri for this
I've worked in DevOps using Jenkins, Git, and Linux, but on job portals like LinkedIn and Naukri I'm not seeing job openings that match just these skills.
What should I focus on learning next to actually get hired?
https://redd.it/1k70qjb
@r_devops
Simplecontainer.io
In the past few months, I've been developing an orchestration platform to improve the experience of managing Docker deployments on VMs. It operates atop the container engine and takes over orchestration. It supports GitOps and plain old apply. The engine is open sourced.
Apart from the terminal CLI, I've also created a sleek UI dashboard to further ease the management. Dashboard is available as an app https://app.simplecontainer.io and can be used as it is. It is also possible to deploy the dashboard on-premises.
The dashboard can serve as a central platform for managing operations across multiple projects. Contexts are a way to authenticate against a simplecontainer node and can be shared with other users via organizations; a manager can choose which context is shared with which organization.
On the security side, the dashboard acts as a proxy, and no access information is persisted in the app. mTLS and TLS are used everywhere.
Demos on how to use the platform + dashboard can be found at:
- https://app.simplecontainer.io/demos/gitops
- https://app.simplecontainer.io/demos/declarative
Photos of the container and GitOps dashboards are attached. It's currently in alpha, and sign-ups will open soon. I'm interested in what you guys think, and if someone wants to try it out you can hit me up in a DM for more info.
https://redd.it/1k72nb3
@r_devops
Help: tool for managing Helm charts
Hey everyone, our current flow is Keel, Helm, and GitHub Actions on GKE.
We have a chart per app (unsustainable, I know) and a values file per environment. I am working on cutting the chart count down to one per application type.
Meanwhile, I wanted to see if anyone has come across an open source or paid tool that allows for Helm chart management like a catalog, where we could for example make env var changes to a selected number of charts and redeploy them all.
If this doesn't exist, I'll probably have to write it myself with ruyaml, which I'd rather not.
https://redd.it/1k6wnpm
@r_devops
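Absent a catalog tool, the bulk-edit-and-redeploy part is scriptable against a conventional values-file layout. A sketch under an assumed layout of `charts/<app>/values-<env>.yaml` with a flat `logLevel:` key (the layout, key, and the commented `helm upgrade` step are all hypothetical; a YAML-aware tool like `yq` would be more robust than `sed` for nested keys):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a tiny demo layout so the edit loop below has something to act on.
mkdir -p demo/charts/app1 demo/charts/app2
printf 'logLevel: info\n' > demo/charts/app1/values-prod.yaml
printf 'logLevel: info\n' > demo/charts/app2/values-prod.yaml

# Bulk-edit one value across the selected charts, then redeploy each.
for f in demo/charts/*/values-prod.yaml; do
  sed -i.bak 's/^logLevel: .*/logLevel: debug/' "$f"
  # helm upgrade --install "$(basename "$(dirname "$f")")" ./chart -f "$f"
done

grep -H 'logLevel' demo/charts/*/values-prod.yaml
```

Committing the edited values files and letting CI run the `helm upgrade` loop keeps the change auditable, which is most of what a catalog tool would buy.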
AI Agents real life usage
I am looking for real-life examples of people using AI agents in their daily DevOps tasks. I know that RooCode, for example, is useful for generating IaC code or scripts, but I am looking for examples that go beyond "code generation" tasks.
Any experience you guys would like to share?
https://redd.it/1k79u9a
@r_devops
Tailpipe - The Log Interrogation Game Changer
SQL has been the data access standard for decades: it levels the playing field, integrates easily with other systems, and accelerates delivery. So why not leverage it for things other than the database, like querying APIs and cloud services? Tailpipe follows the same lines, this time by enabling SQL to query log files.
https://www.i-programmer.info/news/90-tools/17992-tailpipe-the-log-interrogation-game-changer.html
https://redd.it/1k7cbpm
@r_devops
Career Advice: Is it beneficial for a Software Engineer to study CCNA, MCSA, and MCSE?
I'm a software engineer considering studying for the CCNA, MCSA, and MCSE. Would these certifications give me any advantages? My goal is to work in system-related roles in the future.
https://redd.it/1k7e2dx
@r_devops
Where can I take UI DevOps courses online? Does anyone know where to find these courses?
Hi there, I'm looking to learn UI DevOps, but I only see general DevOps courses, so I was wondering if anyone knows of any courses I could take.
I'd appreciate your responses!
https://redd.it/1k7eqzn
@r_devops
Journey from Windows admin to k8s
From training with PowerShell to deploying Kubernetes clusters — here’s how I made the leap and how you can too.
The Starting Point: A Windows-Centric Foundation
In 2021, I began my journey as an IT Specialist in System Integration. My daily tools were PowerShell, Azure, Microsoft Server, and Terraform. I spent 2–3 years mastering these technologies during my training, followed by a year as a Junior DevOps Engineer at a company with around 1,000 employees, including a 200-person IT department. My role involved managing infrastructure, automating processes, and working with cloud technologies like Azure.
The Turning Point: Embracing a New Tech Stack
In January 2025, I made a significant career move. I transitioned from a familiar Windows-based environment to a new role that required me to work with macOS, Linux, Kubernetes (K8s), Docker, AWS, OTC Cloud, and the Atlassian Suite. This shift was both challenging and exhilarating.
The Learning Curve: Diving into New Technologies
Initially, I focused on Docker, Bash, and Kubernetes, as these tools were central to the new infrastructure. Gradually, I built on that foundation and delved deeper into the material.
A major milestone was taking on the role of project lead for a migration project for the Atlassian Suite. Our task was to transition the entire team and workflows to tools like Jira and Confluence. This experience allowed me to delve deep into software development and project management processes, highlighting the importance of choosing the right tools to improve team collaboration and communication.
Building Infrastructure: Hands-On Experience
I set up my own K3s cluster on a Proxmox host using Ansible and integrated ArgoCD to automate continuous delivery (CD). This process demonstrated the power of Kubernetes in managing containerized applications and the importance of a well-functioning CI/CD pipeline.
Additionally, I created five Terraform modules, including a network module, for the OTC Cloud. This opportunity allowed me to dive deeper into cloud infrastructure, ensuring everything was designed and built correctly. Terraform helped automate the infrastructure while adhering to best practices.
Optimizing Pipelines: Integrating AWS and Cloudflare
I worked on optimizing existing pipelines running in Bamboo, focusing on integrating AWS and Cloudflare. Adapting Bamboo to work seamlessly with our cloud infrastructure was an interesting challenge. It wasn’t just about automating build and deployment processes; it was about optimizing and ensuring the smooth flow of these processes to enhance team efficiency.
Embracing Change: Continuous Learning and Growth
Since joining this new role, I’ve learned a great deal and grown both professionally and personally. I’m taking on more responsibility and continuously growing in different areas. Optimizing pipelines, working with new technologies, and leading projects motivate me every day. I appreciate the challenge and look forward to learning even more in the coming months.
Lessons Learned and Tips for Aspiring DevOps Engineers
Start with the Basics: Familiarize yourself with core technologies like Docker, Bash, and Kubernetes.
Hands-On Practice: Set up your own environments and experiment with tools.
Take on Projects: Lead initiatives to gain practical experience.
Optimize Existing Systems: Work on improving current processes and pipelines.
Embrace Continuous Learning: Stay updated with new technologies and best practices.
Stay Connected
I’ll be regularly posting about my homelab and experiences with new technologies. Stay tuned — there’s much more to explore!
Inspired by real-world experiences and industry best practices, this blog aims to provide actionable insights for those looking to transition into DevOps roles. Also check out my dev blog for more write-ups and homelabbing content:
https://salad1n.dev/
https://redd.it/1k7fc50
@r_devops
Making Sense of Cloud Spend
Hey y'all, I wrote an article sharing some thoughts on cloud spend.
https://medium.com/@mfundo/diagnosing-the-cloud-cost-mess-fe8e38c62bd3
https://redd.it/1k7eabl
@r_devops
DevOps workflow tips for a frontend application developer who needs to take on more ops responsibilities
What is an efficient workflow/work-environment setup for tackling an ops task that involves a GitHub 'Action' and a Bitrise build 'Workflow'?
I've written the GitHub Action as a bash script, and the Bitrise Workflow is a collection of pluggable Bitrise 'Steps' and some custom scripts in the repository that are triggered from the Bitrise Workflow.
The GitHub Action responds to the creation of a new tag with a name that matches, and the Bitrise Workflow runs build tasks that call our backend REST API for dynamic configuration specifics.
I find working on the ops stuff outside the monorepo slow and inefficient.
* Re-running scripts on remote machines/services is slower (I run the service using their local client to debug, but it's difficult to replicate the VM environment accurately on my local machine)
* They often break because I miss mistakes in the bash scripts (I don't have editor/language-based tools to help me here)
* The cloud-based builds take time to execute because the VMs need to set up everything every time (I've cached some stuff, but not all)
**Can I please get some tips on how to work more efficiently when working on processes that are distributed across systems?**
For context, I'm usually a frontend app developer and I've set up our monorepo to make our lives as easy as possible:
* Typed language (TS) and linter so we can see our errors in the editor as we work
* automated unit test runner with a 'watcher' that runs on 'save' to make sure our application logic doesn't get broken
* integrated testing pipeline that runs upon creation of pull requests
* hot module reloading so that we can visually see the results of our latest changes
* separation of presentational components and application logic with strict architectural guidelines to keep things modular
* monorepo tooling with task-runner to enable the above
**What are some devops techniques to achieve the same type of workflow efficiencies when configuring processes that run across distributed systems?**
I suspect that I need to look into:
* Modularizing logic into independent scripts
* Containers?
Anything else?
https://redd.it/1k7he32
@r_devops
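On the bash-scripts-breaking point specifically: two cheap wins that restore some of that editor-feedback loop for shell are strict mode plus `shellcheck`, a static linter that flags most of the quoting and word-splitting mistakes that otherwise only surface on the runner. A sketch of the strict-mode pattern with a locally testable guard (function and variable names are illustrative, not from the post):

```shell
#!/usr/bin/env bash
# Strict mode: fail on errors, unset variables, and broken pipes --
# the classes of mistakes that otherwise only show up on the CI VM.
set -euo pipefail

# A guard you can exercise locally (e.g. with bats) before pushing to CI.
require_var() {
  local name="$1"
  if [ -z "${!name:-}" ]; then
    echo "missing required env var: $name" >&2
    return 1
  fi
}

BITRISE_API_TOKEN="placeholder-token"   # hypothetical variable the workflow needs
require_var BITRISE_API_TOKEN
echo "preflight ok"
```

Running `shellcheck script.sh` as a pre-commit hook or a fast PR check then gives shell the same save-time feedback the monorepo already has for TypeScript, without waiting for a VM to spin up.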