Need your inputs here
I've been working as a QA intern for the last 8 months. I want to quit and start learning DevOps, taking a 6-8 month gap to do so. After that, will I be able to get a job as a DevOps engineer?
My education details:
Bachelor's in CSE, 2024 pass-out, with 8 months of QA internship experience.
Please let me know whether I'll be able to get a job after taking an 8-month gap to prepare for DevOps. I'm really interested in DevOps.
https://redd.it/1lzmbg3
@r_devops
Introducing kat: A TUI and rule-based rendering engine for Kubernetes manifests
I don't know about you, but one of my favorite tools in the Kubernetes ecosystem is [`k9s`](https://k9scli.io/). At work I have it open pretty much all of the time. After I started using it, I felt like my productivity skyrocketed, since anything you could want is just a few keystrokes away.
However, when it comes to rendering and validating manifests locally, I was frustrated with the existing tools (or lack thereof). Working with manifest generators like `helm` or `kustomize` often involved a repetitive cycle: run a command, try to parse a huge amount of output to find some issue, make a change to the source, run the command again, and so on, losing context with each iteration.
So, I set out to build something that would make this process easier and more efficient. After a few months of work, I'm excited to introduce you to `kat`!
**Introducing** [`kat`](https://github.com/macropower/kat):
`kat` automatically invokes manifest generators like `helm` or `kustomize`, and provides a persistent, navigable view of rendered resources, with support for live reloading, integrated validation, and more. It is completely free and open-source, licensed under Apache 2.0.
It is made of two main components, which can be used together or independently:
1. A **rule-based engine** for automatically rendering and validating manifests
2. A **terminal UI** for browsing and debugging rendered Kubernetes manifests
Together, these deliver a seamless development experience that maintains context and focus while iterating on Helm charts, Kustomize overlays, and other manifest generators.
Notable features include:
* **Manifest Browsing**: Rather than outputting a single long stream of YAML, `kat` organizes the output into a browsable list structure. Navigate through any number of rendered resources using their group/kind/ns/name metadata.
* **Live Reload**: Just use the `-w` flag to automatically re-render when you modify source files, without losing your current position or context when the output changes. Any diffs are highlighted as well, so you can easily see what changed between renders.
* **Integrated Validation**: Run tools like `kubeconform`, `kyverno`, or custom validators automatically on rendered output through configurable hooks. Additionally, you can define custom "plugins", which function the same way as k9s plugins (i.e. commands invoked with a keybind).
* **Flexible Configuration**: `kat` allows you to define profiles for different manifest generators (like Helm, Kustomize, etc.). Profiles can be automatically selected based on output of CEL expressions, allowing `kat` to adapt to your project structure.
* **And Customization**: `kat` can be configured with your own keybindings, as well as [custom themes](https://github.com/MacroPower/kat/raw/v0.20.0/docs/assets/themes.gif)!
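As an illustration of the browsing idea (this is not kat's actual implementation, just a minimal sketch of the grouping), a rendered multi-document YAML stream can be indexed by kind/namespace/name without a full YAML parser:

```python
# Minimal sketch: split a rendered multi-document YAML stream into a
# browsable list keyed by kind/namespace/name, similar in spirit to the
# grouping kat applies to generator output.
def split_manifests(stream: str):
    items = []
    for doc in stream.split("\n---\n"):
        meta = {"kind": None, "namespace": "default", "name": None}
        section = None
        for line in doc.splitlines():
            if line.startswith("kind:"):
                meta["kind"] = line.split(":", 1)[1].strip()
            elif line.startswith("metadata:"):
                section = "metadata"
            elif section == "metadata" and line.startswith("  name:"):
                meta["name"] = line.split(":", 1)[1].strip()
            elif section == "metadata" and line.startswith("  namespace:"):
                meta["namespace"] = line.split(":", 1)[1].strip()
            elif line and not line.startswith(" "):
                section = None  # left the metadata block
        if meta["kind"]:
            key = f'{meta["kind"]}/{meta["namespace"]}/{meta["name"]}'
            items.append((key, doc))
    return items

rendered = """\
apiVersion: v1
kind: Service
metadata:
  name: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: prod
"""
index = split_manifests(rendered)
```

Each entry keeps the full document body alongside its key, so a UI can show a navigable list instead of one long YAML stream.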
And more, but this post is already too long. :)
To conclude, `kat` solved my specific workflow problems when working with Kubernetes manifests locally. And while it may not be a perfect fit for everyone, I hope it can help others who find themselves in a similar situation.
If you're interested in giving `kat` a try, check out the repo here:
[https://github.com/macropower/kat](https://github.com/macropower/kat)
>I'd also love to hear your feedback! If you have any suggestions or issues, feel free to open an issue on GitHub, leave a comment, or send me a DM.
https://redd.it/1lzketl
@r_devops
Final-year B.Tech CS student trying to do something with life.
I am a final-year CS student with very basic knowledge of programming languages and no proper skills. Everything I tried failed. Now cloud/DevOps has caught my eye, and I want to pursue it with full dedication so that I can get at least an internship in the upcoming 3 months and a placement after that.
Right now I am very confused about my life. I want to secure a placement, and I don't want to let down my parents, as they have already spent a lot of money on my studies.
Please guide me to build my future; your guidance and tips would be very helpful. :)
https://redd.it/1lzono4
@r_devops
Are notifications a solved problem for DevOps?
I am a programmer who also does DevOps. Like many, I use GitHub, Datadog, Sentry, and other tools to keep development and deployment running smoothly. I've spent the last few years working on a notifications API (multi-channel, preference management, etc.), and I seek feedback on a product that re-imagines notifications from these products.
I've had a realization—most first-party notifications suck. GitHub is probably a prime example, but it's far from easy to configure SNS or Datadog notifications or to refine your resulting notifications. My ideal notification system would:
1. Accept webhooks from services like GitHub, Datadog, and others, and provide a way to subscribe to notifications at different levels of granularity, with a way to opt out or tweak the frequency of notifications.
2. Use the Git commit SHA to tie notifications across services, thread them into topics, and notify the person responsible for the commit or deployment.
3. Update or archive any notifications that are no longer relevant - resolved incidents, error rates that have returned to normal, etc.
4. Offer a VSCode extension to show the most pressing notifications and send them to other channels (like Slack only if necessary). The extension also enables the user to switch to code or a terminal with the context needed to solve any issues.
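Point 2 above, tying events from different services together by commit SHA, can be sketched in a few lines (the `commit_sha` and `source` field names are hypothetical, not any real webhook schema):

```python
from collections import defaultdict

def thread_by_commit(events):
    """Group webhook payloads by the commit SHA they reference,
    so related notifications from different services land in one topic."""
    threads = defaultdict(list)
    for event in events:
        sha = event.get("commit_sha")
        if sha:
            threads[sha].append(event)
    return dict(threads)

# Example payloads from three different services, two sharing a commit.
events = [
    {"source": "github", "commit_sha": "abc123", "type": "push"},
    {"source": "datadog", "commit_sha": "abc123", "type": "error_rate_alert"},
    {"source": "sentry", "commit_sha": "def456", "type": "new_issue"},
]
threads = thread_by_commit(events)
```

A real system would also need to map each service's payload format onto a common envelope before grouping, but the threading step itself is this simple.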
I've always built tools based on my needs, but I'd sincerely appreciate any feedback, insights, or criticism of my ideas. One blind spot I have is my internal view of large engineering organizations. Are there any other pressing notification problems that current notification tools don't serve at larger organizations?
Thank you so much for your time!
https://redd.it/1lzmmgz
@r_devops
Image Migration
Hey, so I'm in a bit of a situation where I've been tasked with replicating a build scale set on Azure.
I have two subscriptions. Subscription A has the image I want.
Subscription B needs the build scale set.
I'm not allowed to create a Shared Image Gallery on Azure, but I want to migrate that image from Subscription A to Subscription B.
I tried GPT; it kept recommending the Shared Image Gallery for this, but I don't have the permissions for that.
The only other method it showed was converting the image to a VHD, uploading it to a storage account, then fetching it in Subscription B and creating a VM, etc.
Is there a way to safely create a VM, at least, in Subscription B using the image in Subscription A? My account has Contributor on the image.
https://redd.it/1lzs92w
@r_devops
Article on Quick ELK setup
Hi, I just published an article on Medium. Lately I have been working on the ELK stack at my firm and thought I should explore its setup on Kubernetes.
Here's my article. Let me know your thoughts.
https://medium.com/@joeldsouza28/one-minute-elk-stack-on-kubernetes-full-logging-setup-with-a-single-script-ba92aecb4379
https://redd.it/1lzu6pb
@r_devops
OpenLIT: Self-hosted observability dashboards built on ClickHouse — now with full drag-and-drop custom dashboard creation
We just added custom dashboards to OpenLIT, our open-source engineering analytics tool.
✅ Create folders, drag & drop widgets
✅ Use any SDK to send data to ClickHouse
✅ No vendor lock-in
✅ Auto-refresh, filters, time intervals
📺 Tutorials: YouTube Playlist
📘 Docs: OpenLIT Dashboards
GitHub: https://github.com/openlit/openlit
Would love to hear what you think or how you’d use it!
https://redd.it/1lzvlbu
@r_devops
KubeDiagrams
**KubeDiagrams**, an open source project hosted on GitHub under the Apache 2.0 License, is a tool to generate Kubernetes architecture diagrams from Kubernetes manifest files, kustomization files, Helm charts, helmfile descriptors, and actual cluster state. **KubeDiagrams** supports almost all Kubernetes built-in resources, any custom resources, namespace/label/annotation-based resource clustering, and declarative custom diagrams. **KubeDiagrams** is available as a Python package on PyPI, a container image on DockerHub, a kubectl plugin, a Nix flake, and a GitHub Action.
Try it on your own Kubernetes manifests, Helm charts, helmfiles, and actual cluster state!
https://redd.it/1lzvsb7
@r_devops
Kubernetes Homelab Rescue: Troubleshooting with AI (and the Lessons Learned)
Although the post is about my homelab, I have previously had similar types of issues happen at work. The troubleshooting steps would have been similar, and apart from the freedom to simply paste logs/terminal output directly to Claude 4 for "assistance", I can easily see AI-assisted troubleshooting going down this route.
The suggestions Claude gave for figuring out what was wrong started out sensibly, but fairly quickly turned into suggestions that would have left me redeploying at least a portion of the cluster and possibly restoring data from backups.
I ended up going on a tangent, thinking about just how dangerous following troubleshooting suggestions from an AI can be if you don't have at least some knowledge of the possible consequences. Even Claude admitted (when asked later in the conversation) that the suggestions quickly became destructive and that it never reset even when new information and context was introduced.
Kubernetes Homelab Rescue: Troubleshooting with AI (and the Lessons Learned)
https://redd.it/1lzz0db
@r_devops
JLP's blog
Kubernetes Homelab Rescue: Troubleshooting with AI (and the Lessons Learned)
Recently I woke up to a fun set of issues with my homelab. In an effort to make
more use of LLMs I turned to Claude for troubleshooting assistance, which did
help but also once again reminded me of the risks of following AI instructions
without appropriate…
Paid courses to move from Full Stack to DevOps.
Hi, I'm currently working as a Full Stack dev, but after years at the company I feel like I do every single role a little bit: UI (React.js), backend (Node.js and Java), pipelines, SonarQube, code scanners, etc.
I want to move to DevOps fully because I want a career shift and new knowledge.
(I did something similar before: I was a QA Automation Architect and moved to Full Stack development.)
So I want to focus on DevOps and Security.
Can someone suggest courses? Paid courses are fine. What is the best path to move from one role to another?
Or what certifications should I take?
Yes, I can use AI for that knowledge, but I wonder if there is a structured path to follow so I won't miss things that are must-haves for the role.
Or if you had a similar experience, how did you shift roles?
Thanks, everyone, for the suggestions and tips.
https://redd.it/1m05m0u
@r_devops
IAM in DevOps
To all DevOps/SecOps engineers interested in IAM:
I’ve just published a blog post on integrating Keycloak as an IdP with GitLab via SAML and with Kubernetes via OpenID Connect. SAML and OIDC are two modern protocols for secure authentication. It’s a technical guide that walks through setting up centralized authentication across your DevOps stack.
Check it out!
https://medium.com/@aymanegharrabou/integrating-keycloak-with-gitlab-saml-and-kubernetes-openid-connect-da036d3b8f3c
https://redd.it/1m06ysv
@r_devops
Karpenter - Protecting batch jobs from consolidation/disruption
An approach to ensuring Karpenter doesn't interrupt your long-running or critical batch jobs during node consolidation in an Amazon EKS cluster. Karpenter’s consolidation feature is designed to optimize cluster costs by terminating underutilized nodes—but if not configured carefully, it can inadvertently evict active pods, including those running important batch workloads.
To address this, use the `do-not-disrupt` annotation (in recent Karpenter versions, `karpenter.sh/do-not-disrupt: "true"`) on your batch jobs' pods. This simple yet effective technique tells Karpenter to avoid disrupting specific pods during consolidation, giving you granular control over which workloads can safely be interrupted and which must be preserved until completion. This is especially useful in data processing pipelines, ML training jobs, or any compute-intensive task where premature termination could lead to data loss, wasted compute time, or failed workflows.
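As a sketch (the job name and image are hypothetical), the annotation goes on the Job's *pod template*, since Karpenter inspects pod metadata when deciding whether a node can be disrupted:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-etl            # hypothetical job name
spec:
  template:
    metadata:
      annotations:
        # Pods carrying this annotation block Karpenter from
        # consolidating the node they run on until they complete.
        karpenter.sh/do-not-disrupt: "true"
    spec:
      restartPolicy: Never
      containers:
        - name: etl
          image: example.com/etl:latest   # hypothetical image
          command: ["python", "run_etl.py"]
```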
https://youtu.be/ZoYKi9GS1rw
https://redd.it/1m09umg
@r_devops
How do I highlight my work without sounding bitter in an exec email?
Hi everyone. I posted here a while back about a newly acquired global team trying to reverse-engineer a solution I built for my region (corporate). They were instructed by a senior executive to replicate my work and roll it out globally as one of their first projects. However, they couldn't do it, so they contacted me to hand over everything (with my regional manager's approval) due to higher-up politics. The general advice was to stay cooperative, which is essentially what I did.
I've now completed the full handover. My manager is about to send an email update to execs and asked me to draft it with everything I want to include. I want to make sure the email strikes the right tone: not too bitter or boastful, but also not overly humble, since in the end I had to give them everything and walk them through it line by line because they couldn't figure out how to implement it on their own without the hand-holding. It took a lot of time away from my actual work too. Anyway, here's the draft I'm planning to send to my manager. I would appreciate any thoughts on things I should add or remove. Thank you.
> Following our earlier alignment with Team G, we've successfully completed the full technical handover of our engineered solution XYZ.
>
> Over the past weeks, we worked closely with them to provide everything needed to support global replication and scaling. This included:
>
> * Complete export and transfer of the entire engineered solution as per their request, including source code, application packages, automated workflows, schemas, dashboards, and assets
> * Comprehensive documentation detailing the architecture, data models, and deployment procedures
> * Direct access to the Region A development environments and source materials
> * A solution designed for streamlined deployment, requiring only minimal configuration (simple changes to IDs and endpoint references)
>
> With this, Team G is now well-equipped to roll out the solution across regions efficiently without the engineering overhead or need for rebuilding. Our team remains available for support while continuing to advance in other priorities.
>
> We're pleased to see our work serving as the foundation for broader improvements and look forward to the positive impact across all regions.
https://redd.it/1m0b0rv
@r_devops
DevOps learning - How do I continue from the spot I am at?
Hello, I recently took a DevOps course within my college curriculum.
Sadly, it was a very short DevOps course, but it taught me the essentials: GitHub Actions and workflows, CI/CD, Docker, and working in a Linux environment.
I do feel like I have very weak knowledge when it comes to working with the largest cloud providers: AWS, Azure, GCP.
The CD process I learned was how to deploy to a Render server, which honestly was pretty easy and painless.
Which online technical resources do you advise so I can continue and deepen my DevOps knowledge from where I am? Thank you very much for reading.
https://redd.it/1m0aopk
@r_devops
Live challenge: building a data pipeline in under 15 minutes
hey folks, RB from Hevo here!
This Thursday, I’m going live with a challenge: build and deploy a fully automated data pipeline in under 15 minutes, without writing code. So if you're spending hours writing custom scripts or debugging broken syncs, you might want to check this out :)
What I’ll cover live:
Ingesting from sources like S3, SQL Server, or internal APIs
Streaming into destinations like Snowflake, Redshift, or BigQuery
Auto-scaling, schema drift handling, and built-in alerting/monitoring
Live Q&A where you can throw us the hard questions
When: Thursday, July 17 @ 1PM EST
You can sign up here: Reserve your spot here!
Happy to answer any qs!
https://redd.it/1m0d0wn
@r_devops
What Security & Integration Features Matter Most for Enterprise Teams?
Hi everyone,
we're a group of Master's students in Information Systems at the University of Münster (Germany) developing SqueelGPT, a SaaS that converts plain-English questions into production-ready SQL queries, with a focus on enterprises (API, IT-admin dashboard).
Goal: Let non-technical team members generate ad-hoc reports without bothering your developers or DBAs
Current features: Multi-step query processing pipeline, schema analysis, sandboxed query validation
Questions for you:
Would you prefer a Chat Interface or an API that can be used to translate English into SQL?
What database security controls would be absolutely critical? (row-level security, query limits, audit logs)
Which enterprise integrations are must-haves? (SAML, OIDC, Slack, User Dashboard)
How do you currently handle ad-hoc data requests from business teams?
We'd love to learn from your experiences managing enterprise systems at scale. We're looking for any insights we can get, but we also have a website with a waitlist if you're interested: https://squeelgpt.com/
Thanks for any insights!
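To illustrate what "sandboxed query validation" can mean in practice, here is a minimal sketch of my own (an assumption about the general technique, not SqueelGPT's actual implementation): compile the generated query against an empty in-memory SQLite copy of the schema, with an authorizer that rejects anything other than reads at prepare time.

```python
import sqlite3

def validate_readonly(sql: str, schema: str) -> bool:
    """Return True if `sql` compiles against `schema` and only reads data."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)  # empty copy of the schema; no real data

    def authorizer(action, *args):
        # Allow SELECT statements and column reads; deny everything else
        # (INSERT, UPDATE, DELETE, DROP, ...) when the statement is prepared.
        if action in (sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ):
            return sqlite3.SQLITE_OK
        return sqlite3.SQLITE_DENY

    conn.set_authorizer(authorizer)
    try:
        # EXPLAIN compiles (and thus authorizes) the query without running it
        # against real data, so syntax errors and writes both surface here.
        conn.execute("EXPLAIN " + sql)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

A production version would need a wider allow-list (e.g. SQL functions) and limits on query cost, but the prepare-time authorizer is the core sandboxing idea.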
https://redd.it/1m0co1e
@r_devops
Get $50 free credit on signup at Any Router! 🚀
Access Claude Code AI, no credit card needed.
Perfect for devs, learners, and hobbyists.
Sign up now: https://anyrouter.top/register?aff=7ilr
#AI #ClaudeCode
https://redd.it/1m0emyh
@r_devops
Fail the workflow based on conditions
Hey there,
Trying to tackle a scenario in which a third-party action fails for one of two reasons (call them X and Y), thereby failing the whole job.
Is there any way to check, in a subsequent step, whether error X or error Y occurred, so we can handle the failure appropriately?
PS: the third-party action doesn't set any outputs we can use; it simply returns exit code 127.
Thanks.
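One workaround, sketched below and not tested against the action in question: if the tool behind the action can be invoked from a `run:` step, let that step fail softly with `continue-on-error`, capture its log to a file, and branch on the log contents in the next step via `steps.<id>.outcome`. The CLI name and the "error X"/"error Y" markers are placeholders.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - id: flaky
        continue-on-error: true
        run: |
          # Wrap the third-party CLI so its output lands in a file,
          # while preserving its exit code (bash PIPESTATUS).
          some-third-party-cli 2>&1 | tee flaky.log
          exit ${PIPESTATUS[0]}
      - name: Handle failure
        if: steps.flaky.outcome == 'failure'
        run: |
          if grep -q "error X" flaky.log; then
            echo "recovering from X"
          elif grep -q "error Y" flaky.log; then
            echo "recovering from Y"
          else
            exit 1   # unknown failure: fail the job after all
          fi
```

If the action can only be used via `uses:`, you can still check `steps.<id>.outcome` with `continue-on-error`, but you won't have its log available in-run to distinguish X from Y.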
https://redd.it/1m0ajpi
@r_devops
How are you deploying to Azure from Bitbucket without OpenID Connect support?
I'm curious to know how teams are handling deployments to Azure from Bitbucket, especially since Bitbucket doesn't currently support OIDC integration for Azure the way GitHub and GitLab do.
How are you managing Azure credentials securely in your pipelines?
Are you relying on service principals with client secrets or certificates?
Have you implemented any workarounds or third-party tools to simulate federated identity/OIDC flows?
Are there any best practices or security considerations you'd recommend in this setup?
Would love to hear how others are handling this.
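For reference, the plain service-principal-with-client-secret route looks roughly like this in `bitbucket-pipelines.yml` (a sketch; the `AZURE_*` and `APP_NAME` variable names are my assumptions, stored as secured repository variables, and the deploy command is just one example):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Deploy to Azure
          image: mcr.microsoft.com/azure-cli
          script:
            # Client-secret login; rotate the secret regularly since
            # there is no federated/OIDC token exchange here.
            - az login --service-principal
                --username "$AZURE_CLIENT_ID"
                --password "$AZURE_CLIENT_SECRET"
                --tenant "$AZURE_TENANT_ID"
            - az webapp deploy --resource-group "$AZURE_RG"
                --name "$APP_NAME" --src-path app.zip
```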
https://redd.it/1m0a1w5
@r_devops
Tried AWS Kiro IDE: A Spec-First, AI-Powered IDE That Feels Surprisingly Practical
Unlike most AI tools that generate quick code from prompts, Kiro starts by generating structured specs, user stories, design docs, and database schemas, before writing any code. It also supports automation hooks and task breakdowns, which makes it feel more like a true engineering tool.
I’ve been exploring ways to bring AI into real DevOps workflows, and Kiro's structured approach feels a lot closer to production-grade engineering than the usual vibe coding.
Read it here: https://blog.prateekjain.dev/kiro-ide-by-aws-ai-coding-with-specs-and-structure-8ae696d43638?sk=f2024fa4dc080e105f73f21d57d1c81d
https://redd.it/1m0kflo
@r_devops
SRP and SoC (Separation of Concerns) in DevOps/GitOps
Puppet Best Practices does a great job explaining design patterns that still hold up, especially as config management shifts from convergence loops (Puppet, Chef) to reconciliation loops (Kubernetes).
In both models, success or failure often hinges on how well you apply SRP (Single Responsibility Principle) and SoC (Separation of Concerns).
I’ve seen GitOps repos crash and burn because config and code were tangled together (config artifacts tethered to code artifacts and vice-versa): making both harder to test, reuse, or scale. In this setting, when they needed to make a small configuration change, such as adding a new region, the application with untested code would be pushed out. A clean structure, where each module handles a single concern (e.g., a service, config file, or policy), is more maintainable.
# Summary of Key Principles
Single Responsibility Principle (SRP): Each module, class, or function should have one and only one reason to change. In Puppet, this means writing modules that perform a single, well-defined task, such as managing a service, user, or config file, without overreaching into unrelated areas.
Separation of Concerns (SoC): Avoid bundling unrelated responsibilities into the same module. Delegate distinct concerns to their own modules. For example, a module that manages a web server shouldn't also manage firewall rules or deploy application code, those concerns belong elsewhere.
TL;DR:
SRP: A module should have one reason to change.
SoC: Don’t mix unrelated tasks in the same module, delegate.
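In Puppet terms, SRP can be sketched as a class that owns exactly one concern (my own minimal example, not taken from the book; the `profile::nginx` name is illustrative):

```puppet
# This class manages only the nginx service. Firewall rules and
# application deployment belong in their own modules (SoC).
class profile::nginx {
  package { 'nginx':
    ensure => installed,
  }

  service { 'nginx':
    ensure  => running,
    enable  => true,
    require => Package['nginx'],
  }
}
```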
https://redd.it/1m0m3b3
@r_devops