QA with security testing background looking to transition to DevSecOps
Hello,
I am a QA with more than 11 years of experience in the software industry, and I have acquired cybersecurity skills by doing pentesting for my employers and public bug bounties (but never professionally or under a security-related job title). I want to move into a DevSecOps role, and my motive is purely financial, as I have reached the tipping point as a QA.
What should my transition plan/path be? Is there any certification you can recommend specifically for this role?
Below is what ChatGPT recommended, along with a plan to acquire the skills listed. Is this the right path and the right set of skills?
🧰 Key Responsibilities:
- CI/CD Security: Automate security scanning in pipelines (SAST, DAST, secrets detection, dependency scanning)
- Cloud Security: Implement IAM best practices, manage cloud security policies (e.g., AWS IAM, KMS, GuardDuty)
- Infrastructure as Code (IaC): Secure Terraform/CloudFormation scripts using tools like Checkov, tfsec
- Container/K8s Security: Harden Docker images, manage security in Kubernetes clusters
- Secrets Management: Use tools like Vault, AWS Secrets Manager, or Sealed Secrets
- Monitoring & Compliance: Implement runtime security, SIEM integration, compliance audits (e.g., CIS Benchmarks)
- Security-as-Code: Apply policies using tools like OPA/Gatekeeper, Conftest
🧠 Skills Required:
- Strong scripting knowledge (Bash, Python, or similar)
- Hands-on experience with CI/CD tools (GitHub Actions, GitLab, Jenkins)
- Familiarity with cloud providers (AWS, Azure, GCP)
- IaC experience (Terraform, Ansible, etc.)
- Container tools: Docker, Kubernetes, Falco, Trivy
- Security toolchains: Snyk, Anchore, Checkov, etc.
https://redd.it/1lukcdi
@r_devops
Who is responsible for setting up and maintaining CI/CD pipelines in your org?
In my experience, setting up and maintaining CI/CD pipelines has typically been a joint effort between DevOps and Developers. But I've recently come across teams where QAs play a major role in owning and maintaining these pipelines.
We're currently exploring how to structure this in our organisation: whether it should be Developers, DevOps, or QAs who take ownership of the CI/CD process.
I'd love to hear how it works in your company. Also, please comment on what's working and what's not with the current process.
https://redd.it/1lunc34
@r_devops
Anyone else tried Bash 5.3 yet? Some actually useful improvements for once
Been testing Bash 5.3 in our staging environment and honestly didn't expect much, but there are some solid quality-of-life improvements that actually matter for day-to-day work.
The ones I'm finding most useful:
- Better error messages - Parameter expansion errors actually tell you what's wrong now instead of just "bad substitution". Saved me 20 minutes of debugging yesterday.
- Built-in microsecond timestamps - $EPOCHREALTIME gives you epoch time with decimal precision. Great for timing deployment steps without needing external tools.
- Process substitution debugging - When complex pipelines break, it actually tells you which part failed. Game changer for troubleshooting.
- Improved job control - The wait builtin can handle multiple PIDs properly now. Makes parallel deployment scripts way more reliable.
- Faster tab completion - Noticeable improvement in directories with thousands of files.
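As an illustration of the timestamp point, timing a deployment step with $EPOCHREALTIME looks something like this (the variable itself dates to Bash 5.0; the sleep stands in for real work):

```shell
#!/usr/bin/env bash
# Time a step using $EPOCHREALTIME (epoch seconds with microsecond precision).
start=$EPOCHREALTIME

sleep 0.1  # stand-in for a deployment step

end=$EPOCHREALTIME
# Bash arithmetic is integer-only, so use awk for the fractional subtraction.
elapsed=$(awk -v s="$start" -v e="$end" 'BEGIN { printf "%.3f", e - s }')
echo "step took ${elapsed}s"
```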
The performance improvements are real too. Startup time and memory usage both improved, especially with large scripts.
Most of these solve actual problems I hit weekly in CI/CD pipelines and deployment automation. Not just theoretical improvements.
Has anyone else been testing it? Curious what other practical improvements people are finding.
Also wondering about compatibility - so far everything's been backward compatible but want to hear if anyone's hit issues.
Been documenting all my findings if anyone wants a deeper dive - happy to share here: https://medium.com/@heinancabouly/bash-5-3-is-here-the-shell-update-that-actually-matters-97433bc5556c?source=friends_link&sk=2f7a69f424f80e856716d256ca1ca3b9
https://redd.it/1luoqk3
@r_devops
Creating customer specific builds out of a template that holds multiple repos
I hope the title makes sense; I only recently started working with Azure DevOps (Pipelines).
Trying my best to explain:
My infrastructure looks like this:
I have a product (`Banana!Supreme`) that is composed of 4 submodules:
- Banana.Vision @ 1a2b3c4d5e6f7g8h9i0j
- Banana.WPF @ a1b2c3d4e5f6a7b8c9d0
- Banana.Logging @ abcdef1234567890abcd
- Banana.License @ 123456abcdef7890abcd
Now, for each customer, I basically *rebrand the program*, so I might have:
- `Jackfruit!Supreme v1.0` using current module commits
- `Blueberry!Supreme v1.0` a week later, possibly using newer module commits
I want to:
- Lock in which submodule versions were used for a specific customer build (so I can rebuild it in the future).
What I'm currently trying to build // hallucinated as a framework of thought:
```
SupremeBuilder/
├── Banana.Vision/        <- submodule
├── Banana.WPF/           <- submodule
├── Banana.Logging/       <- submodule
├── Banana.License/       <- submodule
├── customers/
│   ├── Jackfruit/
│   │   └── requirements.yml    <- which module versions to use
│   └── Blueberry/
│       ├── requirements.yml
│       └── branding.config     <- optional: name, icons, colors
├── build.ps1             <- build script reading requirements
└── azure-pipelines.yml   <- pipeline entry
```
The requirements.yml locks in which submodules are used for the build and at which commit.
Example `requirements.yml`:
```yaml
app_name: Jackfruit!Supreme
version: 1.0
modules:
  - Banana.Vision @ 1a2b3c4d5e6f7g8h9i0j
  - Banana.WPF @ a1b2c3d4e5f6a7b8c9d0
  - Banana.Logging @ abcdef1234567890abcd
  - Banana.License @ 123456abcdef7890abcd
```
Is this even viable?
I want to stay in Azure DevOps and work with YAML.
Happy for any insight or examples
Similar reddit post by u/mike_testing:
[https://www.reddit.com/r/devops/comments/18eo4g5/how_do_you_handle_cicd_for_multiple_repos_that/](https://www.reddit.com/r/devops/comments/18eo4g5/how_do_you_handle_cicd_for_multiple_repos_that/)
edit: I keep writing "versions" when I mean commits. Updated.
https://redd.it/1lupz73
@r_devops
Notificator Alertmanager GUI
Hello!
I've been using Karma as an alert viewer for Alertmanager for a while.
After so much trouble with its WebUI, I decided to create my own project.
Notificator: a GUI for Alertmanager with sound and notifications on your laptop!
Developed in Go.
Here is the GitHub repo; hope you will like it!
https://github.com/SoulKyu/notificator
https://redd.it/1lusprq
@r_devops
We built this project to increase LLM throughput by 3x. Now it has been adopted by IBM in their LLM serving stack!
Hi guys, our team has built this open-source project, LMCache, to reduce repetitive computation in LLM inference and let systems serve more people (3x more throughput in chat applications); it has been adopted in IBM's open-source LLM inference stack.
In LLM serving, the input is computed into intermediate states called the KV cache, which are used to produce answers. These data are relatively large (~1-2 GB for long contexts) and are often evicted when GPU memory runs short. In those cases, when a user asks a follow-up question, the software must recompute the same KV cache. LMCache is designed to combat that by efficiently offloading and loading KV caches to and from DRAM and disk. This is particularly helpful in multi-round QA settings where context reuse matters but GPU memory is not enough.
Ask us anything!
Github: https://github.com/LMCache/LMCache
https://redd.it/1luumz3
@r_devops
Very simple GitHub Action to detect changed files (with grep support, no dependencies)
I built a minimal GitHub composite action to detect which files have changed in a PR with no external dependencies, just plain Bash! Writing here to share a simple solution to something I commonly bump into.
Use case: trigger steps only when certain files change (e.g. *.py, *.json, etc.), without relying on third-party actions. Inspired by tj-actions/changed-files, but rebuilt from scratch after recent security concerns.
Below you will find the important bits of the action; feel free to use, give feedback, or ignore!
I explain more around it in my blog post
```yaml
runs:
  using: composite
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0
    - id: changed-files
      shell: bash
      run: |
        git fetch origin ${{ github.event.pull_request.base.ref }}
        files=$(git diff --name-only origin/${{ github.event.pull_request.base.ref }} HEAD)
        if [ "${{ inputs.file-grep }}" != "" ]; then
          files=$(echo "$files" | grep -E "${{ inputs.file-grep }}" || true)
        fi
        echo "changed-files<<EOF" >> $GITHUB_OUTPUT
        echo "$files" >> $GITHUB_OUTPUT
        echo "EOF" >> $GITHUB_OUTPUT
```
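The filtering step can be exercised locally, outside of Actions. For example (file names made up):

```shell
#!/usr/bin/env bash
# Same filter as in the action: keep only changed files matching a pattern,
# and don't fail when nothing matches (grep exits 1 on no match, hence || true).
files=$'src/app.py\nREADME.md\nconfig.json'
pattern='\.(py|json)$'

filtered=$(echo "$files" | grep -E "$pattern" || true)
echo "$filtered"
```

The `|| true` matters under `set -e` pipelines: an empty result is a valid outcome, not an error.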
https://redd.it/1luv6fs
@r_devops
PagerDuty Pros/Cons
Our team is considering using PD. How was it for your team? Issues? Alternatives?
https://redd.it/1luzfbu
@r_devops
Why do providers only charge for egress + other networking questions
Hi!
I have a few networking questions, have of course used AI & surfed around, but cannot find concrete answers.
1. Why do cloud providers only charge for egress? Is it because the customer has already paid for the ingress via their ISP? Does the ISP (say, AT&T) pay internet exchanges in the area, or how does this work? Or do they usually just have their own lines everywhere around the country? [US]
2. How much egress do you think you can send out via your ISP before they shut you off for the month? The ISPs I have signed up with have only stated the speed (100 Mbps, for example), but nothing about egress.
https://redd.it/1lv2re5
@r_devops
Does anyone choose devops? I somehow ended up as the only devops person in my team and canβt figure things out most of the timeβ¦ when does it get better?
I feel lost. I am dealing with deploying old codebases. I know my way around AWS for the most part, but I feel like most of my deployments fail. I considered myself a fairly good engineer back when I was doing development work, but now I feel kind of dumb. My bosses seem to be happy with me, but I don't know what I'm doing most of the time; things break all the time, and it takes me forever to fix and figure out these stacks and technologies. Does this ever get better?
https://redd.it/1lv4sfe
@r_devops
Wasps With Bazookas v2 - A Distributed http/https load testing system
# What the Heck is This?
Wasps With Bazookas is a distributed swarm-based load testing tool made up of two parts:
Hive: the central coordinator (think: command center)
Wasps: individual agents that generate HTTP/S traffic from wherever you deploy them
You can install wasps on as many machines as you want, across your LAN or across the world, and aim the swarm at any API or infrastructure you want to stress test.
It's built to help you measure actual performance limits, find real bottlenecks, and uncover high-overhead services in your stack, without the testing tool becoming the bottleneck itself.
# Why I built it
As you can tell, the name is a nod to its inspiration, Bees with Machine Guns.
I spent months debugging performance bottlenecks in production systems. Every time I thought I found the issue, it turned out the load testing tool itself was the bottleneck, not my infrastructure.
This project actually started 6+ years ago as a Node.js wrapper around wrk, but that had limits. I eventually rewrote it entirely in Rust, ditched wrk, and built the load engine natively into the tool for better control and raw speed.
# What Makes This Special?
# The Hive Architecture
HIVE (Command Center)
        ↓
🐝 🐝 🐝 🐝 🐝 🐝 🐝 🐝
Wasp Army Spread Out Across the World (or not)
        ↓
🎯 TARGET SERVER
- Hive: Your command center that coordinates all wasps
- Wasps: Individual load testing agents that do the heavy lifting
- Distributed: Each wasp runs independently, maximizing throughput
- Millions of RPS: Scale to millions of requests per second
- Sub-microsecond Latency: Precise timing measurements
- Real-time Reporting: Get results as they happen
I hope you enjoy WaspsWithBazookas! I frequently create open-source projects to simplify my life and, ideally, help others simplify theirs as well. Right now, the interface is quite basic, and there's plenty of room for improvement. I'm excited to share this project with the community in hopes that others will contribute and help enhance it further. Thanks for checking it out and I truly appreciate your support!
https://redd.it/1lv5r5q
@r_devops
GitHub: https://github.com/Phara0h/WaspsWithBazookas
Release cycles, ci/cd and branching strategies
For all the mid-sized companies out there with monolithic and legacy code: how do you release?
I work at a company where the release cycle is daily releases with a confusing branching strategy (a combination of trunk-based and gitflow). A release will often contain hotfixes alongside ready-to-deploy features, and the release process has been tedious lately.
For now, we mainly have 2 main branches (apart from feature and bugfix branches). Code changes are first merged to dev after unit tests run (and QA tests if necessary); then we deploy the changes to an environment daily, run E2Es, and create a PR to the release branch. If the PR is reviewed and all is well with the tests and the code exceptions, we merge it and deploy to staging, where we run E2Es again and then deploy to prod.
Is there a way to improve this process? I'm curious about the release cycles of big companies.
https://redd.it/1lv6brv
@r_devops
Advice Needed Robust PII Detection Directly in the Browser (WASM / JS)
Hi everyone,
I'm currently building a feature where we execute SQL queries using DuckDB-WASM directly in the user's browser. Before displaying or sending the results, I want to detect any potential PII (Personally Identifiable Information) and warn the user accordingly.
Current Goal:
- Run PII detection entirely on the client-side, without sending data to the server.
- Integrate seamlessly into existing confirmation dialogs to warn users if potential PII is detected.
Issue I'm facing:
My existing codebase is primarily Node.js/TypeScript. I initially attempted integrating Microsoft Presidio (Python library) via Pyodide in-browser, but this approach failed due to Presidioβs native dependencies and reliance on large spaCy models, making it impractical for browser usage.
Given this context (Node.js/TypeScript-based environment), how could I achieve robust, accurate, client-side PII detection directly in the browser?
Thanks in advance for your advice!
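For what it's worth, the regex layer that detectors like Presidio start from is easy to prototype, and the same patterns port directly to JavaScript RegExp for in-browser use. Sketched here in shell with deliberately simplistic, illustrative patterns (real detection also needs context words and checksum validation):

```shell
#!/usr/bin/env bash
# Toy PII scan: print lines containing email- or US-SSN-shaped values.
# Patterns are illustrative only; production detectors are far stricter.
scan_pii() {
  grep -E -n \
    -e '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' \
    -e '[0-9]{3}-[0-9]{2}-[0-9]{4}' \
    || true
}

printf 'id,email\n1,alice@example.com\n2,none\n' | scan_pii
```

For higher accuracy fully client-side, a small NER model via transformers.js or an ONNX runtime in the browser is a plausible next step beyond regexes, at the cost of download size.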
https://redd.it/1lv72bs
@r_devops
DataDog synthetics are the best but way over priced. Made something better and free
After seeing DataDog Synthetics pricing, I built a distributed synthetic monitoring solution that we've been using internally for about a year. It's scalable, performant, and completely free.
Current features:
- Distributed monitoring nodes
- Multi-step browser checks
- API monitoring
- Custom assertions
Coming soon:
- Email notifications (next few days)
- Internal network synthetics
- Additional integrations
- Open sourcing most of the codebase
If you need synthetic monitoring but can't justify enterprise pricing, check it out: https://synthmon.io/
Would love feedback from the community on what features you'd find most useful.
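For context on what "API monitoring" boils down to: fetch, then assert on status and timing. A minimal curl-based sketch (thresholds are arbitrary, and this says nothing about synthmon's internals):

```shell
#!/usr/bin/env bash
# Essence of a synthetic API check: assert on HTTP status and total time.
assert_ok() {
  local status=$1 time_total=$2
  [ "$status" = "200" ] && awk -v t="$time_total" 'BEGIN { exit !(t < 2.0) }'
}

# curl -w exposes the code and timing without parsing headers, e.g.:
#   read -r status t < <(curl -s -o /dev/null -w '%{http_code} %{time_total}\n' "$url")
# Here we feed in sample values instead of hitting the network:
if assert_ok 200 0.134; then echo "check passed"; else echo "check FAILED"; fi
```

Hosted tools add the parts that are genuinely hard to self-build: geographically distributed probes, scheduling, alert routing, and browser-level steps.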
https://redd.it/1lv8xlz
@r_devops
Best way to continue moving into devops from helpdesk?
I've looked over some of the roadmaps, and I know I already have some of the knowledge, so I was curious what I have already done and what I should do to continue down the career path into DevOps. Below are some of the things I am considering:
1) I graduated about a year ago with a degree in computer science. During that time I was exposed to several coding languages, including C, Java, and, most importantly (in my opinion), Python.
2) I have an A+ certification and am almost finished studying for my Network+.
3) As stated in the title, I currently work in a helpdesk position. I have only been there about 4 months, but during that time I have written some basic PowerShell scripts to help automate tasks in Active Directory, and I've written one major script in Python that helps ticket creation go a bit smoother (nothing fancy, it's really just a way to format text, as a lot of what we do is copying and pasting information, but it works).
4) I currently have a homelab. A lot of what I do is based around Docker containers that each run their own web application. I won't pretend I am super familiar with Docker, but it is something I have used a decent amount.
5) I have used SQL, as well as some NoSQL databases such as Neo4j. I've also hosted a SQL database on AWS, but that was a while ago and it would take me a while to do it again.
Is there anything else I could do to further my knowledge? Any other certifications or intermediate career jumps I could make before landing a DevOps position? I'm a little lost, so any help would be appreciated.
https://redd.it/1lvbncd
@r_devops
My AWS Ubuntu instance status checks failed twice
I didn't set up any CloudWatch alarm to restart it. Last week, all of a sudden, my AWS instance's status checks failed.
After restarting the instance it started working again.
Then, when I checked the logs, I found this:
```
amazon-ssm-agent405: ... dial tcp 169.254.169.254:80: connect: network is unreachable
systemd-networkd-wait-online: Timeout occurred while waiting for network connectivity
```
It was working fine after that. Then last night the same instance failed again, this time with these errors:
```
Jul 8 15:36:25 systemd-networkd352: ens5: Could not set DHCPv4 address: Connection timed out
Jul 8 15:36:25 systemd-networkd352: ens5: Failed
```
This is the command I used to get the logs:
```
grep -iE "oom|panic|killed process|segfault|unreachable|network|link down|i/o error|xfs|ext4|nvme" /var/log/syslog | tail -n 100
```
Why is this happening?
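Both failures point at the same thing: the instance lost network (the IMDS address 169.254.169.254 unreachable, then DHCPv4 timing out on ens5), which is exactly what makes EC2 status checks fail. A minimal, self-contained sketch of the triage the grep above performs, run here against a made-up sample log at a hypothetical path `/tmp/sample-syslog` so it can execute anywhere:

```shell
#!/bin/sh
# Sample syslog excerpt standing in for /var/log/syslog (contents invented
# to mirror the errors in the post, plus one unrelated line).
cat > /tmp/sample-syslog <<'EOF'
Jul  8 15:36:25 host systemd-networkd[352]: ens5: Could not set DHCPv4 address: Connection timed out
Jul  8 15:36:25 host systemd-networkd[352]: ens5: Failed
Jul  8 15:36:26 host amazon-ssm-agent[405]: dial tcp 169.254.169.254:80: connect: network is unreachable
Jul  8 15:36:27 host sshd[999]: Accepted publickey for ubuntu
EOF

# Same pattern as in the post: surface OOM kills, kernel panics,
# filesystem/NVMe errors, and network failures in one pass. The unrelated
# sshd line is filtered out; the three network lines survive.
grep -iE "oom|panic|killed process|segfault|unreachable|network|link down|i/o error|xfs|ext4|nvme" \
  /tmp/sample-syslog | tail -n 100
```

On the real instance, the fact that only network lines (and no oom/panic/disk lines) show up suggests checking `journalctl -u systemd-networkd` around those timestamps and the instance's ENI/subnet events, rather than memory or storage.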
https://redd.it/1lvbqq3
@r_devops
Do you prefer fixed-cost cloud services or a hybrid pay-as-you-grow model?
Hey everyone,
I'm curious how people feel about pricing models for cloud services.
For context:
Some platforms offer a fixed-cost, SaaS-like approach. You pay a predictable monthly fee that covers a set amount of resources (CPU, RAM, bandwidth, storage, etc.), and you don't have to think much about scaling until you hit hard limits.
Others may offer a hybrid model. You pay a base fee for a certain resource allocation, but you can add more resources on demand (extra CPU, RAM, storage, bandwidth, etc.), and pay for that usage incrementally.
My questions:
As a developer or business owner, which model do you prefer and why?
Any horror stories or success stories with either approach?
I'd love to hear real-world experiences, whether you're running personal projects, SaaS apps, or large-scale deployments.
Thanks in advance for your thoughts!
https://redd.it/1lvdtd1
@r_devops
What does the cloud infrastructure costs at every stage of startup look like?
So, I am writing a blog about what happens to infrastructure costs as startups scale up. This isn't the exact topic, as I'm still researching and exploring, but I need your help to understand what infrastructure costs look like for a startup at every stage: early, growth, and mature. It would be great to get a detailed explanation of everything that happens at each one.
Also, if you know of any research on this topic, please share it with me.
And if anyone is willing, help me structure this blog properly and suggest other sections that should definitely be there.
https://redd.it/1lvf23u
@r_devops
Has anyone taken this AI-readiness infra quiz?
Found this 10-question quiz that gives you a report on how AI-ready your infrastructure is.
Questionnaire link: https://lnk.ink/bKmPl
It touches on things like developer self-service and platform engineering, so it felt like it's leaning a bit in that direction. Curious if anyone else took it and what you thought of your results. Are these kinds of frameworks useful, or just more trend-chasing?
https://redd.it/1lvhaea
@r_devops
Any tools to automatically diagram cloud infra?
Are there any tools that will automatically scan AWS, GCP, and Azure and diagram what is deployed?
So far I have found CloudCraft from Datadog, but it only supports AWS, and its automatic diagramming is still in beta (AFAIK).
I am considering building something custom for this, but judging from the lack of tools that support multi-cloud (or that offer anything beyond manual diagramming), I wonder if I am missing some technical limitation that prevents such tools from being possible.
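There is no hard technical limitation: every provider has inventory APIs/CLIs (`aws ec2 describe-instances`, `gcloud compute instances list`, `az resource list`), so the custom route is mostly normalizing that inventory and emitting a diagram format like Graphviz DOT. A rough, provider-agnostic sketch of the second half, using invented inventory files at hypothetical `/tmp` paths in place of real API output:

```shell
#!/bin/sh
# Hypothetical normalized inventory: id,label,provider per line.
# In practice this would be generated from the cloud CLIs/SDKs above.
cat > /tmp/inventory.csv <<'EOF'
aws-alb,ALB,aws
aws-ec2,EC2 web,aws
gcp-sql,Cloud SQL,gcp
EOF
# Hypothetical connections between resources: src,dst per line.
cat > /tmp/edges.csv <<'EOF'
aws-alb,aws-ec2
aws-ec2,gcp-sql
EOF

# Emit a Graphviz DOT graph: one node per resource (labelled with its
# provider), one arrow per connection.
{
  echo 'digraph infra {'
  echo '  rankdir=LR;'
  awk -F',' '{ printf "  \"%s\" [label=\"%s (%s)\"];\n", $1, $2, $3 }' /tmp/inventory.csv
  awk -F',' '{ printf "  \"%s\" -> \"%s\";\n", $1, $2 }' /tmp/edges.csv
  echo '}'
} > /tmp/infra.dot
cat /tmp/infra.dot
```

Render with `dot -Tpng /tmp/infra.dot -o infra.png`. The hard part of the multi-cloud tools isn't drawing; it's keeping the inventory and the cross-resource relationships accurate across three very different APIs, which may be why most products stop at one provider.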
https://redd.it/1lvjpwo
@r_devops
Terraform at Scale: Smart Practices That Save You Headaches Later
https://medium.com/@DynamoDevOps/terraform-at-scale-smart-practices-that-save-you-headaches-later-part-1-7054a11e99db
https://redd.it/1lvkwa0
@r_devops