Is Judge0 the right way to run user code for a hobby site?
I'm making a website where I need to let untrusted user code hit public APIs during execution while blocking everything else (internal IPs, metadata endpoints, crypto mining pools, and so on). Looking for proven patterns and tools.
The best open-source option I've found is Judge0, so I was wondering: have any of you used it, or anything similar?
I'd really appreciate pointers to blog posts, GitHub examples, or your own configs. I'm trying to ship publicly soonish without waking up to a surprise AWS bill, or to a CVE headline because someone mined crypto on my servers.
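The allow/deny behavior described above can be sketched at the application layer with the standard library alone. This is a minimal sketch, not Judge0's mechanism; the ranges and function names are my own, and real enforcement belongs at the network layer (iptables, network namespaces, or an egress proxy), since in-process checks can be bypassed by DNS rebinding unless the check happens at connect time against the resolved IP:

```python
import ipaddress
import socket

# Hypothetical deny-list: private ranges plus the cloud metadata endpoint.
BLOCKED_NETWORKS = [
    ipaddress.ip_network(n)
    for n in (
        "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",  # RFC 1918 private
        "127.0.0.0/8", "169.254.0.0/16",  # loopback, link-local (incl. 169.254.169.254 metadata)
        "0.0.0.0/8", "100.64.0.0/10",     # "this" network, CGNAT
    )
]

def is_blocked(host: str) -> bool:
    """Resolve host and return True if any resolved address is in a blocked range."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # fail closed if resolution fails
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if any(addr in net for net in BLOCKED_NETWORKS):
            return True
    return False
```

A mining-pool blocklist would be a separate hostname deny-list on top of this; IP-range checks alone won't catch those.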
https://redd.it/1lsxdkf
@r_devops
Is learning DevOps a good idea for data science and LLM engineering?
I was first thinking of learning MLOps, but if we're going to learn ops, why not learn it all? I think a lot of LLM and data science projects would need some type of deployment and maintenance; that's why I'm thinking about it.
https://redd.it/1lt0jjq
@r_devops
Maybe humans don't need to write documentation for humans anymore?
With tools like Devin wiki starting to generate human-readable documentation from code, shouldn't we shift our focus? Instead of humans writing docs for other humans, we could have AI generate those on-demand when needed.
What humans should focus on is creating documentation for AI - the stuff that can't be extracted from GitHub repos alone. Things like design rationale, decision-making processes, considerations that were explored, task contexts, etc. We should be building environments where humans can effectively pass this kind of contextual knowledge to AI systems.
Thoughts?
https://redd.it/1lt2g73
@r_devops
Self Hosted Artifactory Alternative for Large Repositories?
Hi,
We recently upgraded our self-hosted Artifactory instance and it has become woefully unstable. Support has been a massive miss for us: of the 12 people assigned to our case over the course of the month, only one has been helpful. Likewise, during outages JFrog support was not able to fulfill our live support requests (we pay for the highest tier of support). We got strung along with "a support engineer will be with you in about 30 minutes" until we figured the problem out ourselves. Additionally, once we were in a support call, the support rep would try everything they could to "move the conversation offline" and have us send logs, enable secret logging, increase resources, send more logs, and continue in this cycle. Our instance is so over-provisioned at this point that it is taking up egregious amounts of compute/memory that is not being utilized. This also seemingly has no effect on our stability.
Our artifact registry is large, around 40 TB+ of data. Likewise, due to regulatory constraints, some of the data must be kept on-prem. Are there any alternatives that are not JFrog or Sonatype? We need a registry that is type-agnostic (put a .zip file in a Maven repo, etc.) and that can work efficiently at that size. It also must support remote registries.
https://redd.it/1lt295z
@r_devops
GitOps with ArgoCD Introduction
Hey, I wrote an introduction to GitOps with Argo CD. Take a look if you're interested. What is your deployment process? Are you writing CI/CD pipelines with GitHub Actions or something similar?
If you have a medium account:
https://medium.com/@erwinschleier/gitops-introduction-with-argo-cd-51f81302e013
Personal blog:
https://erwin-schleier.com/2025/07/04/gitops-introduction-with-argo-cd/
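For readers skimming before clicking through: the core object in Argo CD is an `Application` that points a cluster destination at a Git path, and the controller keeps the two in sync. A minimal sketch (the repo URL, names, and path are placeholders, not from the article):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/my-app.git  # placeholder repo
    targetRevision: main
    path: deploy/
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```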
https://redd.it/1lt053g
@r_devops
Unlock the Truth Behind Kubernetes Production Topologies
When it comes to production-ready Kubernetes, most blogs offer superficial guidance. But this 40+ page guide dives into what actually matters: cloud provider behavior under failure, real-world availability tradeoffs, and the architectural consequences of choosing zonal vs. regional vs. multi-cluster setups.
Whether you're using EKS, GKE, AKS, or self-hosted, you'll walk away with clarity on:
Which control plane models are truly fault-tolerant
Why your node pool topology is silently sabotaging uptime
How pricing tiers map (or don’t) to SLA guarantees
What “high availability” really means across AWS, GCP, and Azure
How to scale safely without overengineering or overspending
This is not a beginner’s overview. It’s a decision framework for platform engineers, SREs, and cloud architects who want to build resilient, production-grade infrastructure and stop relying on vendor defaults.
👉 If your team is running Kubernetes in production or planning to, this is essential reading.
# Table of Contents
Introduction: Choosing the Right Topology for Production
Control Plane Architectures
Amazon EKS
Google GKE
Azure AKS
Worker Node Deployment Models
AWS EKS: Node Groups and Multi-AZ Strategy
Google GKE: Zonal, Multi-Zonal and Regional Node Pools
Azure AKS: Node Pool Zoning and Placement Flexibility
Summary: Comparing Node Deployment Models Across Providers
Designing for High Availability Within a Region
AWS EKS
Google GKE
Azure AKS
Summary: Regional HA Comparison
Upgrade and Maintenance Strategy
AWS EKS: Upgrade Mechanics and Control
Google GKE: Automated Channels and Controlled Upgrades
Azure AKS: Scheduled Windows and Tier-Aware Resilience
Summary: Upgrade Strategy Comparison
Multi-Region Topologies (and Limitations)
AWS EKS: Multi-Cluster Resilience via Global Services
Google GKE: Regional Isolation and Federation via Anthos
Azure AKS: Cross-Region Resilience Through Paired Clusters
Summary: Multi-Region Kubernetes Strategy Comparison
Availability, Fault Tolerance, and SLA Considerations
AWS EKS: SLA Commitments and Fault Domain Strategies
Google GKE: Tiered SLAs and Built-In Regional Redundancy
Azure AKS: Availability by Tier and Zone Awareness
Summary: Platform SLAs and Real-World Resilience
Managed vs User-Configured Topology Options
AWS EKS: Operations Freedom with Opt-In Management
Google GKE: Operational Modes from Manual to Fully Managed
Azure AKS: Gradual Abstraction and Tiered Node Management
Summary: Choosing the Right Topology Ownership Model
For Self-Hosted Kubernetes – Provisioning Tools and Topology Models
kubeadm: The Foundation for Custom Clusters
kOps: Opinionated HA Clusters for AWS and Beyond
Kubespray: Flexible, Ansible-Based Multi-Environment Provisioning
Cluster API: Declarative Lifecycle Management Across Environments
Summary: Choosing a Self-Hosted Tool Based on Environment and Control
Free Copy: https://www.patreon.com/posts/chapter-1-guide-131966208
Paid Guide: https://www.patreon.com/posts/unlock-truth-133516014
https://redd.it/1lt61ec
@r_devops
Do you guys use pure C anywhere?
Wondering if you guys use C anywhere, or just Bash, Python, and Go. Or is C only for the Systems Performance and Linux books?
https://redd.it/1lt9w5g
@r_devops
Resume Review - Recent Grad with an MSCS
As the title goes, I'm a recent Master's graduate with an MS in CS. I haven't had any luck getting interviews; the last one came 3 months ago, thanks to a recruiter I'd established a connection with. I've applied to at least 500-600 jobs since and haven't had a single interview. I'd love some extremely honest, brutal feedback.
Here's my resume - https://at-d.tiiny.site
https://redd.it/1ltcaen
@r_devops
I got slammed with a $3,200 AWS bill because of a misconfigured Lambda, how are you all catching these before they hit?
I was building a simple ingestion pipeline with Lambda + S3.
Somewhere along the way, I accidentally created an event loop: each Lambda invocation wrote to S3, which triggered the Lambda again. It ran for 3 days.
No alerts. No thresholds. Just a $3,200 surprise when I opened the billing dashboard.
AWS support forgave some of it, but I realized we had **zero guardrails** to catch this kind of thing early.
My question to the community:
* How do *you* monitor for unexpected infra costs?
* Do you treat cost anomalies like real incidents?
* Is this an SRE/DevOps responsibility or something you push to engineers or managers?
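A guardrail for the specific loop described above is to make the function ignore its own writes. A minimal sketch (the prefix names and handler shape are hypothetical; S3 event notification prefix/suffix filters on the trigger itself are the stronger first line of defense, and billing alarms catch the cost side):

```python
INPUT_PREFIX = "incoming/"     # hypothetical prefix the trigger should match
OUTPUT_PREFIX = "processed/"   # hypothetical prefix this function writes to

def should_process(key: str) -> bool:
    """Only process keys in the input prefix, never our own output."""
    return key.startswith(INPUT_PREFIX) and not key.startswith(OUTPUT_PREFIX)

def handler(event, context=None):
    handled = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if not should_process(key):
            continue  # breaks the S3 -> Lambda -> S3 feedback loop
        # real work would transform the object and write under OUTPUT_PREFIX
        handled.append(key)
    return handled
```

Writing outputs to a different bucket entirely removes the loop by construction, at the cost of one more bucket to manage.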
https://redd.it/1ltdt4q
@r_devops
What issues do you usually have with Splunk or other alerting platforms?
Yo, software developer here. I wanted to know what kinds of issues people have with Splunk. Are there any pain points you're facing? One issue my team has is not getting alerts on time because our internal Splunk team limits alerts to a 15-minute delay. It doesn't seem like much, but our production support team flips out every time it happens.
https://redd.it/1lteuuf
@r_devops
Azure DevOps Checkbox Custom Field
I feel I am losing my nut...
I want to add Custom Fields to my Bug Tickets & User Story tickets, but I want them to be checkboxes. The only option I have found is this one:
https://stackoverflow.com/questions/74994552/azure-devops-work-item-custom-field-as-checkbox
But it has some really odd behaviour beyond simple checkboxes.
The reason I do not want toggles is because I do not want an "Off" or "False" state as a visible option, I want users to update the checkbox to be checked if the option is applicable.
Surely there is a way to have a simple checkbox custom field on a work item type?
I am sure this has likely been asked a billion times, but my googling skills are letting me down, as I either get the same responses, or irrelevant responses.
Cheers
https://redd.it/1ltdg2p
@r_devops
Advice for CI/CD with Relational DBs
Hey there folks!
Most of the DBs I've worked with in the past have been either non-relational or laughably small PG DBs. I'm starting on a project that's going to rely on a much heavier PG DB in AWS. I don't think my current approaches are really viable for a big-boy relational setup.
So if any of you could shed some light on how you approach handling your DB's I'd very much appreciate it.
Currently I use Prisma, which works but I don't think is optimal. I'd like to move away from ORMs. I've been eyeing Liquibase.
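For context on what tools like Liquibase or Flyway formalize, the core mechanic is a forward-only, versioned migration runner: each migration runs exactly once, in order, and is recorded in a tracking table. A minimal sketch of that idea (migration IDs and table names are illustrative, shown against SQLite only for brevity):

```python
import sqlite3

# (id, sql) pairs; ids are illustrative and must never be reordered or edited
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"),
    ("002_add_created_at", "ALTER TABLE users ADD COLUMN created_at TEXT"),
]

def migrate(conn):
    """Apply any not-yet-applied migrations, in order; safe to re-run."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (id TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT id FROM schema_migrations")}
    for mig_id, sql in MIGRATIONS:
        if mig_id in applied:
            continue  # already recorded, skip
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (id) VALUES (?)", (mig_id,))
    conn.commit()
```

The real tools add the hard parts on top of this: checksums against edited migrations, locking so two CI runners can't migrate concurrently, and (in Liquibase's case) optional rollback definitions.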
https://redd.it/1ltcylo
@r_devops
Separate pipeline for application configuration? Or all in IaC?
I'm working in the AWS world, and using CloudFormation + SAM Templates, and have API endpoints, Lambda functions, S3 Buckets and configuration all in the one big template.
Initially I was working with a configuration file in DEV, and now I want to move these parameters over to Parameter Store in AWS. But the thought of adding these, plus tagging (required in our company), for about 30 parameters makes me feel like I'm catastrophically flooding the template with my configuration.
The configuration may change semi regularly, outside of the code or any other infra, and would be pushed through the pipeline to release.
Is anyone out there running a configuration pipeline to release config changes? On one side it feels like overkill, on the other side it makes sense to me.
What are your opinions, brains trust?
https://redd.it/1ltjqmz
@r_devops
Canary Deployment Strategy with Third-Party Webhooks
We're setting up canary deployments in our multi-tenant architecture and looking for advice.
Our current understanding is that we deploy a v2 of our code and route some portion of traffic to it. Since we're multi-tenant, our initial plan was to route entire tenants' traffic to the v2 deployment.
However, we have a challenge: third-party tools send webhooks to our Azure function apps, which then create jobs in Redis that are processed by our workers. Since we can't keep changing the webhook endpoints at the third-party services, this creates a problem for our canary strategy.
Our architecture looks like:
* Third-party services → Webhooks → Azure Function Apps → Redis jobs → Worker processing
How do you handle canary deployments when you have external webhook dependencies? Any strategies for ensuring both v1 and v2 can properly process these incoming webhook events?
Thanks for any insights or experiences you can share!
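One common way to keep a single, stable webhook endpoint while still canarying by tenant is to route internally: the function app that receives the webhook hashes the tenant to a deterministic bucket and enqueues the job on a v1 or v2 queue. A sketch (the percentage, queue names, and function names are all hypothetical, not from the post):

```python
import hashlib

CANARY_PERCENT = 10  # hypothetical fraction of tenants routed to v2

def deployment_for_tenant(tenant_id: str) -> str:
    """Stable tenant -> deployment mapping; the same tenant always lands on
    the same version, so third-party webhook URLs never need to change."""
    bucket = int(hashlib.sha256(tenant_id.encode()).hexdigest(), 16) % 100
    return "v2" if bucket < CANARY_PERCENT else "v1"

def queue_name(tenant_id: str) -> str:
    # v1 workers consume jobs:v1, v2 workers consume jobs:v2
    return f"jobs:{deployment_for_tenant(tenant_id)}"
```

Rolling forward is then just raising `CANARY_PERCENT`; because the hash is stable, already-canaried tenants stay on v2 as the fraction grows.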
https://redd.it/1ltmjre
@r_devops
Can a Lambda inside a VPC get internet access without a NAT gateway?
Guys, I have a DevOps question.
Can a Lambda inside a VPC get internet access without a NAT gateway?
Note: I need to connect to my private RDS, I can't make it public, and I can't use a NAT instance either.
https://redd.it/1ltpqvu
@r_devops
Struggling to put two instances in target_id for an ALB module?
Do I need to create a separate aws_lb_target_group_attachment resource block and associate it with the ALB module?
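If this is Terraform's `aws_lb_target_group_attachment`, the usual pattern is one attachment resource fanned out over the instance IDs with `for_each`, rather than a separate block per instance. A sketch, assuming the ALB module exposes the target group ARN as an output named `target_group_arn` (the variable, output, and instance IDs here are placeholders):

```hcl
variable "instance_ids" {
  type    = list(string)
  default = ["i-0abc1234", "i-0def5678"] # placeholder IDs
}

resource "aws_lb_target_group_attachment" "web" {
  for_each         = toset(var.instance_ids)
  target_group_arn = module.alb.target_group_arn # assumed module output
  target_id        = each.value
  port             = 80
}
```

Check your module's outputs for the actual target group ARN attribute name; some community ALB modules expose a map or list of ARNs instead of a single value.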
https://redd.it/1ltssey
@r_devops
Made a huge mistake that cost my company a LOT – What’s your biggest DevOps fuckup?
Hey all,
Recently, we did a huge load test at my company. We wrote a script to clean up all the resources we tagged at the end of the test. We ran the test on a Thursday and went home, thinking we had nailed it.
Come Sunday, we realized the script failed almost immediately, and none of the resources were deleted. We ended up burning $20,000 in just three days.
Honestly, my first instinct was to see if I could shift the blame somehow or make it ambiguous, but it was quite obviously my fuckup, so I had to own up to it. I thought it'd be cleansing to hear about other DevOps engineers' biggest fuckups that cost their companies money. How much did it cost? Did you get away with it?
https://redd.it/1ltuz99
@r_devops
Is there some way to get $10 in AWS credits as a student?
Hey everyone!
I'm a student currently learning AWS and working on DevOps projects like Jenkins pipelines, Elastic Load Balancers, and EKS. I've already used up my AWS Free Tier, and I just need around $10 in credits to test my deployments for an hour or two and take screenshots for my resume/blog.
I’ve tried AWS Educate, but unfortunately it didn’t work out in my case. I also applied twice for the AWS Community Builders program, but got rejected both times.
Is there any other way (like student programs, sponsorships, or community grants) to receive a small amount of credits to continue building and learning?
I'd be really grateful for any suggestions — even a little support would go a long way in helping me continue this journey.
Thanks so much in advance! 🙏
https://redd.it/1ltuqjm
@r_devops
Set up real-time logging for AWS ECS using FireLens and Grafana Loki
I recently set up a logging pipeline for ECS Fargate using FireLens (Fluent Bit) and Grafana Loki. It's fully serverless, uses S3 as the backend, and connects to Grafana Cloud for visualisation.
I’ve documented the full setup, including task definitions, IAM roles, and Loki config, plus a demo app to generate logs.
Full details here if anyone’s interested: https://medium.com/@prateekjain.dev/logging-aws-ecs-workloads-with-grafana-loki-and-firelens-2a02d760f041?sk=cf291691186255071cf127d33f637446
https://redd.it/1ltxvni
@r_devops
Requesting advice for a personal project - scaling to DevOps
TL;DR - I've built something on my own server and could use a vector check on whether my dev roadmap makes sense. Is this a reasonable order to do things in, and is there anything I'm forgetting or don't know about?
---------------------------------
Hey all,
I've never done anything in a commercial environment, but I do know there is difference between what's hacked together at home and what good industry code/practices should look like. In that vein, I'm going along the best I can, teaching myself and trying to design a personal project of mine according to industry best practices as I interpret what I find via the web and other github projects.
Currently, in my own time, I've set up an Ubuntu server on an old laptop (with SSH configured for remote work from anywhere) and have designed a web app using Python, Flask, nginx, Gunicorn, and PostgreSQL (with basic HTML/CSS). I'm using GitLab for version control (updating via branches and, when it's good, merging to master with a local CI/CD runner already configured and working), with weekly DB backups to an S3 bucket, and it's secured/exposed to the internet through my personal router with DuckDNS. I've containerized everything, and it all comes up and down seamlessly with docker-compose.
The advice I could really use is if everything that follows seems like a cohesive roadmap of things to implement/develop:
Currently my database is empty, but the real thing I want to build next will involve populating it with data from API calls to various other websites/servers based on user inputs and automated scraping.
Currently, it only operates off HTTP and not HTTPS yet because my understanding is I can't associate an HTTPS certificate with my personal server since I go through my router IP. I do already have a website URL registered with Cloudflare, and I'll put it there (with a valid cert) after I finish a little more of my dev roadmap.
Next I want to transition to a Dev/Test/Prod pipeline using GitLab. Obviously the environment I've been working in has been exclusively Dev, but the goal is that a push to Dev triggers promoting the code to a Test environment for the following testing:
Unit, Integration, Regression, Acceptance, Performance, Security, End-to-End, and Smoke.
Is there anything I'm forgetting?
My understanding is that a good choice for this is pytest, with results displayed via Allure.
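If it helps, a pytest unit test is just a plain function whose name starts with `test_`; a trivial sketch with a stand-in function rather than real app code:

```python
# test_app.py -- run with `pytest`; add_item is a stand-in for app logic.

def add_item(items: list, item: str) -> list:
    """Return a new list with the item appended (no mutation)."""
    return items + [item]

def test_add_item_appends():
    assert add_item([], "a") == ["a"]

def test_add_item_does_not_mutate_input():
    items = ["a"]
    add_item(items, "b")
    assert items == ["a"]
```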
Should I also set up a Staging env for DAST before prod?
If everything passes TestEnv, it then either goes to StagingEnv for the next set of tests, or is primed for manual release to ProdEnv.
In terms of best practices, should I use .gitlab-ci.yml to automatically spin up a new development container whenever a new branch is created?
My understanding is this is how dev is done with teams.
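Spinning up a per-branch environment is what GitLab calls a review app; a sketch of the jobs involved (the `docker compose` deploy script is a placeholder, but `$CI_COMMIT_REF_SLUG` and the `environment:` keyword are real GitLab CI features):

```yaml
deploy_review:
  stage: deploy
  script:
    - docker compose -p "review-$CI_COMMIT_REF_SLUG" up -d --build
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review
  rules:
    - if: '$CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != "master"'

stop_review:
  stage: deploy
  script:
    - docker compose -p "review-$CI_COMMIT_REF_SLUG" down
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  when: manual
```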
Also, I'm guessing there's "always" (at least) one DevEnv running for development, and only one ProdEnv running, but should a TestEnv always be running too, or does it only get spun up when there's a push?
And since everything is (currently) running off my personal server, should I just separate each env via individual .env.dev, .env.test, and .env.prod files that swap out the ports/secrets/vars/etc. used for each?
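One lightweight pattern (a sketch; the `APP_ENV` variable and the `.env.<name>` naming are assumptions, not a standard) is a tiny loader that picks the file by environment name:

```python
from pathlib import Path

# Hypothetical helper: map an environment name to its .env file and parse it.
ENVIRONMENTS = ("dev", "test", "prod")

def env_file_for(app_env: str) -> str:
    """Return the .env filename for a known environment, e.g. '.env.dev'."""
    if app_env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {app_env!r}")
    return f".env.{app_env}"

def load_env(app_env: str, base: Path = Path(".")) -> dict:
    """Parse KEY=VALUE lines (skipping blanks and # comments) into a dict."""
    config = {}
    for line in (base / env_file_for(app_env)).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config
```

docker compose can also take the file directly, e.g. `docker compose --env-file .env.test up -d`, which avoids custom parsing entirely.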
When I do move to the cloud (AWS), the plan is Terraform (which I'm already somewhat familiar with) to spin up the resources (via gitlab-ci) to load the containers onto. I'm guessing the ports can stay the same, and environment separation is then done via the IP addresses advertised at creation, not ports anymore.
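As a sketch of the Terraform side (everything here is illustrative: the AMI ID, instance type, and tag scheme are placeholders, not a working config):

```hcl
variable "environment" {
  type = string # e.g. "dev", "test", or "prod"
}

# Placeholder instance -- AMI and type are illustrative only.
resource "aws_instance" "app" {
  ami           = "ami-xxxxxxxx"
  instance_type = "t3.micro"

  tags = {
    Environment = var.environment
  }
}

output "app_ip" {
  value = aws_instance.app.public_ip # the IP "advertised during creation"
}
```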
I'm aware there's a whole other batch of skills to learn here around roles/permissions and AWS services (alerts, CloudWatch, CloudTrail, cost monitoring, etc.), and maybe some AWS certs (Solutions Architect > DevOps Pro).
I also plan on migrating everything to Kubernetes, managing spin-up and deployment via Helm charts in the cloud, and getting into load
balancing, with a canary instance and blue/green rolling deployments. I've done some preliminary messing around with minikube, but will probably also use this time to dive into the CKA.
I know this is a lot of time and work ahead of me, but I wanted to ask those of you with real skin in the game whether this looks like a solid game plan, or if you have any advice/recommendations.
https://redd.it/1ltz902
@r_devops