CI/CD pipeline testing with file uploads - how do you generate consistent test data?
Running into an annoying issue with our CI/CD pipeline. We have microservices that handle file processing (image resizing, video transcoding, document parsing), and our tests keep failing inconsistently because of test data problems.
Current setup:
* Tests run in Docker containers
* Need various file types/sizes for boundary testing
* Some tests need exactly 10MB files, others need 100MB+
* Can't commit large binary files to repo (obvs)
What we've tried:
* wget random files from internet (unreliable, different sizes)
* Storing test files in S3 (works but adds external dependency)
* dd commands (creates files but wrong headers/formats)
The S3 approach works but feels heavy for simple unit tests. Plus some environments don't have internet access.
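One dependency-free middle ground (a sketch of the general technique, not FileMock itself): write a structurally valid file header in-process, then zero-pad to the exact target size. The example below builds a real 1x1 PNG and pads it; parsers that only check the signature and header chunks accept it, though strict decoders may complain about the trailing padding. Filenames and sizes are illustrative:

```python
import struct
import zlib

def make_png(path, total_size):
    """Write a structurally valid 1x1 PNG, zero-padded to an exact byte size."""
    signature = b"\x89PNG\r\n\x1a\n"

    def chunk(ctype, data):
        # PNG chunk layout: length, type, data, CRC32 over type + data.
        return (struct.pack(">I", len(data)) + ctype + data
                + struct.pack(">I", zlib.crc32(ctype + data)))

    # IHDR: 1x1, 8-bit grayscale, default compression/filter, no interlace.
    ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    # IDAT: one scanline = filter byte + one pixel byte, zlib-compressed.
    idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))
    iend = chunk(b"IEND", b"")

    body = signature + ihdr + idat + iend
    if total_size < len(body):
        raise ValueError("target size too small for a valid PNG")
    with open(path, "wb") as f:
        f.write(body)
        f.write(b"\x00" * (total_size - len(body)))  # pad to exact size
```

The same trick works for any format whose checks stop at magic bytes: a boundary test that needs "exactly 10MB" usually cares about the size and the signature, not the pixels, and this runs offline inside the container with no repo bloat.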
Built a simple solution that generates files in-browser with exact specs:
https://filemock.com?utm_source=reddit&utm_medium=social&utm_campaign=devops
Now thinking about integrating it into our pipeline with headless Chrome to generate test files on-demand. Anyone done something similar?
How do you handle test file generation in your pipelines? Looking for cleaner approaches that don't require external dependencies or huge repo sizes.
https://redd.it/1md8u5p
@r_devops
"Have you ever done any contributions to open source projects?"
No. I got a family and kids. Welp. Failed that interview.
Anybody got any open source projects I can add two or three features to, so I can tick that off my bucket list and have something to talk about in interviews?
These things feel like flippin' marathons, man! So many stages, so many irrelevant questions.
https://redd.it/1mdbkh1
@r_devops
What enterprise firewall would you go with?
We’re evaluating enterprise firewalls and I’d love to hear the community’s current opinions.
If you were selecting a next gen firewall for a medium to large organization today, which vendor would you go with and why?
Some key factors we’re weighing:
* Security capabilities: threat prevention, IDS/IPS, sandboxing, SSL inspection
* Performance and scalability
* Ease of management / policy deployment
* Integration with existing infrastructure (SIEM, EDR, etc.)
* Licensing and support quality
* Cloud/hybrid environment compatibility
Vendors on our radar include Palo Alto, Fortinet, Cisco (FTD), Check Point, and maybe Juniper or Sophos.
Would love to hear what’s working or not in real world environments. Bonus points if you share insights on cost effectiveness and vendor support. All help appreciated!
https://redd.it/1mdd3ex
@r_devops
SOC2 auditor wants us to log literally everything
Our compliance team just handed down new requirements: log every single API call, database query, file access, user action, etc. for 7 years.
CloudTrail bill is going to be astronomical. S3 storage costs are going to be wild. And they want real-time alerting on "suspicious activity" which apparently means everything.
Pretty sure our logging costs are going to exceed our actual compute costs at this point. Anyone dealt with ridiculous compliance requirements? How do you push back without getting the "you don't care about security" lecture?
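One pushback that tends to keep auditors happy is retention tiering: keep logs hot and queryable for a few months, then move them to cold storage until the 7-year mark instead of paying S3 Standard rates the whole time. A sketch of the S3 lifecycle rule in the shape boto3's `put_bucket_lifecycle_configuration` expects; the prefix and day counts here are assumptions, not the auditor's numbers:

```python
# Illustrative S3 lifecycle rule for audit logs: S3 Standard for 90 days,
# Glacier until one year, Deep Archive until the 7-year mark, then delete.
RETENTION_YEARS = 7

lifecycle = {
    "Rules": [
        {
            "ID": "audit-log-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "cloudtrail/"},
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
            "Expiration": {"Days": RETENTION_YEARS * 365},
        }
    ]
}

# Applying it would look like this (needs AWS credentials, so not run here;
# the bucket name is hypothetical):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="audit-logs",
#     LifecycleConfiguration=lifecycle,
# )
```

Framed this way it's not "we don't care about security" — the logs are all retained for 7 years, you're just not paying hot-storage prices for data nobody queries after quarter-end.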
https://redd.it/1mdhs0a
@r_devops
Spectral: The API Linting Tool You Need in Your Workflow (Blog + Free GitHub Demo)
Hey 👋 folks. I don’t want to be another guy just shamelessly plugging content, but I genuinely think this is an awesome tool. If you’re not aware of it, not using it yet, or just want to learn something new that’s free, I figured it’s worth a share.
I’ve written up why it’s useful, plus a run-through of how it works in practice. (I even linked the Spectral config Adidas open-sourced, which is pretty cool to draw inspiration from for your own API styling governance.)
https://rios.engineer/spectral-the-api-linting-tool-you-need-in-your-workflow-🔎/
But if reading isn’t your thing, you can just check out the GitHub repo demo I’ve set up instead: https://github.com/riosengineer/spectral-demo
Anyone else using Spectral in anger? I love tools like this.
https://redd.it/1mdktb2
@r_devops
What is something you'd like to see built?
I'm a bored and experienced developer with a lot of free time on my hands.
Is there anything you'd want to see built, or something you wished existed?
Edit: idc about money. Just wanna spend my time productively by helping out wherever I can
https://redd.it/1mdmhy4
@r_devops
Prototyping a tool to simplify deploying to cloud and deliver apps globally with high availability
TL;DR: I'm prototyping a tool that simplifies provisioning and managing cloud compute nodes (called "Scales"), letting you take local applications to the cloud quickly without dealing with IPs, VPNs, SSH keys, or load balancers. It bridges the gap between local development and production.
I'm looking for feedback from developers and DevOps engineers, and I'd love to have a discussion about this.
Check out a demo: https://youtu.be/XbIAI5SzG3A
# The Problem I'm Trying to Solve
Deploying to and managing cloud VMs on platforms like DigitalOcean and EC2 is pretty complex, with many challenges:
* Managing IPs, SSH keys, VPNs, and firewalls.
* Vastly different development and production environments.
* Global and highly available ingress for application deployments.
# What I'm Trying to Make
* Provision cloud compute nodes in the regions closest to your users.
* Connect to nodes for development and management without needing VPNs, public IPs, or open SSH ports.
* Deploy apps to nodes from **localhost** quickly, whether it’s a web app, API, or self-hosted tool.
* Expose apps on nodes with an out-of-the-box application load balancer and regional routing to the nodes closest to your users. A proxy with points of presence sits in front of your nodes and handles failover and routing.
* Easily network nodes together for microservices.
# Examples
```
p scale create --region us-west --name my-node --size small

# SSH into the node.
p my-node connect
> echo "hello world"
> ls ./

# Bring your local container stack to the cloud.
p my-node docker compose up -d

# Copy local files and build artifacts to cloud with SCP, SFTP, etc.
# Run remote commands quickly without a full SSH session.
p my-node transfer ./local-app /app
p my-node exec npm run test

# Deploy app templates.
p my-node deploy postgres
p my-node deploy grafana

# Use the built-in proxy, which provides load balancing, caching,
# rate limiting, and SSL certificates. Expose your apps with a domain
# name, high availability, and regional routing.
```
# Looking for Feedback!
Would a tool like this solve problems for you? What features would you like to see? Let me know your thoughts!
https://redd.it/1mdmlxw
@r_devops
Serverless architecture or a simple EC2?
Hey everyone!
I'm starting a new project with two other devs, and we're currently in the infrastructure planning phase. We're considering going fully serverless using AWS Lambda and the Serverless Framework, and we're weighing the risks and benefits. Our main questions are:
* Do you have a mature project built entirely with this stack? What kind of headaches have you experienced?
* How does CI/CD, workflow management, and environment separation typically work? I noticed the Serverless Framework dashboard offers some of that, but I haven’t fully grasped how it works yet.
* From a theoretical standpoint, what are the key questions one should answer before choosing between EC2 and Lambda?
Any insights beyond these questions are also more than welcome!
https://redd.it/1mdojld
@r_devops
Buying KodeKloud Subscription
If anyone is interested in buying a KodeKloud Pro subscription together, ping me.
We can buy it together and share the credentials.
https://redd.it/1mdqsmj
@r_devops
What are the most useful WSLg applications you use at work?
I am running Docker on WSL2, and I was wondering: what are the best applications that let you run a GUI app on Windows? I downloaded GitKraken, but I realized it wasn't open source and had to find something else. Aside from git tools, is there anything else I should get?
https://redd.it/1mdom12
@r_devops
Looking for advice on a cloud setup to start with
We tried the free tier with 1 vCPU and 1 GB RAM, and that was bad. We decided to find a cheap but powerful VPS and found one. We selected this setup, but we're not sure it's enough to start: 4 vCPU, 8 GB RAM, 80 GB disk. Will it be good for production with a complex API, app builds, a DB, a cache, a message broker, and a web server (5 containers in all)? We expect hundreds of users in the first days, maybe more. If it's not enough in the future, we'll migrate to a bigger one.
https://redd.it/1mdxkyo
@r_devops
Keeping up with new technologies
I am a 26M who has been working as a DevOps engineer for 5 years on an on-premise platform. I have never worked in the cloud; I have experience with SonarQube, Git, Artifactory, etc. But with AI coming into the picture and the cloud everywhere nowadays, I have lately been feeling a long way behind. Please tell me what to do and where to start.
https://redd.it/1mdyp58
@r_devops
Migrating from Docker Content Trust to Sigstore
Starting on August 8th, 2025, the oldest Docker Official Images (DOI) Docker Content Trust (DCT) signing certificates will begin to expire. If you publish images on Docker Hub using DCT today, the team at Docker is advising users to start planning their transition to a different image signing and verification solution (such as Sigstore or Notation). The blog below provides some additional information specific to Sigstore:
https://cloudsmith.com/blog/migrating-from-docker-content-trust-to-sigstore
https://redd.it/1me1237
@r_devops
Are there any Ansible courses on the internet?
I was looking for an Ansible course on the internet that covers advanced topics like Ansible Galaxy, and I did not find anything.
https://redd.it/1me3hxv
@r_devops
What do you think of a less corporate resume?
I've been toying with the idea of a less corporate resume. I've learned a lot about copywriting (persuasion through text), and it's all about getting the most value out of the fewest, easiest-to-understand words.
My resume has turned into corporate-jargon BS to hit all the parsing-algo keywords, and it's so boring to read, even for myself.
Here are my two resumes now: one with all the buzzwords, and one in plain English describing outcomes.
Which one would you prefer?
Plain English RESUME
--------------------------
Professional Experience
Site Reliability Engineer - USDA DISC | Company Sept 2024 - Present
Built a reusable Terraform setup to deploy EKS clusters in highly secure (FedRAMP High) AWS environments. Teams only need to add a terraform.tfvars file to their project. GitLab CI handles the rest, getting secrets from Vault and running the deployment.
Replaced manual Linux patching across 4,000 servers with an automated Ansible process in Ansible Automation Platform. Saved about 40 hours of work each month and cut patching downtime from 6 hours to 2.
Automated the creation of VM images in AWS and Azure using Packer. Cut image build time by 40% and saved around $4,000/month in labor.
Set up CI/CD pipelines with built-in testing to speed up deployments and reduce human error across on-prem infrastructure.
Used Datadog to track system health and alert on problems early before they caused downtime.
Platform Engineer | Company Jan 2022 - Sept 2024
Trained 3 junior engineers and helped them become fully independent contributors on client projects.
Led cloud infrastructure work for a Microsoft Azure data platform holding 100+ TB of sensitive healthcare data (PHI, PII, CUI).
Wrote Terraform modules to deploy Azure Data Factory and Synapse Analytics behind a VPN with custom DNS access.
Built Terraform setups for Azure ML across dev, test, and prod environments, including all networking, IAM, and workspace setup.
Created and maintained a shared Terraform module library to speed up Azure deployments. Added automated tests to catch issues before rollout.
Co-managed GitHub Cloud for the company. Enforced security practices like signed commits, protected branches, secret scanning, and approval rules.
Built an AI-driven app on AWS that listens to doctor-patient conversations and generates SOAP notes automatically, saving doctors time on paperwork.
Data Scientist Intern | Company Jun 2020 - Jan 2022
Maintained and improved a full-stack demo app that ran machine learning models in Docker containers on AWS Lambda.
Built a Kubernetes-based simulation of an emergency room using JavaScript, Python, and synthetic data. Deployed with Helm on EKS.
Secured internal web apps on Kubernetes using OKTA (OIDC) and APISIX to handle user logins and keep data private.
Certifications, Education, & Clearance
AWS Certified Solutions Architect - Associate (SAA-C03)
Bachelor’s, Computer Science, Rowan University Sept 2018 - Dec 2021
High Risk Public Trust Clearance (T4)
Projects
EKS (https://github.com/jgeissler14/eks) - Leveraged Terraform to provision an EKS cluster running an ArgoCD app-of-apps, used as a lab. Set up Prometheus and Grafana to monitor service health.
Homelab (https://github.com/jgeissler14/homelab) - Virtualized an old laptop with the Proxmox hypervisor, then installed k3s with Ansible. Currently running: fluxcd, cloudnativepg, kube-prometheus-stack, homepage, external-dns
YouTube (https://www.youtube.com/@joshgeissler) - Documenting interesting things I build to share with others
----------------------------
Corporate Normal Resume
------------------------------
Professional Experience
Site Reliability Engineer - USDA DISC | Company Sept 2024 - Present
Designed a templated EKS deployment for our MSP to deploy an EKS Cluster in FEDRAMP high environments with VPC CNI configured with custom networking. Deployments require a
single terraform.tfvars file to be placed in any of over 50 customer repositories, then Gitlab CI would retrieve credentials from Hashicorp Vault and deploy the EKS cluster automatically.
Enhanced USDA DISC’s patching process across 4,000 Linux servers in a multicloud environment by developing a scheduled Ansible template in Ansible Automation Platform (AAP), saving 40 labor hours per month and cutting downtime from 6 hours to 2 hours on average
Automated VM image creation on Azure and AWS with HashiCorp Packer, reducing PaaS build times by 40% while saving ~$4,000/month in labor hours
Established CI/CD pipelines with integrated automated testing, increasing deployment velocity, reducing toil, and improving consistency across data center operations
Utilized Datadog for comprehensive system monitoring and alerting, enabling proactive issue resolution and minimizing downtime
Platform Engineer | Company Jan 2022 - Sept 2024
Led modern data platform efforts on Microsoft Azure and Terraform, storing 100TB+ of sensitive data (PHI, PII, CUI)
Developed a Terraform module to automate deployments of Azure Data Factory and Synapse Analytics, accessible only via VPN and integrated directly with enterprise custom DNS
Created Terraform deployments for multiple environments (dev, qat, uat, prod) of Azure ML for multiple teams, including networking topology, access control, and notebook development
Mentored and provided technical leadership to a team of engineers, growing multiple individuals into independent contributors serving clients
Established and managed an enterprise innersource Terraform library, accelerating deployment speed and reducing IT workload by standardizing Azure modules for development teams. Implemented terraform test to ensure module reliability and scalability across deployments
Shared admin responsibilities for the enterprise GitHub Cloud organization, enforcing and educating on best practices including GPG-signed commits, branch protections, secret management, and approval workflows
Created an event-driven transcription application on AWS, utilizing AI services to automatically generate SOAP summaries and transcriptions from patient-doctor conversations. This streamlined process reduced manual documentation time for healthcare practitioners, enhancing operational efficiency and data accuracy
Data Scientist Intern | Company Jun 2020 - Jan 2022
Operated and enhanced full stack web application hosting client demos consisting of various machine learning models run as docker containers in a fully serverless environment on AWS
Leveraged AWS and Kubernetes to provision a digital twin of an emergency room using Javascript, Python API server, and synthetic data generator on EKS as Helm charts
Secured multiple Single-Page Applications (SPAs) on kubernetes with OKTA OIDC via APISIX, ensuring robust user authentication and data security
Certifications, Education, & Clearance
AWS Certified Solutions Architect - Associate (SAA-C03)
Bachelor’s, Computer Science, Rowan University Sept 2018 - Dec 2021
High Risk Public Trust Clearance (T4)
Projects
EKS (https://github.com/jgeissler14/eks) - Leveraged Terraform for provisioning EKS cluster running ArgoCD App-of-apps to be used as a lab. Set up Prometheus and Grafana to monitor service health.
Homelab (https://github.com/jgeissler14/homelab) - Virtualized an old laptop with Proxmox hypervisor then installed k3s with ansible. Currently running - fluxcd, cloudnativepg, kube-prometheus-stack, homepage, external-dns
YouTube (https://www.youtube.com/@joshgeissler) - Documenting interesting things I build to share with others
https://redd.it/1me4dko
@r_devops
Enhanced USDA DISC’s patching process across 4,000 linux servers in a multicloud environment by developing a scheduled ansible template in Ansible Automation Platform(AAP), saving 40 labor hours per month and downtime from 6 hours to 2 hours on average
Automated VM image creation on Azure and AWS with HashiCorp Packer, reducing PaaS build times by 40% while saving ~$4,000/month in labor hours
Established CI/CD pipelines with integrated automated testing, increasing deployment velocity, reducing toil, and improving consistency across data center operations
Utilized Datadog for comprehensive system monitoring and alerting, enabling proactive issue resolution and minimizing downtime
Platform Engineer | Company Jan 2022 - Sept 2024
Led modern data platform efforts on Microsoft Azure and Terraform, storing 100TB+ of sensitive data (PHI, PII, CUI)
Developed a Terraform module to automate deployments of Azure Data Factory and Synapse Analytics, accessible only via VPN and integrated directly with enterprise custom DNS
Created Terraform deployments across multiple environments (dev, qat, uat, prod) of Azure ML for multiple teams, including networking topology, access control, and notebook development
Mentored and provided technical leadership to a team of engineers, growing multiple individuals into independent contributors serving clients
Established and managed an enterprise innersource Terraform library, accelerating deployment speed and reducing IT workload by standardizing Azure modules for development teams. Implemented terraform test to ensure module reliability and scalability across deployments
Shared admin responsibilities for an enterprise GitHub Cloud organization, enforcing and educating on best practices including GPG-signed commits, branch protections, secret management, and approval workflows
Created an event-driven transcription application on AWS, utilizing AI services to automatically generate SOAP summaries and transcriptions from patient-doctor conversations. This streamlined process reduced manual documentation time for healthcare practitioners, enhancing operational efficiency and data accuracy
Data Scientist Intern | Company Jun 2020 - Jan 2022
Operated and enhanced a full-stack web application hosting client demos, consisting of various machine learning models run as Docker containers in a fully serverless environment on AWS
Leveraged AWS and Kubernetes to provision a digital twin of an emergency room using JavaScript, a Python API server, and a synthetic data generator, deployed on EKS as Helm charts
Secured multiple Single-Page Applications (SPAs) on Kubernetes with Okta OIDC via APISIX, ensuring robust user authentication and data security
Certifications, Education, & Clearance
AWS Solutions Architect Associate 003 (AWS SAA-003)
Bachelor’s, Computer Science, Rowan University Sept 2018 - Dec 2021
High Risk Public Trust Clearance (T4)
Projects
EKS (https://github.com/jgeissler14/eks) - Leveraged Terraform for provisioning EKS cluster running ArgoCD App-of-apps to be used as a lab. Set up Prometheus and Grafana to monitor service health.
Homelab (https://github.com/jgeissler14/homelab) - Virtualized an old laptop with the Proxmox hypervisor, then installed k3s with Ansible. Currently running fluxcd, cloudnativepg, kube-prometheus-stack, homepage, and external-dns
YouTube (https://www.youtube.com/@joshgeissler) - Documenting interesting things I build to share with others
https://redd.it/1me4dko
@r_devops
Should I Accept DevOps Role to Break into Cloud Dev???
I am a new grad and my manager gave me the choice of two teams: a DevOps team and a development (full-stack) team. I didn't want to do DevOps at first because it doesn't sound like much coding to me, but I did hear that DevOps manages a lot of cloud stuff. My goal is to be a cloud engineer, so is DevOps a good way to break into that and get cloud roles?
https://redd.it/1me9caf
@r_devops
Can I make it into Devops
I am a 24F and have been working at an MNC for 2 years. I work on and support an application that runs on old technology for a Canada-based company. Recently our client decided to move all the jobs running on an age-old platform to AWS. I was chosen to be the POC and also the testing support for the migration. My job has pretty much been to communicate our application requirements to the AWS DevOps team and also to test multiple scenarios based on what is required from us and what they have developed. Ours is a very huge application; it has been around for almost 30 years or something. So this is pretty good experience I am gaining, both to know my application deeper and to explore AWS.
After working with the team and the DevOps people, I liked what they're doing and how they're able to find a solution for almost every requirement I bring up. Now my question is: can I make a transition into a DevOps career? If yes, how? And would this experience actually help me if I move into AWS? Also, can you please provide some insights on the current job market situation?
https://redd.it/1mech0e
@r_devops
PR reviews got smoother when we started writing our PR descriptions like a changelog
Noticed that our team gave better feedback when we formatted pull requests like a changelog entry: headline, context, rationale, and what to watch for.
It takes an extra few minutes, but reduces back-and-forth and gets reviewers aligned faster.
Curious if others do something similar. How do you write helpful PRs?
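One way to make a format like this stick without relying on discipline alone is a repository-level PR template: GitHub pre-fills new pull requests from `.github/pull_request_template.md`. A minimal sketch of the headline/context/rationale/watch-for structure described above (the section names and comment prompts are illustrative, not from the original post):

```shell
# Create a PR template that GitHub auto-loads into every new pull request.
mkdir -p .github
cat > .github/pull_request_template.md <<'EOF'
## Headline
<!-- One-line, changelog-style summary of the change -->

## Context
<!-- What prompted this? Link the issue, incident, or discussion. -->

## Rationale
<!-- Why this approach over the alternatives you considered. -->

## What to watch for
<!-- Risky areas, follow-ups, and anything reviewers should test. -->
EOF
```

Commit the file and every new PR opens with the skeleton filled in, which keeps the extra few minutes of writing focused on content rather than remembering the structure.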
https://redd.it/1meeev4
@r_devops
AI Knows What Happened But Only Culture Explains Why
Blameless culture isn’t soft, it’s how real problems get solved.
A blameless retro culture isn’t about being “soft” or avoiding accountability. It’s about creating an environment where individuals feel safe to be completely honest about what went wrong, without fear of personal repercussions. When engineers don’t feel safe during retros, self-protection takes priority over transparency.
Now layer in AI.
We’re in a world where incident timelines, contributing factors, and retro documents are automatically generated based on context, timelines, telemetry, and PRs. So here’s the big question we’re thinking about: how does someone hide in that world?
Easy - they omit context. They avoid Slack threads. They stay out of the incident room. They rewrite tickets or summaries after the fact. If people don’t feel safe, they’ll find new ways to disappear from the narrative, even if the tooling says otherwise.
This is why blameless culture matters more in an AI-assisted environment, not less. If AI helps surface the “what,” your teams still need to provide the “why.”
https://redd.it/1meeqxu
@r_devops