Struggling to find a data store that works for my use case Longhorn/Minio/Something else?
Hi folks, for some background: I started a video game server hosting service for a particular game over 2 years ago. Since then the service has grown to store hundreds of video game servers. That may seem like a lot, but the combined size of all the servers is around 300GB, so not too large.
The service runs atop Hetzner on a Rancher K8s cluster. The lifecycle of a server works as follows:
1. Someone starts their server. We copy the files from the data store (currently Minio, previously a RWX Longhorn volume) to the node the server will run on.
2. While the server is running it writes data to its local SSD, which keeps gameplay smooth. A sidecar container mirrors the data back to the original data store every 60 seconds to prevent data loss if the game crashes.
3. When the user is done playing on their server, we write the data from the node the server was running on back to the original data store.
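The 60-second sidecar from step 2 can be sketched as a change-scan plus a periodic push loop. This is a minimal illustration, not the service's actual code: the sync callable, changed_since, and sidecar_loop are hypothetical names, and a real sidecar could equally shell out to mc mirror.

```python
import os
import time

def changed_since(root, last_sync):
    """Return files under root modified at or after the last sync time."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) >= last_sync:
                changed.append(path)
    return sorted(changed)

def sidecar_loop(root, sync, interval=60.0):
    """Every `interval` seconds, push files changed since the previous pass."""
    last_sync = 0.0
    while True:
        now = time.time()
        for path in changed_since(root, last_sync):
            sync(path)  # upload one file back to the data store
        last_sync = now
        time.sleep(interval)
```

Only pushing files changed since the last pass keeps the 60-second sync cheap compared to re-mirroring the full server directory each time.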
My biggest struggles have revolved around this data store. The timeline of events has looked like this:
First, Longhorn RWX volume
This RWX volume stored all game server data and was mounted on many pods at once (e.g. the API pods, periodic jobs that needed access to server data, and all the running game servers that were periodically writing back to this volume). There were a few issues with this approach:
1. Single point of failure. Occasionally Longhorn would restart and the volumes would detach, causing every single server plus the API pod to restart. This was obviously incredibly frustrating for users of the service, whose servers would occasionally stop in the middle of gameplay.
2. Expanding the volume size required all attached workloads to be stopped first. As the service grew in popularity, so did the amount of data we were storing. To accommodate this increase I had to scale down all workloads, including all running servers, in order to increase the underlying storage size, because you cannot expand a Longhorn RWX volume "live".
3. Accessing server data locally isn't something I've managed with this setup (at least, I'm not sure how).
Second, Minio
Because of the two issues above, the RWX Longhorn volume approach just wasn't sustainable. I needed the ability to expand the underlying storage on demand without significant downtime, and I wasn't happy about the single point of failure with every workload attached to the same RWX volume. So I recently migrated everything over to Minio.
Minio has been working okay, but it's probably not the best option for my use case. I'm using Minio sort of like a filesystem, which is not its intended use as an object store. When users start/stop their servers we sync the full contents of their server to or from Minio. This has some issues:
1. Minio's mirror command doesn't copy empty directories, because it's an object store and it doesn't make sense (in the traditional sense) to store empty keys. I've had to build a workaround script that creates these empty keys after the sync. Unfortunately these empty directories are created automatically by the game when it starts, and they are required.
2. Sometimes the mirror command leaves behind weird artifacts (see this example a customer raised to our support team today: https://i.postimg.cc/CKP1YRQ6/image.png ) where files are shown as "file folder" instead of the usual file type. This might be the interaction between our SFTP server and Minio, though; it's hard to tell.
3. We're running an SFTP server that connects to Minio, allowing customers to edit their server files. This has some limitations (e.g. renaming a directory in an object store means renaming every object under that key prefix).
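The empty-directory fix-up from issue 1 boils down to a set computation: find local directories whose prefix never appears among the synced object keys, and give each one a zero-byte marker key. This helper and the ".keep" marker convention are illustrative assumptions, not the actual workaround script:

```python
def placeholder_keys(local_dirs, object_keys):
    """Marker keys for directories that exist locally but hold no objects.

    Object stores have no real directories, so after a mirror an empty
    directory silently vanishes; a zero-byte "<dir>/.keep" key preserves it.
    """
    covered = set()
    for key in object_keys:
        # every ancestor prefix of an object key is an implicit directory
        parts = key.split("/")[:-1]
        for i in range(1, len(parts) + 1):
            covered.add("/".join(parts[:i]))
    return {d + "/.keep" for d in local_dirs if d not in covered}
```

The restore path would then delete (or simply ignore) the .keep files after recreating the directories on the node's local SSD.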
Now?
I'm not sure. I really feel like this Minio approach isn't the best solution for this problem, but I'm unsure of what the best next step to take is. Ideally I think a data store that is actually a file system, rather than an object store, is the correct approach here, but I wasn't happy with attaching the same RWX volume to all of my workloads. Alternatively, maybe an object store is the best path forward. I work full time as a software engineer in addition to this side business, so unfortunately my expertise isn't in devops. I'd love to hear this community's thoughts on my particular scenario. Cheers!
https://redd.it/1jhixzn
@r_devops
Roadmap for cloud/DevOps
I have 1 year of experience in production support/application support. I want to transition to a cloud support or cloud engineer role. How can I proceed, given that I am unemployed right now and need a job ASAP?
https://redd.it/1jhgth3
@r_devops
I'm looking forward to starting my System Design DevOps journey
I'm new to System Design and all; if anyone wants to start together or has some knowledge, let me know. We can connect.
https://redd.it/1jht6ez
@r_devops
Observability platform for an air-gapped system
We're looking for a single observability platform that can handle our pretty small hybrid-cloud setup and a few big air-gapped production systems in a heavily regulated field. Our system is made up of VMs, OpenShift, and SaaS. Right now, we're using a horrible tech stack that includes Zabbix, Grafana/Prometheus, Elastic APM, Splunk, plus some manual log checking and JDK Flight Recorder.
LLMs recommend that I look into the LGTM stack, Elastic stack, Dynatrace, or IBM Instana since those are the only self-managed options out there.
What are your experiences or recommendations? I guess Reddit is heavily into LGTM, but I read recently that Grafana is abandoning some of their FOSS tools in favor of cloud-only solutions (see https://www.reddit.com/r/devops/comments/1j948o9/grafana_oncall_is_deprecated/)
https://redd.it/1jhtva4
@r_devops
Is it ever a good idea to split CI and CD across two providers?
I recently started a new job that has CI and CD split across two providers: GitHub Actions (CI) and AWS CodePipeline (CD).
AFAIK the reason is historical: infrastructure was always deployed via AWS CodePipeline, and GitHub Actions is a new addition.
I feel it would make more sense to consolidate onto one system so:
- There is a single pane of glass for deployments end-to-end
- There is no hand-off to AWS CodePipeline. Currently, a failure can happen in CodePipeline that is not reflected in the triggering workflow
- It's easier to look back at what happened during past deployments
- There is only one CI/CD system to learn and manage
Thoughts?
https://redd.it/1jhvxam
@r_devops
DataDog Charges
Hi,
My team decided to try DataDog’s free tier a month ago. After evaluating it, we decided not to continue with DataDog. Since we never provided any payment information (no credit card or billing details), I simply forgot about the account.
Recently, I went to properly close the account and noticed something - even though our free trial had ended, the system was still ingesting all our logs.
My question is: Will DataDog try to charge us or pursue payment for these logs that were collected after our free trial ended? This seems especially unfair since we couldn’t even access these logs (DataDog blocks access to data once the free tier ends until you select a paid plan).
https://redd.it/1jhwbk5
@r_devops
Is anyone here in need of a website?
Hi,
I wanted to ask if anyone here is in need of a website, or would love to have their website redesigned. Not only do I design and develop websites, I also build software, web apps, and mobile apps. I don't currently have any projects, and I'd love to take some on. You can send me a message if you're in need of my services. Thanks!
If you’d love to check out my case studies you can do that by visiting my website: https://warrigodswill.com/
https://redd.it/1jhxjxg
@r_devops
What is the relation between CPU usage (percentage) and load average?
Looking at the graphs of a database running on DigitalOcean: this instance has 1 vCPU, and at one particular point in time it shows 20% CPU but a max load of 1.62. Is this a healthy system?
If I interpret the load graph, it seems I should upgrade to 2 vCPUs, but the CPU usage tells me that wouldn't be needed.
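One way to reconcile the two numbers: Linux load average counts tasks that are runnable or in uninterruptible sleep (usually waiting on disk), so 20% CPU with a load of 1.62 on one vCPU typically points at I/O wait rather than CPU starvation. A sketch of the per-core comparison, with a hypothetical helper name:

```python
import os

def load_pressure(load1, ncpu=None):
    """Per-core 1-minute load; above 1.0 means tasks are queuing."""
    ncpu = ncpu or os.cpu_count() or 1
    return load1 / ncpu

# The numbers from the post: queued work exceeds one core, yet the CPU
# is only 20% busy, so the queue likely consists of disk-waiting tasks.
print(load_pressure(1.62, ncpu=1))  # over-subscribed (> 1.0)
print(load_pressure(1.62, ncpu=2))  # would fit within 2 vCPUs (< 1.0)
```

Checking iowait (e.g. in top or iostat) would confirm whether faster storage would help more than a second vCPU.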
https://redd.it/1ji0fvc
@r_devops
Python packages caching server
Hey all.
I am currently working at a company in a junior position, and they have given me the task of running a remote caching server. The idea is that whenever someone on our team wants to install a Python package via pip or poetry, they will query our caching server. The server will look for the package; if it's already there it will return it, otherwise it will download it from the PyPI repository and then store it in a Google Cloud Storage bucket. We will run this server on GKE.
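The behaviour described is a classic pull-through cache. Here is a minimal sketch of the core logic, with a dict standing in for the GCS bucket and fetch_upstream standing in for the PyPI download; both are hypothetical stand-ins for illustration:

```python
class PullThroughCache:
    """Serve a package from cache; on a miss, fetch upstream and store it."""

    def __init__(self, store, fetch_upstream):
        self.store = store                  # e.g. a GCS-bucket wrapper
        self.fetch_upstream = fetch_upstream  # e.g. a PyPI downloader

    def get(self, filename):
        if filename not in self.store:      # cache miss: pull from upstream
            self.store[filename] = self.fetch_upstream(filename)
        return self.store[filename]         # cache hit: serve stored copy
```

Note this only caches packages that are actually requested, which is the property that rules out a full mirror like bandersnatch for this use case.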
I have looked into devpi. It fits our use case but doesn't natively support GCS as a storage backend. They provide support for plugins, but I'd have to implement one myself by referring to the source code.
Next, I looked into pypicloud, but it is a private PyPI registry: we can upload our own packages to it and it will store them on GCS or S3, but it doesn't store cached packages there. I am a bit confused here; I went through the documentation and couldn't find much.
Then I looked into bandersnatch, and after going through the documentation, it also doesn't support GCS. It's also a mirror of all Python packages, and we don't want every package cached, only those that are requested.
I want to hear from you if I'm missing something, or if I should change the way I'm thinking about the problem.
PS: I am not a native English speaker, so apologies for any badly written English or grammar mistakes.
https://redd.it/1ji1qe4
@r_devops
Can we talk salaries? What's everyone making these days?
What's everyone making these days?
- salary
- job title
- tech stack
- date hired
- full-time or contract
- industry
- highest education completed
- location
I've been in straight Ops at the same company for 6 years now. I've had two promotions. Currently Lead Engineer (full time). Paid well (160k total comp) at one of the big 4 accounting firms. My tech stack is heavy on Kubernetes and Terraform I'd say. I'm certified in those but work adjacent to the devs who work heavily on those. Certified in and know AWS and Azure. Have an associates in computer networking but will be finishing my compsci degree in a few months. I work remote out of Atlanta, GA.
Feeling stagnant and for other reasons looking to move into a Devops role. Is $200k feasible in the current market? What do roles in that range look like today?
Open discussion...
https://redd.it/1ji23fj
@r_devops
IT Consultant starting into DevOps
Hey all, I'm an infrastructure guy: strong with Windows, servers, and on-site infrastructure, and planning on getting the Azure AZ-104 (I'm fairly good at Azure). In the UK, would moving into DevOps be a good choice? I know C#/.NET and am fairly comfortable with it; I do projects in C#. Hoping to increase my salary to 50k+. I know the basics of Linux and Python. Thanks all.
https://redd.it/1ji13o6
@r_devops
Course recommendation help
Hello all, I have a yearly learning budget at my company, with $150-$200 left for this year, and it expires this week.
Please recommend courses, bootcamps, etc., mainly focused on AI and MLOps if possible: Ray, KubeRay, Kubeflow, MCP.
I have CKA, CKS, and AWS and GCP Solutions Architect Professional certifications, plus several other professional certificates, so I do not want to spend this on more certificates.
For my background: I have 7 years of experience across Linux admin, DevOps, and cloud, but I'm still pretty new to the AI area, which is why I wanted suggestions. Thanks!
https://redd.it/1ji5o5m
@r_devops
Wiz Guide to Kubernetes
Came across this on LinkedIn — looks like a solid session from Wiz if you’re thinking about hardening your Kubernetes setup ahead of KubeCon.
"The Wiz Guide to Kubernetes Security: Avoid Traps, Spot Trends, and Ace KubeCon"
https://wiz.registration.goldcast.io/webinar/de0b7794-9265-4262-860a-9824117acc20
It’s a 45-minute walkthrough with folks from Wiz (Ofir Cohen, CTO of Container Security, and Shay Berkovich, Threat Researcher)
https://redd.it/1jihzuf
@r_devops
HR says I'm not professional
More than a month before my contract expired (1-year contract), I told my manager that I’d be open to signing a new contract if the offer met my expectations. Pretty standard, right?
Well, they took their sweet time and only gave me the new offer 25 days later—just 5 days before my contract ended. And guess what? The offer wasn’t good enough. So, I told them I wouldn’t be continuing.
Now HR is acting like I did something wrong. They’re saying I should have informed them a month earlier. But… I did! They just didn’t give me a proper offer in time. Now they’re calling me unprofessional for not staying.
On top of that, they’re withholding my last month’s salary, saying they’ll pay it after offboarding and returning my laptop. And here’s the kicker—the HR rep even tried to threaten me:
“The HR world is small, you’ll have trouble finding your next job.”
She even accused me of blackmailing them just because I’m leaving after rejecting a bad offer.
For more context, this isn’t just about money. Our DevOps team has been bleeding members. One left 2 months ago, another almost a year ago. The real issue? Our so-called “DevOps manager” (he’s really just a lead) is terrible. No soft skills, no team collaboration—he just does whatever he wants. The HR knows this, but since he’s always online and on-call like a bot and listens to everything they say, the CTO loves him, so nothing changes.
So, what do you guys think? Am I the unprofessional one here? Or is this just a toxic workplace trying to guilt-trip me on the way out?
https://redd.it/1jilad5
@r_devops
I made an interactive shell-based Dockerfile creator/editor
Sunday afternoon project (all day and most of the night really, it turned out pretty good).
The idea is: you type stuff in, it builds the Dockerfile in the pwd and you append to it. Each command you type runs on the container and rebuilds with RUN whatever on the end. Type exit to exit, or ADD to add stuff, or whatever. If a build fails or the command returns nonzero, it goes in as a comment. Put a space before a line to just run it on the container, # for comments. Supports command history and deletes no-operations. It might go crazy commenting stuff out if you change the image (it'll only swap the first FROM line, and if you don't provide one it'll use whatever is there, or alpine:latest).
Try it out:
uvx dockershit ubuntu:latest
or
pip install dockershit
dockershit nginx
Video here:
https://asciinema.org/a/709456
Source code:
https://github.com/bitplane/dockershit
https://redd.it/1jij443
@r_devops
How do you keep your code, repos, and libraries in sync across multiple machines?
I work on multiple machines (Windows & macOS) and I'm trying to find the best way to keep everything in sync—code, Git repositories, and even installed dependencies like Python packages or Flutter SDKs.
I want a setup that doesn’t require me to constantly reinstall dependencies or manually move files.
For those who develop across multiple devices, what’s your go-to method for keeping everything in sync smoothly? Any tools, scripts, or workflows that work well for you?
https://redd.it/1jibtfb
@r_devops
PfSense, Cloudflare, Xampp and Windows Server 2022 Datacenter R2
I'm trying to resolve an issue in our homegrown-style server. As a fresh IT graduate, it's really difficult for me to understand this part of developing a system: putting it on the net. This is a web system; the nameservers were registered by a sponsor, we are using Flexible mode in Cloudflare, and the DNS already matches the IPv4 address. We are also using CMSes, mainly WordPress and Joomla. These are the errors I'm facing:
1. Forbidden, you don't have permission to access this resource.
2. XAMPP Apache error: client denied by server configuration
3. PID does not match the certificate
I would really appreciate your comments, guys!
https://redd.it/1jio925
@r_devops
Role of programming in devops.
I landed an internship directly in DevOps, and other than my mentor I'm the only one doing, or I should say learning about, deployment. So how much should I focus on programming? Should I also build projects separately so I can understand the basics? And how much should I rely on Google searches or AI for information (not for everything)? TIA
https://redd.it/1jihydy
@r_devops
Are my daily tasks too complex, or irrelevant?
Does anyone else feel that as an infrastructure/platform/DevOps engineer, your day-to-day tasks, improvements, automation, and work on keeping reliability acceptable are often overlooked or ignored, or that senior engineers don't really understand what it is that we do?
It happens too often that during standups I talk about, say, observability metrics, automated tests for Terraform modules, upgrading outdated modules, reducing costs by switching to spot instances, CI/CD improvements, or infrastructure drift notifications, but no one really cares, or they have no idea what I'm talking about or why it might be useful.
It scares me because I think (unless I'm biased) these things are important and sometimes key to running a properly reliable workload, but since no one really cares or knows what the hell any of it is, it might make me the best candidate for the next round of layoffs.
Is it only me? Why am I here? What am I?
https://redd.it/1jirreo
@r_devops
Offered both Backend and DevOps positions as a junior. Bad idea to start with DevOps?
Greetings, I wanted to ask for some career advice here.
I am a new grad going into my first real (non-internship, non-freelance) job. The DevOps field has always interested me, especially because I come from a background of being passionate about Linux, which led me quite naturally to become interested in several related topics like containerization, virtualization, IaC, and hardening, mostly from messing around with Linux in my free time. I had been looking at the DevOps/SRE career path from a safe distance for a few years before doing a sort of last-minute switch to "maybe I should start with development" a short while ago.
However, I've heard that DevOps is not a junior position but rather something you pivot into after a background in something else, ideally development.
So, my original plan had been to do exactly that: start off in backend development, with the intention to migrate to DevOps later down the line, but not without a good 2-3 years of experience in pure development (in this case, modern .NET). I think I also enjoy development, but the end goal has always been DevOps.
When I got to the team-matching phase after my internship (which was a bit of a hybrid: I participated in the development of internal tooling, such as API testing solutions, which I enjoyed), and because they noticed my interest in infrastructure, I was eventually told I could choose either the Backend development position, as originally planned, or a DevOps one on the Infrastructure team, focusing on containerization and security, since they think it might also be a good fit for my skills and interests.
Before I proceed with dev as originally planned, though, I find myself second-guessing that decision. Would there be any bad implications to taking the DevOps job immediately, considering it would in all likelihood be more focused on Ops? Would this choice be riskier for my career progression? Most importantly, setting aside an internal transfer, which should remain an option down the line (they are quite common at this company), how locked in would I be by going the DevOps route first if I came to regret the decision? Is this a specialized field like embedded that is hard to get out of once you're in, or should I not be too concerned and just see how it goes? Or should I ignore all this, proceed to backend, and pivot later?
Thanks in advance!
https://redd.it/1jit5kv
@r_devops