Rclone and S3: a suitable Google Drive replacement?
Hi, I was just wondering if rclone with S3 cloud storage would be a suitable replacement for Google Drive.
I don't care about conflicts right now; it's mainly performance for multi-gigabyte files.
I would wrap rclone in my own application for user authentication.
Or is there something else to consider?
What i need is:
- custom user auth
- cloud storage
- fast upload and download
- file permission filtering / allow list
- api or sdk or cli to control everything if needed
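On the fast-upload point, rclone's S3 backend exposes multipart tuning flags, so a rough sketch of the kind of invocation involved (the remote name, bucket, and values here are placeholders, not recommendations):

```shell
# Configure an S3 remote once; "s3remote" and "mybucket" are placeholders.
rclone config create s3remote s3 provider=AWS env_auth=true region=us-east-1

# Upload: bigger chunks and more parallel parts per file, several files at once
rclone copy ./bigfiles s3remote:mybucket/bigfiles \
  --s3-chunk-size 64M \
  --s3-upload-concurrency 8 \
  --transfers 4 --progress

# Download: multi-threaded ranged reads for large objects
rclone copy s3remote:mybucket/bigfiles ./restore \
  --multi-thread-streams 8 --transfers 4
```

The right chunk size and concurrency depend on available memory and bandwidth, so these numbers would need benchmarking against your own files.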
https://redd.it/1irn28r
@r_devops
How to Deploy Static Site to GCP CDN with GitHub Actions
Hey folks! 👋
After getting tired of managing service account keys and dealing with credential rotation, I spent some time figuring out a cleaner way to deploy static sites to GCP CDN using GitHub Actions and OpenID Connect authentication (or as GCP likes to call it, "Workload Identity Federation" 🙄).
I wrote up a detailed guide covering the entire setup, with full Infrastructure as Code examples using OpenTofu (Terraform's open source fork). Here's what I cover:
- Setting up GCP storage buckets with CDN enabled
- Configuring Workload Identity Federation between GitHub and GCP
- Creating proper IAM bindings and service accounts
- Setting up all the necessary DNS records
- Building a complete GitHub Actions workflow
- Full example of a working frontend repository
The whole setup is production-ready and focuses on security best practices. Everything is defined as code (using OpenTofu + Terragrunt), so you can version control your entire infrastructure.
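For a taste of what the Actions side of such a setup tends to look like, here is a minimal sketch assuming Workload Identity Federation is already configured; the provider resource name, service account, and bucket are placeholders, not values from the guide:

```yaml
# .github/workflows/deploy.yml (sketch; provider/SA/bucket are placeholders)
name: deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write   # required for the OIDC token exchange
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123/locations/global/workloadIdentityPools/github/providers/my-repo
          service_account: deployer@my-project.iam.gserviceaccount.com
      - uses: google-github-actions/setup-gcloud@v2
      - run: gsutil -m rsync -r ./dist gs://my-static-site-bucket
```

The `id-token: write` permission is what lets the workflow mint the OIDC token that GCP exchanges for short-lived credentials, which is how the setup avoids stored keys entirely.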
Here's the guide:
https://developer-friendly.blog/blog/2025/02/17/how-to-deploy-static-site-to-gcp-cdn-with-github-actions/
Would love to hear your thoughts or if you have alternative approaches to solving this!
I'm particularly curious if anyone has experience with similar setups on other cloud providers.
https://redd.it/1iropdv
@r_devops
Docker interview
Hi, so as the title suggests: I have a technical interview about Docker/Python. It's for an entry-level role (Junior DevOps). I had a previous candidate screening call and was open and honest with the tech lead at the company about not having used these tools before, but they still want to invite me to the interview after hearing about my experience with cloud platforms etc. They said the interview will mainly revolve around problem solving. So I was wondering if you guys can provide me with some tips to help prepare for it. Thanks
https://redd.it/1iro8ku
@r_devops
Alerting System That Supports Custom Scripts & Smart Alerting
Hey everyone,
In my company, we developed an internal system for alerting that works like this:
1. We have a chain of applications passing data between them until it reaches a database (e.g., an IoT sensor sending data to an on-premise server, which then sends it through RabbitMQ/Kafka to a processing app in a Kubernetes cluster, which finally writes it to a DB).
2. Each component in the chain exposes a CNC data endpoint (HTTP, Prometheus, etc.).
3. A sampling system (like Prometheus) collects this data and stores it in a database for postmortem analysis.
4. Our internal system queries this database (via SQL, PromQL, or similar) and runs custom Python scripts that contain alerting logic (e.g., "if value > 5, trigger an alert").
5. If an alert is triggered, the operations team gets notified.
We’re now looking into more established, open-source (or commercial) solutions that can:
- Support querying a time-series database (Prometheus, InfluxDB, etc.)
- Allow executing custom scripts for advanced alerting logic
- Save all sampled data for later postmortems
- Support smarter alerting: for example, if an IoT module has no ping, we should only see one alert ("No ping to IoT module") instead of multiple cascading alerts like "No input to processing app."
I've looked into Prometheus + Alertmanager, Zabbix, Grafana Loki, Sensu, and Kapacitor, but I’m wondering if there’s something that natively supports custom scripts and prevents redundant alerts in a structured way.
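For reference, the cascading-alert requirement maps fairly directly onto Alertmanager's inhibition rules; a sketch, where the alert and label names are assumptions about how the alerts might be labeled:

```yaml
# Alertmanager config fragment: suppress downstream alerts while the
# upstream "no ping" alert is firing (alert/label names are assumptions).
inhibit_rules:
  - source_matchers:
      - alertname = IoTModuleNoPing
    target_matchers:
      - alertname =~ "ProcessingAppNoInput|DBWriteStalled"
    equal: ["module"]   # only inhibit alerts carrying the same module label
```

Inhibition only deduplicates alerts that share the labels listed under `equal`, so it presumes the whole chain tags alerts consistently; the custom-script requirement is the part Alertmanager does not cover natively.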
Would love to hear if anyone has used something similar or if there are better tools out there! Thanks in advance.
https://redd.it/1irr036
@r_devops
How do you manage your most frequently used infrastructure automation scripts?
Hey folks! How do you manage your most frequently used infrastructure automation scripts?
https://redd.it/1irt7m1
@r_devops
Rolling out new features, but everything is slowing down... help?
We’re preparing to roll out a set of new features for our app, but during staging tests, we noticed something weird: the app is running significantly slower. It’s strange because the new features don’t seem heavy on the backend, but somewhere along the way, our API response times nearly doubled.
I’ve already tried a few tools to diagnose the issue:
- perf – Gave some general insights but didn't pinpoint the bottleneck.
- Flamegraph – Useful for a high-level view, but I'm struggling to get actionable details.
- Py-Spy – Helpful for lightweight Python scripts, but not sufficient for this scale.
At this point, I’m at a loss. Has anyone dealt with something similar? What profiling tools or approaches worked for you? I’m especially curious about tools that work well in live environments, as the slowdown doesn’t always appear in staging.
https://redd.it/1is96rx
@r_devops
How do you manage Docker images across different environments in DevOps?
I have a few questions regarding Docker image management across different environments (e.g., test, UAT, and production).
Single Image vs. Rebuild Per Environment
Should we build a single Docker image and promote it across different environments by retagging?
Or should we rebuild the image for each branch/environment (e.g., test, uat, prod)?
If we are rebuilding per environment, isn't there a risk that the production image is different from the one that was tested in UAT?
Or is consistency maintained at the branch level (i.e., ensuring the same code is used for all builds)?
Handling Environment-Specific Builds
If we promote the same image across environments but still have server-side build steps (e.g., compilation, minification), how can we properly manage environment variables?
Since they are not embedded in the image, what are the best practices for handling this in a production-like setting?
Jenkinsfile Structure: Bad Practice?
Below is a snippet of my current Jenkinsfile. Is this considered a bad approach?
Should I optimize it, or is there a more scalable way to handle multiple environments?
steps {
    script {
        if (BRANCH_NAME == 'uat') {
            echo "Running ${BRANCH_NAME} Branch"
            env.IMAGE = "neo/neo:${BRANCH_NAME}-${COMMIT_HASH}"
            echo "New Image Name: ${env.IMAGE}"
            docker.withRegistry('https://nexus.example.com', 'HARBOR_CRED') {
                // Double quotes so ${BRANCH_NAME} actually interpolates in the build args
                docker.build(env.IMAGE, "-f Dockerfile.${BRANCH_NAME} .").push()
            }
        } else if (BRANCH_NAME == 'test') {
            echo "Running ${BRANCH_NAME} Branch"
            env.IMAGE = "neo/neo:${BRANCH_NAME}-${COMMIT_HASH}"
            echo "New Image Name: ${env.IMAGE}"
            docker.withRegistry('https://nexus.example.com', 'HARBOR_CRED') {
                docker.build(env.IMAGE, "-f Dockerfile.${BRANCH_NAME} .").push()
            }
        } else if (BRANCH_NAME == 'prod') {
            echo "Running ${BRANCH_NAME} Branch"
            env.IMAGE = "neo/neo:${BRANCH_NAME}-${COMMIT_HASH}"
            echo "New Image Name: ${env.IMAGE}"
            docker.withRegistry('https://nexus.example.com', 'HARBOR_CRED') {
                docker.build(env.IMAGE, "-f Dockerfile.${BRANCH_NAME} .").push()
            }
        }
    }
}
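Since all three branches run identical logic, one way to flatten the repetition is a single guarded block; this is only a sketch (the underscored Jenkins-style variable names and the registry/credential values are carried over as assumptions from the snippet above):

```groovy
steps {
    script {
        // Only the branch-derived tag and Dockerfile suffix differ per environment.
        if (BRANCH_NAME in ['test', 'uat', 'prod']) {
            env.IMAGE = "neo/neo:${BRANCH_NAME}-${COMMIT_HASH}"
            echo "New Image Name: ${env.IMAGE}"
            docker.withRegistry('https://nexus.example.com', 'HARBOR_CRED') {
                docker.build(env.IMAGE, "-f Dockerfile.${BRANCH_NAME} .").push()
            }
        }
    }
}
```

Note this only deduplicates the pipeline; it does not answer the build-once-promote-everywhere question, which is about retagging a tested image rather than rebuilding per branch.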
https://redd.it/1isa0tx
@r_devops
Question
Can you get an entry-level DevOps job in the current industry scenario? I am currently studying AWS; I know how to use Docker, Jenkins, and Git, and have basic knowledge of Linux, networking, and operating systems. After practicing AWS I'll study Kubernetes and Terraform. LMK if there is anything that I should or shouldn't do, and also what the market is like for entry-level DevOps engineers. TY
https://redd.it/1isbbrp
@r_devops
Can't configure a consent screen. Clicking on "OAuth consent screen" redirects me to "Google Auth Platform / Overview". What is going on?
I can't create client credentials because I can't configure an OAuth consent screen, which I can't do because I keep getting redirected to /auth/overview.
Is this intended behavior or a bug? Honestly stumped over here, and I've set up social login dozens of times in the past.
https://redd.it/1isamvc
@r_devops
Building custom Chromium, how do I stay aligned with official Chromium versioning?
Hello,
We have a fairly complex system in place where we fetch a clean Chromium, patch our changes and build the custom browser.
We have an update server where we manage versions, but we want to keep it aligned with Chromium's versions.
For example, Chromium is on 133.0.6943.99, but we continuously release new versions of our custom browser. When we finish building, we're supposed to upload the new artifact to the update server, but it won't trigger an update from the client's "About" page, since the version is still the same.
It's NOT possible to:
- Add a custom patch 99-mypatch
- Add another semver segment like 133.0.6943.99.123
We would like to stay aligned with the official version. I'm not sure how to handle this situation.
Any tips would be welcome.
Thank you!
https://redd.it/1isdtae
@r_devops
Which cloud for South America?
My friend wants to deploy his app (he's still working on it), hoping to establish it as a major player in South America. The big three are there, but they are not cheap; we all know that. What about OVH Cloud? How do I check whether latency and bandwidth are comparable? How about local providers?
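On the latency question, a first-order comparison is to time TCP handshakes to each provider's regional endpoints from machines in the target countries. A small sketch (the function is generic; which hosts to probe is up to you):

```python
import socket
import time

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the TCP connect (handshake) time to host:port in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0
```

Running `tcp_latency_ms("<provider-endpoint>")` a handful of times per provider and comparing medians gives a rough latency picture; bandwidth needs an actual transfer test (e.g. timing a large object download) on top of this.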
https://redd.it/1ise9mk
@r_devops
Is KodeKloud subscription worth it?
KodeKloud PRO subscription is worth 8250 INR per year right now and KodeKloud for BUSINESS is 12250 INR.
Is it worth buying it?
Can I share KodeKloud for business with someone even I bought it for my personal use?
https://redd.it/1isff9m
@r_devops
KodeKloud PRO subscription is worth 8250 INR per year right now and KodeKloud for BUSINESS is 12250 INR.
Is it worth buying it?
Can I share KodeKloud for business with someone even I bought it for my personal use?
https://redd.it/1isff9m
@r_devops
Reddit
From the devops community on Reddit
Explore this post and more from the devops community
Recommended gitops ci/Cd pipelines for self managed kubernetes
I'm working on an AI development team; currently I'm setting up the CI/CD pipelines for development and staging, and I'm looking for recommendations on how to set everything up smoothly.
For context, we are running Kubernetes on bare metal; the current setup is 3-4 nodes on the same LAN with fast bandwidth between them. The system consists of Longhorn for storage, Sealed Secrets, and ArgoCD. We have a GitOps repository that ArgoCD watches and deploys from, and the devs operate on their own application repos. When an application is built, the CI pipeline pushes the new image and commits the updated tag to the GitOps repository. Here are some of the pain points I have been dealing with and would like suggestions on:
1. We are running on the company network infrastructure, so traffic can only come from the local network or from outside through the company's reverse proxy. Currently we can only use NodePort to expose services, and only machines on the private network can reach them. To make an app public we have to file a request with the IT team to update the DNS and reverse proxy. Is this the only way to go? One thing I'm worried about is managing NodePorts as the number of services grows.
2. Most of the devs here are not familiar with the Kubernetes world, so to deploy a new application stack I have them create Dockerfiles and a Docker Compose file for reference. It takes time to translate everything into a Helm chart, which then gets committed to the GitOps repository. I then create a new Application in ArgoCD and start the deployment. So for each new app, I spend most of my time configuring the new Helm chart.
I'm looking for a way to automate this process, or at least simplify it. Or would having the devs learn to write Kubernetes manifests be worth it in the long run?
3. As the company's AI team we rely heavily on large ML models, most of them from Hugging Face. In the past, to deploy an AI app we used Docker Compose to mount a model cache folder where we stored downloaded models, so applications wouldn't re-download them every time we reloaded or started another application using the same model. The problem is that we're now migrating to k8s, so there needs to be a way to cache these models effectively; they vary from 500MB to 15GB in size. I'm currently considering a hostPath PV or an NFS ReadWriteMany volume so every node can access the models.
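The NFS-backed ReadWriteMany option from point 3 might look roughly like this; the server, export path, and size are placeholders:

```yaml
# Static NFS-backed PV plus a claim that binds to it (all values placeholders)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: model-cache
spec:
  capacity:
    storage: 200Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.internal.example   # placeholder NFS server
    path: /exports/model-cache     # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-cache
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""             # bind to the static PV above
  volumeName: model-cache
  resources:
    requests:
      storage: 200Gi
```

Pods on any node can then mount the claim (for example, pointing the Hugging Face cache directory at the mount) so each model downloads once and is reused cluster-wide.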
Any suggestions or comments about the system are welcome.
https://redd.it/1ishqn7
@r_devops
YAML watch
Hey, I made a cool YAML watch face for Android-based watches; LMK what you think!
I used it to practice parsing YAML more easily and quickly :)
https://play.google.com/store/apps/details?id=com.balappfacewatch.dev
https://redd.it/1isigl4
@r_devops
AWS Security groups and Facebook webhooks
Hello,
I'm implementing a WhatsApp Business chatbot, and I need to allow Facebook's addresses through so the incoming webhook calls can reach us.
When I looked it up, I ran the command and got around 900 addresses, and they say the list changes periodically.
https://developers.facebook.com/docs/whatsapp/cloud-api/guides/set-up-webhooks#ip-addresses
How can I add all those addresses? Has anyone run into this problem and solved it?
Thank you !
https://redd.it/1isgfgb
@r_devops
Help: Nexus 3 on macOS Docker – Not Accessible
Hey everyone,
I’m running Sonatype Nexus 3 on macOS using Docker:
docker run -d --platform=linux/amd64 -p 8081:8081 -p 8083:8083 --name nexus -v nexus-data:/nexus-data sonatype/nexus3
The container is running, logs show no errors, but **https://localhost:8081** doesn’t load.
Tried:
✅ Restarting Docker & Nexus
✅ Removing & recreating the container
✅ Checking ports & logs
Anyone faced this issue on macOS? Could it be a networking/Docker Desktop problem? Appreciate any help! 🙏
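One assumption worth ruling out before blaming Docker networking: Nexus serves plain HTTP on 8081 by default, so the https:// scheme in the post may itself be the problem. A quick diagnostic sketch:

```shell
# Nexus listens on plain HTTP on 8081 by default; try http:// first
curl -I http://localhost:8081

# Confirm the container's port mapping and that Nexus finished booting
docker port nexus
docker logs nexus 2>&1 | grep -i "started sonatype nexus"
```

On Apple Silicon the `--platform=linux/amd64` emulation also makes startup slow, so the port can legitimately refuse connections for a few minutes until the "Started Sonatype Nexus" line appears in the logs.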
https://redd.it/1isojuw
@r_devops
Using Atlantis for Terraform Deploys
I had been using #Terraform in my homelab quite heavily to provision LXC containers and VMs in Proxmox, Git repositories in Gitea, and dummy AWS infrastructure in #Localstack via GitHub Actions or GitLab CI/CD, until some time ago I replaced that with a tool called #Atlantis, which runs your Terraform deploys in pull requests.
In this blog post I talk about what Atlantis is and why you would need it; at the bottom of the article is a link on how to deploy Atlantis for use with GitLab:
https://ruan.dev/blog/2024/07/31/unleashing-terraform-automation-with-atlantis-an-overview?utm_source=reddit
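For a flavor of what driving Atlantis looks like, a minimal repo-level config is something like this (a sketch; the project name and directory are placeholders, not taken from the article):

```yaml
# atlantis.yaml at the repo root (sketch; name/dir are placeholders)
version: 3
projects:
  - name: homelab
    dir: terraform/homelab
    autoplan:
      enabled: true
      when_modified: ["*.tf", "*.tfvars"]
```

With that in place, Atlantis posts `terraform plan` output on the PR automatically when matching files change, and an `atlantis apply` comment on the PR triggers the apply.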
https://redd.it/1isx91f
@r_devops
Migrating Traditional Workloads to AWS – Any Gotchas to Watch Out For?
We’re planning to migrate our on-premises workloads to AWS, but I keep hearing horror stories about cost overruns, security risks, and performance issues. What are the biggest challenges, and how do we ensure a smooth transition?
https://redd.it/1iszukv
@r_devops
DevOps Engineer vs. Software Engineer: Which Career Path is More Future-Proof?
I’m a software developer with 3 years of experience, and I’m considering shifting into DevOps. However, I’m unsure whether I should completely transition or stick to a software engineering path. Can anyone share insights on the key differences in roles, salaries, and long-term career growth?
https://redd.it/1it046j
@r_devops
Little Project Management Project
Firstly, sorry about the title; I really could not figure out an adequate one for what this is about and what I have done. Also, I hope this is allowed.
For a little background: I have basically no budget for my hobbies when it comes to software and development, so free is best. I was using ClickUp but apparently maxed out the free tier, then tried Jira, where I can't create more than one list, among other things. I was just struggling to find software I could enjoy for this type of stuff.
So I spent the last 4 hours on this project and already have a basic front-end with rudimentary project management features: a frontend using Next.js, a backend using NestJS, and a localhost Docker container for PostgreSQL. I just felt like I would share this.
I went from not having any software I like to making my own custom thing that will have all my needs met and more. I didn't even use AI, as some people do nowadays, but it will have access to my custom self-learning AI model that I built from scratch; that's a whole other project.
Also, I just wondered if there are other people in this community who can learn these things super fast and then just know them forever. I self-taught myself half of this stuff in the last 4 hours, and didn't even know any JavaScript except Minecraft-related stuff until now.
https://redd.it/1it0uqo
@r_devops