How to go about learning and implementing Devops as a backend developer?
Hi, I am a backend developer (student) trying to upskill by learning DevOps. I recently got a server that I would like to host and work on.
Coming from backend development, I have a decent grip on the routine server tasks, but I would like to learn more about DevOps, so how should I go about it?
I prefer books, so are there any books that explain DevOps theory?
Which technologies should I learn to be able to operate my server and also stay professionally relevant?
https://redd.it/1hxcvsv
@r_devops
Reddit
From the devops community on Reddit
Explore this post and more from the devops community
Strategies for Containerized development environments?
What tools or strategies have you found most effective for streamlining containerized development environments? I'm curious how others have tackled the challenges mentioned in this blog about improving dev workflows and reducing build times.
Or on the flip side, if you're a container hater, I'd love to know why.
https://redd.it/1hxhel1
@r_devops
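One strategy that comes up a lot for this (my sketch, not from the linked blog): bind-mount the source code for hot reload while keeping dependencies in a named volume, so routine code changes never trigger a rebuild or reinstall. Service names and paths here are made up:

```yaml
# Hypothetical docker-compose dev environment.
services:
  web:
    build: .
    volumes:
      - ./src:/app/src           # live source: edits show up without a rebuild
      - deps:/app/node_modules   # dependencies cached across container restarts
    ports:
      - "8000:8000"
volumes:
  deps:                          # named volume survives `docker compose down`
```

The same split (code bind-mounted, dependencies in a volume or build-cache layer) is what most "faster dev container" setups boil down to.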
www.getambassador.io
Build Faster & Smarter: Containerized Development Environments
Build faster with containerized development environments. Improve scalability, streamline workflows, and ensure consistent performance across all stages.
Does anyone here market infrastructure and cloud templates?
I've been in the DevOps space for many years and have worked with many cloud and "DevOps" MSPs, all the while watching the application side of the equation evolve quite differently. All the cloud MSPs I have worked with were very hesitant to use shared frameworks and develop reusable artifacts between projects, because their business model was selling time. I've also seen a lot of SaaS offerings spring up. But when I compare that to the application space, I notice a thriving market of templates, themes, plugins, and so on. So I was wondering, from other experienced DevOps folks here, whether this is a thing in any circles, because given that we aim for hyper-automation and infrastructure as code, templates would seem a perfect balance between fully custom and uncustomizable SaaS.
https://redd.it/1hxkpm6
@r_devops
Implementing LoadBalancer services on Cluster API KubeVirt clusters using Cloud Provider KubeVirt
Hi everyone!
I wrote an article about configuring Kubernetes LoadBalancer services on Cluster API managed KubeVirt clusters with Cloud Provider KubeVirt.
This is the first article in a series I'm starting about taking Kubernetes clusters from where the Cluster API documentation leaves you to GitOps managed production clusters.
The next article in the series will be about configuring workloads on Cluster API managed clusters with Argo CD.
In my opinion, the most interesting part of the article hides in the linked Helm chart, which configures a cluster with a centralized telemetry exporter, secret management, and more.
I use the chart with an Argo CD ApplicationSet for configuring clusters in GitOps style.
I am very much a beginner in technical writing, and would appreciate any feedback you have.
https://redd.it/1hxkjrh
@r_devops
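For anyone who hasn't read the article yet: the object at the center of this setup is an ordinary Service of type LoadBalancer in the workload cluster; Cloud Provider KubeVirt watches these and provisions the backing resources in the infrastructure cluster. A hypothetical example of such a Service (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer   # fulfilled by Cloud Provider KubeVirt on these clusters
  selector:
    app: demo
  ports:
    - port: 80         # port exposed by the load balancer
      targetPort: 8080 # port the pods actually listen on
```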
Sneakybugs
Implementing LoadBalancer services on Cluster API KubeVirt clusters using Cloud Provider KubeVirt
Had trouble getting load balancer services working on Cluster API KubeVirt clusters? This guide will get you sorted out.
Ephemeral environments for open merge requests on Azure with a microservices architecture
Hello Everyone,
I am new to DevOps and I want to create a pipeline on Azure that creates a deployment when a merge request is created or updated and destroys it when it's closed.
I'm looking for hints or resources I can read, and would also appreciate your opinions on whether that's doable, given that the frontend and backend currently live in different git repositories, though I can consider bringing both under one repo.
Thanks in advance
https://redd.it/1hxlojy
@r_devops
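The create/update half of this is the easy part. A sketch of an Azure Pipelines definition reacting to pull requests, assuming AKS and a namespace per PR (the job name, pool, and kubectl steps are all placeholders):

```yaml
# Runs when a PR targeting main is opened or updated.
pr:
  branches:
    include:
      - main

jobs:
  - job: deploy_preview
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: |
          NS=pr-$(System.PullRequest.PullRequestId)
          kubectl create namespace "$NS" --dry-run=client -o yaml | kubectl apply -f -
          # deploy the frontend and backend manifests/charts into $NS here
        displayName: Deploy per-PR environment
```

Teardown on close is the awkward part: as far as I know Azure Pipelines has no built-in "PR closed" trigger, so a common workaround is a service hook (or a scheduled cleanup job) that deletes namespaces belonging to closed PRs. This works whether the two services live in one repo or two, though a single repo makes the deploy step simpler.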
Recent Interview Experience
So today I had an interview for an Ops Engineer role at a company. Going through the job description, I felt the requirements aligned well with my background; the JD described the Ops Engineer as someone who would be installing, updating, and configuring products.
I have good knowledge of Infrastructure as Code (IaC), provisioning tools like Terraform, and configuration management tools like Ansible. Apart from that, I have high-level knowledge of modern DevOps tools and platforms, like Docker for containerization and orchestration tools like Kubernetes.
Today, as I said, I had my interview. While introducing myself, when I pointed out that I know all that, I was interrupted by one of the interviewers, who informed me that since they deal with legacy systems, they have yet to adopt those DevOps practices and are mostly doing manual maintenance of applications. So there is little to no automation in the process.
Then they grilled me on core Linux concepts and commands, although I did mention I was familiar only with file-system and networking commands. I was asked about different Linux distributions and how to schedule processes in Linux. Then came some networking questions, the basic ones: the OSI model, TCP/IP, DNS. I was asked about IPv4 and IPv6; unfortunately, I could not recall the difference between them. Up to this point the interview was going fine, as the questions were quite basic.
Then one of the interviewers asked me to explain how to respond to an incident involving a spike in CPU usage. I explained a few steps, but he wasn't quite satisfied and asked me to lay the steps out sequentially. And then there were a few questions on how to respond to end-user feedback about production issues, and so on.
Honestly, I was a bit disappointed at the end of the interview, as I was hoping to be asked about containerization, cloud platforms, and tools like Terraform and Ansible.
https://redd.it/1hxo1d6
@r_devops
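For what it's worth, the "sequential steps" answer interviewers usually fish for with the CPU-spike question looks something like this (my sketch; plain Linux, nothing product-specific assumed):

```shell
# 1. Confirm the spike: compare the load average against the core count.
cat /proc/loadavg
nproc
# 2. Find the top CPU consumers (procps assumed):
#    ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 6
# 3. Drill into the worst offender (replace <pid> with the PID found above):
#    cat /proc/<pid>/status
# 4. Distinguish CPU-bound work from I/O wait:  vmstat 1 5
# 5. Mitigate (renice, restart, or scale out), then keep logs for root cause.
```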
I'd like to transition my small web app which uses docker-compose to kubernetes. My friend tells me it's a full time job/too much overhead. Thoughts?
My expertise is as a full stack Django/React developer. Through Udemy + testdriven.io courses and some grit, I got my backend running last year on a DigitalOcean droplet and managed Postgres db. It works great and I will likely keep it this way for another year.
I would like to learn kubernetes over the next year and transition my app over for these reasons:
1. Downtime. I haven't had much traffic, so it's been fine to manually upload new builds to ghcr, deploy, SSH into my droplet, and run the migration, but I want to minimize that.
2. I just want to understand Kubernetes. I will eventually hire someone to do this full time (when my business takes off; I'm an optimist!), but since I'm a bit curious and a control freak, the idea of not knowing how to debug my own web application, my core business, is scary to me.
3. If my servers are getting battered or I want to replicate my app to different regions, I'd like to know how to actually scale the pods.
My buddy is a professional DevOps engineer and he says it's a bad idea: I'd likely spend all my time doing DevOps work when I should be working on my core business. He specifically mentions how you constantly have to update to new versions of Kubernetes. But I also wonder if his experience comes from working at big companies.
When I read the threads here, a lot of it is over my head. Helm charts, provisioning, different flavors of k8s, Ansible; I've heard a lot of these terms, but it seems like a lot. That said, I know many of you work at companies with SLAs requiring 99.9+% uptime and traffic I can't even fathom, so maybe I'm psyching myself out for no reason?
This is getting long, so if Kubernetizing my app is a bad idea, could anyone recommend a more intermediate approach?
Thanks in advance!
https://redd.it/1hxpxfq
@r_devops
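On reason 1 specifically: the Kubernetes feature that removes deploy downtime is a rolling-update Deployment gated by a readiness probe. A minimal sketch (the names, image, and health endpoint are all hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-backend            # hypothetical name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0           # keep serving while the new pod comes up
      maxSurge: 1
  selector:
    matchLabels:
      app: django-backend
  template:
    metadata:
      labels:
        app: django-backend
    spec:
      containers:
        - name: web
          image: ghcr.io/you/backend:v2   # hypothetical image
          readinessProbe:                 # traffic only after the app is healthy
            httpGet:
              path: /healthz
              port: 8000
```

Worth noting that a rolling deploy script over two containers on the existing droplet gets much of the same benefit with far less machinery, which may be the "intermediate approach" being asked for.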
testdriven.io
Test-Driven Development, Microservices, Web Development Courses from TestDriven.io
Learn how to build, test, and deploy microservices with our web development tutorials powered by Docker, Flask, React, Django, and Angular. View the courses here.
Resume Review for DevOps/Cloud Engineer Positions (Mid)
Hi everyone,
I’ve been updating my resume to improve my chances of securing a DevOps Engineer or Cloud Engineer role and would really appreciate feedback from others in the field.
Unfortunately, most of my friends find the technical details on my resume a bit hard to understand, so I’m hoping someone with relevant experience could offer some advice.
I have 3 years of work experience, but I've been getting rejected at screening for roles I would seem to easily qualify for.
Here are a couple of specific areas where I could use some input:
Am I effectively communicating my skills and previous experience in a way that’s clear and engaging for recruiters or hiring managers?
Does the overall layout and structure work well?
Thanks in advance for your help!
Here's my resume: https://imgur.com/a/QhGA8j8
https://redd.it/1hxgtzt
@r_devops
How to Enable Swap in EKS
Hi all,
I just published a quick guide on enabling swap in EKS; check it out:
https://medium.com/@eliran89c/how-to-enable-swap-in-your-eks-cluster-in-under-5-minutes-b87524cc821b
https://redd.it/1hxydvz
@r_devops
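For context on what "enabling swap" involves at the Kubernetes level (my summary of the upstream kubelet knobs, not necessarily the article's exact steps): node swap support is gated behind the NodeSwap feature gate, and the kubelet must be told not to fail when swap is present:

```yaml
# Relevant KubeletConfiguration fields for swap support (Kubernetes 1.28+).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false        # don't refuse to start on a node with swap enabled
featureGates:
  NodeSwap: true
memorySwap:
  swapBehavior: LimitedSwap   # only Burstable pods may use swap, within limits
```

On EKS this configuration has to reach the kubelet via the node bootstrap (e.g. user data), which is presumably what the linked guide walks through.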
Medium
How to Enable Swap in Your EKS Cluster in Under 5 Minutes
Two years ago, I wrote an article focusing on Kubernetes CPU. If you haven’t read it yet, go check it out! It’s a great article😊. In…
How to transfer free app to domain name?
I have an app hosted on a free PythonAnywhere account, and I now also own a domain name via GoDaddy. How do I link that domain name to my site?
https://redd.it/1hy02sv
@r_devops
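For what it's worth, pointing a GoDaddy domain at an externally hosted app generally comes down to one DNS record. A sketch in zone-file style (the CNAME target below is hypothetical; PythonAnywhere's "Web" tab shows the real one, and as far as I know custom domains there require a paid account):

```
; In GoDaddy's DNS manager: a CNAME for www pointing at the host's target.
www   IN  CNAME  webapp-XXXX.pythonanywhere.com.
```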
Self-Hosted Drone CI with issues.
Greetings everyone.
I am trying to set up a self-hosted CI/CD pipeline.
The development server running Drone CI in Docker is on Ubuntu 24.04.1 LTS.
Currently I have Drone CI in Docker containers (both server and runner), and a private Docker registry on a separate server.
Once a push is sent to GitHub, it triggers a webhook which sets Drone CI to work.
I've been tinkering with this for a few days now and have tried various solutions.
In short, I want to be able to push my code to GitHub; the webhook is called and my local development server with Drone CI kicks in, where it pulls the code, caches the dependencies for backend and frontend, runs the unit tests, security checks and such, and then pushes the image to the private registry, which is used to spin up the development site.
I've been having issues with the caching part, where it doesn't actually store anything in the cache folder.
I've also had issues where the Drone runner stalls while pushing the image to the private registry, retrying over and over, but not always.
Here is the .drone.yml:
kind: pipeline
type: docker
name: default
steps:
# Version 0.1
# Generate Cache Key
- name: generate-cache-key
image: alpine
commands:
- echo "Generating Cache Key..."
- echo -n "$(md5sum package.json | awk '{print $1}')" > .cache_key
# Debug Cache Key Location
- name: debug-cache-key
image: alpine
commands:
- echo "Current Directory:"
- pwd
- echo "Listing contents of the Directory:"
- ls -la
- echo "Cache Key:"
- cat .cache_key
# Restore Cache for Backend Dependencies
# - name: restore-cache-backend
# image: meltwater/drone-cache:latest
# pull: if-not-exists
# environment:
# NUGET_PACKAGES: /tmp/cache/.nuget/packages
# settings:
# backend: "filesystem"
# restore: true
# cache_key: cache-backend-{{ .Commit.Branch }}
# archive_format: "gzip"
# volumes:
# - name: cache
# path: /tmp/cache
# Build Backend Image for Development
# - name: build-backend-dev
# image: plugins/docker
# when:
# branch:
# - dev
# environment:
# NUGET_PACKAGES: /tmp/cache/.nuget/packages
# volumes:
# - name: cache
# path: /tmp/cache
# - name: dockersock
# path: /var/run/docker.sock
# settings:
# dockerfile: ./backend/Dockerfile.dev
# context: ./backend
# repo: registry.local/my-backend
# tags: ${DRONE_COMMIT_SHA}
# purge: false
# Build Backend Image for Production
# - name: build-backend-prod
# image: plugins/docker
# when:
# branch:
# - main
# environment:
# NUGET_PACKAGES: /tmp/cache/.nuget/packages
# volumes:
# - name: cache
# path: /tmp/cache
# - name: dockersock
# path: /var/run/docker.sock
# settings:
# dockerfile: ./backend/Dockerfile.prod
# context: ./backend
# repo: registry.local/my-backend
# tags: ${DRONE_COMMIT_SHA}
# purge: false
# Check Debug Cache before Rebuild
# - name: debug-cache-before-rebuild
# image: alpine
# volumes:
# - name: cache
# path: /tmp/cache
# commands:
# - echo "Checking cache content before rebuild.."
# - ls -la /tmp/cache
# - ls -la /tmp/cache/.nuget/packages
# Rebuild Cache for Backend Dependencies
# - name: rebuild-cache-backend
# image: meltwater/drone-cache:latest
# pull: if-not-exists
# environment:
# NUGET_PACKAGES: /tmp/cache/.nuget/packages
# volumes:
# - name: cache
# path: /tmp/cache
# - name: dockersock
# path: /var/run/docker.sock
# settings:
# backend: "filesystem"
# rebuild: true
# cache_key: cache-backend-{{ .Commit.Branch }}
# archive_format: "gzip"
# purge: false
# Validate Rebuilt Cache for Backend Dependencies
# - name: debug-cache
# image: alpine
# volumes:
# - name: cache
# path: /tmp/cache
# commands:
# - ls -la /tmp/cache
# - ls -la /tmp/cache/.nuget/packages
# Restore Cache Frontend
- name: restore-cache-frontend
image: drillster/drone-volume-cache
privileged: true
volumes:
- name: cache
path: /tmp/cache
settings:
restore: true
mount:
- /tmp/cache/node_modules
cache_key: [ ".cache_key" ]
# Debug Cache Before Build
- name: debug-cache-restore
image: alpine
volumes:
- name: cache
path: /tmp/cache
commands:
- echo "Checking restored Cache..."
- ls -al /tmp/cache/node_modules
# Build Frontend Image for Development
- name: build-frontend-dev
image: plugins/docker
privileged: true
when:
branch:
- dev
environment:
PNPM_STORE_PATH: /tmp/cache/node_modules
settings:
dockerfile: ./frontend/Dockerfile.dev
context: ./frontend
repo: registry.local/my-frontend
tags: ${DRONE_COMMIT_SHA}
purge: false
build_args:
NODE_MODULES_CACHE: /tmp/cache/node_modules
volumes:
- name: cache
path: /tmp/cache
- name: dockersock
path: /var/run/docker.sock
# Debug Cache after Build
- name: debug-cache-after-build
image: alpine
volumes:
- name: cache
path: /tmp/cache
commands:
- echo "Cache after build:"
- ls -la /tmp/cache/node_modules
- du -sh /tmp/cache/node_modules
# Rebuild Cache Frontend
- name: rebuild-cache-frontend
image: drillster/drone-volume-cache
privileged: true
volumes:
- name: cache
path: /tmp/cache
settings:
rebuild: true
mount:
- /tmp/cache/node_modules
cache_key: [ ".cache_key" ]
# Build Frontend Image for Production
# - name: build-frontend-prod
# image: plugins/docker
# when:
# branch:
# - main
# environment:
# PNPM_STORE_PATH: /tmp/cache/node_modules
# settings:
# dockerfile: ./frontend/Dockerfile.prod
# context: ./frontend
# repo: registry.local/my-frontend
# tags: ${DRONE_COMMIT_SHA}
# purge: false
# # Test Backend Using Pushed Image
# - name: test-backend
# image: docker:24
# volumes:
# - name: dockersock
# path: /var/run/docker.sock
# commands:
# - docker pull registry.local/my-backend:${DRONE_COMMIT_SHA}
# - docker run --rm --entrypoint ./test-runner.sh registry.local/my-backend:${DRONE_COMMIT_SHA}
# # Test Frontend Using Pushed Image
# - name: test-frontend
# image: docker:24
# volumes:
# - name: dockersock
# path: /var/run/docker.sock
# commands:
# - docker pull registry.local/my-frontend:${DRONE_COMMIT_SHA}
# - docker run --rm --entrypoint ./test-frontend.sh registry.local/my-frontend:${DRONE_COMMIT_SHA}
# - name: static-code-analysis
# image: sonarsource/sonar-scanner-cli:latest
# environment:
# SONAR_TOKEN:
# from_secret: SONAR_TOKEN
# commands:
# - sonar-scanner -Dsonar.projectKey=togethral -Dsonar.organization=forser -Dsonar.login=$SONAR_TOKEN -Dsonar.working.directory=/tmp/sonar
# - name: security-scan
# image: aquasec/trivy:latest
# commands:
# - trivy image registry.local/my-backend:${DRONE_COMMIT_SHA}
# - trivy image registry.local/my-frontend:${DRONE_COMMIT_SHA}
# - name: deploy
# image: docker:24
# environment:
# DOCKER_TLS_VERIFY: 1
# DOCKER_HOST: tcp://docker-hosts:2376
# commands:
# - docker stack deploy -c ci-cd/docker-scripts/docker-compose.prod.yml togethral
volumes:
- name: dockersock
host:
path: /var/run/docker.sock
- name: cache
host:
path: /var/lib/drone/cache
Here is the Dockerfile.dev:
# Use Cypress browser image with Node.js and Chrome
FROM registry.local/cypress-browsers:node-20
# Set the working directory
WORKDIR /app
# Set the cache directory for node_modules
ENV NODE_MODULES_CACHE=/tmp/cache/node_modules
# Copy the dependency files
COPY package.json pnpm-lock.yaml ./
# Install pnpm globally
RUN npm install -g pnpm
# Create and set permissions for the cache directory
RUN mkdir -p "$NODE_MODULES_CACHE" && chmod -R 777 "$NODE_MODULES_CACHE"
# Configure pnpm to use a custom store directory
RUN pnpm config set store-dir "$NODE_MODULES_CACHE"
# Install dependencies
RUN if [ "$(ls -A $NODE_MODULES_CACHE 2>/dev/null)" ]; then \
echo "Cache is valid. Skipping dependencies installation"; \
else \
echo "Cache is empty. Installing dependencies"; \
pnpm install --force --frozen-lockfile; \
fi
# Debug: Log the contents of the cache directory
RUN echo "Cache contents:" && ls -la "$NODE_MODULES_CACHE" || echo "Cache is empty"
# Copy the remaining files
COPY . .
# Ensure test script is executable
# RUN chmod +x ./test-frontend.sh
# Default entrypoint for development
CMD ["pnpm", "start"]
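The cache check in the Dockerfile hinges on a small shell idiom: `ls -A` prints nothing for an empty directory, so the quoted command substitution collapses to an empty string and `[ ... ]` evaluates false (and the space before `]` is mandatory). A standalone sketch using a throwaway temp directory in place of `$NODE_MODULES_CACHE`:

```shell
# Demonstrates the emptiness test used in the Dockerfile's cache check.
CACHE_DIR=$(mktemp -d)   # throwaway stand-in for $NODE_MODULES_CACHE

# Empty directory: `ls -A` outputs nothing, so the test is false.
if [ "$(ls -A "$CACHE_DIR" 2>/dev/null)" ]; then
  echo "cache hit"
else
  echo "cache empty"
fi

# Populate the cache and check again.
touch "$CACHE_DIR/marker"
if [ "$(ls -A "$CACHE_DIR" 2>/dev/null)" ]; then
  echo "cache hit"
fi
```

Running this prints "cache empty" followed by "cache hit".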
Haven't really toyed with CI/CD that much previously, so I got some help from ChatGPT, but that gives me more of a headache since it often references incorrect material.
Been reading the docs for the various tools but still can't figure it out.
Willing to swap out Drone CI for other CI/CD setup also if that would be recommended.
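For what it's worth, if the builds stay Docker-based, BuildKit cache mounts can replace much of the host-volume cache juggling: the pnpm store persists between builds inside the builder itself, with no cache plugin or `chmod -R 777` step. A minimal sketch — base image and store path are illustrative, not taken from the pipeline above, and note that `plugins/docker` may need BuildKit explicitly enabled:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm
# BuildKit persists this mount across builds, so the pnpm store
# survives without any external cache volume or restore/rebuild steps
RUN --mount=type=cache,target=/pnpm-store \
    pnpm config set store-dir /pnpm-store && \
    pnpm install --frozen-lockfile
COPY . .
CMD ["pnpm", "start"]
```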
https://redd.it/1hy1aym
@r_devops
Is there any way to do a production deployment of Spinnaker without using hal?
Hey guys,
I'm going to deploy Spinnaker on AWS. From what I've found in the documentation, the main idea is to deploy and set it up via the hal application, which I don't really like. The only post I found that mentions setting up Spinnaker the old-fashioned way is from Expedia (https://medium.com/expedia-group-tech/installing-spinnaker-in-the-cloud-c7f518c98dc1), but the code doesn't fully describe the whole process.
Do you know of any documentation, Helm chart, or anything similar that helps set up Spinnaker from scratch?
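One Halyard-free route worth checking is a community-maintained Helm chart; OpsMx has published one, for example. The repo URL and chart name below are recollections, not verified facts — confirm them before relying on this:

```shell
# Hypothetical sketch: install Spinnaker via a community Helm chart
# instead of Halyard. Chart repo/name are assumptions -- verify first.
helm repo add opsmx https://helmcharts.opsmx.com/
helm repo update
helm install spinnaker opsmx/spinnaker \
  --namespace spinnaker --create-namespace
```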
https://redd.it/1hy15h9
@r_devops
I got Job as a DevOps intern, but salary is too low
I am a final-year engineering student in India. I recently got a job as a DevOps intern at an AI startup; my work is mostly Kubernetes and monitoring. My salary is 10k rupees, which is around 120 USD per month. Considering the current market situation, I am confused about whether to take this job or not.
https://redd.it/1hy21cg
@r_devops
Taking My Career Seriously
Sparing most of the sob story behind everything: I wound up in DevOps by accident after the military, with a resume that looks a lot like a SysAd who transitioned to DevOps. The portion of the sob story you do get is that a major life tragedy this last year led me to actually get my shit together after a very long spiral, and part of getting my shit together was realizing I was coasting on raw intelligence without really picking up new skills.
The issue now is basically that I've been pedal to the metal catching up on skills I've neglected, taking studying seriously, all that fun stuff, but I wanted to get opinions on the best way to display that skill, especially since I don't have a degree or certs of any kind, so I'm missing the foot-in-the-door leverage having a degree gives you. I've always been a solid interviewer and test-taker, so I'm basically just looking for the best ways to get recruiting/hiring teams' attention.
The reason I'm asking this is that I've decided I'm fed up with living on the opposite side of my country from my family/friends and only seeing them at most 4 times a year, so I want to relocate, but I'm not fully remote, so this requires getting back out in the hiring arena.
If I count my military experience since I first did a SysAd thing in MOS school, I'm looking at 10 YOE. I'm in a staff/mid position currently. I have a good understanding of everything on the roadmaps.sh DevOps roadmap, and it's getting better by the day thanks to finally turning my damn life around. Also, learning quickly has always been something I'm good at, as well as flying blind with just a manual in hand.
Asking especially the seniors/principals/managers out there who have any influence on the hiring process: what would you want to see from a no-degree, no-cert candidate to get them into the actual interview pipeline? I've received the advice that having projects on GitHub is almost a waste of time if you're not 'just' a developer, and that contributing to open source projects like CNCF is a solid move. I've heard mixed things about certs, but my company covers 10k of education benefits per year, so if there are any that are solid door-openers, I've always been a good test taker. My current skills are listed at the bottom in case there's some specific showcase that anyone is aware of for a particular skill.
Open to any and all ideas/critique, really just want to go hiking in the mountains with my friends on the regular again. Completely open to a harsh 'drop to a junior role' answer if that is the move. My current salary is 145, and I'm willing to cut that down if need be, though that's obviously not the primary plan of action.
I deeply appreciate any and all input on this.
My Skills Currently:
Python
Golang
Bash/Shell
RHEL, Rocky, and Ubuntu Linux
Windows Server
Mac admin experience but I hate it lol
Docker/K8s
SaltStack (learning Ansible on my own because tbh I hate Salt with a passion and want to move somewhere that doesn't use it)
Vagrant
Jenkins
Google Cloud Platform (learning this on my own because my project uses exactly zero cloud)
I also have a pretty solid grasp of both ChatGPT's and Gemini's APIs because of personal interest, but have had zero opportunity to use this in a professional capacity
I keep getting made Scrum Master when we lose ours and I am begrudgingly good at it
Know my way around the backend of Atlassian's suite way too well (Jira, Confluence, BitBucket)
https://redd.it/1hy668e
@r_devops
OpenTofu 1.9.0 is out with many long awaited features!
OpenTofu is an IaC tool used to manage your infrastructure across clouds and environments using a declarative language. This latest release includes provider iteration, the most requested feature so far!
You can learn more at https://opentofu.org/blog/opentofu-1-9-0/.
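Based on the release announcement, provider `for_each` lets you declare one aliased provider instance per item and wire module instances to them — handy for multi-region deployments. A rough sketch (module path and variable names are illustrative; check the blog post for the exact syntax):

```hcl
variable "regions" {
  type    = set(string)
  default = ["us-east-1", "eu-west-1"]
}

# One aliased AWS provider instance per region
provider "aws" {
  alias    = "by_region"
  for_each = var.regions
  region   = each.value
}

# A hypothetical module deployed once per region,
# each instance bound to the matching provider instance
module "app" {
  source   = "./modules/app"
  for_each = var.regions
  providers = {
    aws = aws.by_region[each.value]
  }
}
```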
https://redd.it/1hy71pg
@r_devops
Notice of termination of your A Cloud Guru lifetime course access
Anyone else received this and/or got more info? What is the Complete plan as it is not on the website?
-------
Dear customer,
Thank you for using our A Cloud Guru (ACG) product. We wanted to let you know we are launching new packages that now include the best of tech and cloud, all within one platform. These new packages include integrating A Cloud Guru into the Pluralsight platform.
Our records show that you have a ACG course lifetime access account. As part of integrating A Cloud Guru into the Pluralsight platform, we are terminating your lifetime course access license to the software-as-a-service (SaaS) offering of A Cloud Guru on February 1, 2025 due to the plan being retired. This move is made in accordance with the termination for convenience clause as outlined in section 14.2 of our https://legal.pluralsight.com/policies?name=individual-terms-of-use
Please note the following details regarding the termination:
* Termination date: The termination will occur February 1, 2025. After that date, you’ll no longer have access to the lifetime course.
* Data retrieval: We may delete any personal data associated with your personal A Cloud Guru account after the termination date and aren’t responsible for any loss or deletion thereafter.
* Outstanding obligations: You have no outstanding payment obligations related to the SaaS offering prior to the termination date.
Information regarding your other subscription
Your Pluralsight or ACG subscription will soon be upgraded at no additional cost to our new package, Complete. You will receive additional notifications about this change coming soon.
https://redd.it/1hy8jm7
@r_devops
Bucardo Alternatives?
I know this probably isn't a "true" DevOps question, but lucky me, I've inherited our ex-DBA's responsibilities! Anyway, we have an on-prem Postgres cluster in a master-standby setup currently using streaming replication. I'm looking to migrate this into RDS, more specifically looking to replicate into RDS without disrupting our current master. Eventually, after testing is complete, we would do a cutover to the RDS instance. As far as we are concerned, the master is "untouchable".
I've been weighing my options:
Bucardo seems not possible as it would require adding triggers to tables and I can't do any DDL on a secondary as they are read-only. It would have to be set up on the master (which is a no-no here). And the app/db is so fragile and sensitive to latency everything would fall down (I'm working on fixing this next lol)
Streaming replication - can't do this into RDS
Logical replication - I don't think there is a way to set this up on one of my secondaries as they are already hooked into the streaming setup? This option is a maybe I guess, but I'm really unsure.
pg_dump/restore - this isn't feasible as it would require too much downtime, and my RDS instance also needs to be fully in sync when it is time for cutover.
I've been trying to weigh my options, and from what I can surmise there are no really good ones. Other than looking for a new job XD
I'm curious if anybody else has had a similar experience and how they were able to overcome, thanks in advance!
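On the logical-replication option: logical decoding from a standby only became possible in PostgreSQL 16, so on older versions the publication and replication slot do have to live on the primary — which collides with the "untouchable master" constraint, though a publication itself is a fairly lightweight change. A rough sketch of what that path looks like, with placeholder hostnames, database names, and credentials:

```shell
# Hypothetical sketch of native logical replication into RDS.
# Hostnames, db names, and credentials below are placeholders.
# 1. On the source primary (requires wal_level = logical):
psql -h source-primary -d appdb \
  -c "CREATE PUBLICATION rds_migration FOR ALL TABLES;"
# 2. Copy the schema to RDS first (logical replication moves rows, not DDL):
pg_dump -h source-primary -d appdb --schema-only | psql -h my-rds-host -d appdb
# 3. On the RDS target:
psql -h my-rds-host -d appdb \
  -c "CREATE SUBSCRIPTION rds_sub
        CONNECTION 'host=source-primary dbname=appdb user=repl password=...'
        PUBLICATION rds_migration;"
```

AWS DMS is the other commonly cited route for an RDS migration with ongoing change capture, and it has the same underlying requirement of reading changes from the primary.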
https://redd.it/1hyaefd
@r_devops