How to transfer a free app to a domain name?
I have an app hosted on a free PythonAnywhere account. I also now own a domain name via GoDaddy. How do I link that domain name to my site?
https://redd.it/1hy02sv
@r_devops
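One note on the mechanics (based on PythonAnywhere's documented setup, with placeholder names): custom domains are only available on paid PythonAnywhere plans; once upgraded, the link itself is just a CNAME record configured at GoDaddy, roughly:

```text
; Hypothetical DNS record at the registrar (replace the target with the
; one shown on your PythonAnywhere "Web" tab; webapp-XXXX is a placeholder):
www.yourdomain.com.   CNAME   webapp-XXXX.pythonanywhere.com.
```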
Self-Hosted Drone CI with issues.
Greetings everyone.
I am trying to set up a self-hosted CI/CD pipeline.
The development server running Drone CI in Docker is on Ubuntu 24.04.1 LTS.
Currently I have Drone CI in Docker containers (both server and runner), and a private Docker registry on a separate server.
Once a push is sent to GitHub, it triggers a webhook that starts Drone CI.
I've been tinkering with this for a few days now and tried various solutions.
In short, I want to push my code to GitHub, have the webhook fire and activate Drone CI on my local development server, where it pulls the code, caches the backend and frontend dependencies, runs the unit tests, security checks and so on, and then pushes the images to the private registry, which are used to spin up the development site.
I've been having issues with the caching part, where it doesn't actually store anything in the cache folder.
I've also been having intermittent issues where the Drone runner stalls while pushing the image to the private registry and retries over and over.
Here is the .drone.yml:
kind: pipeline
type: docker
name: default

steps:
# Version 0.1
# Generate Cache Key
- name: generate-cache-key
  image: alpine
  commands:
  - echo "Generating Cache Key..."
  - echo -n "$(md5sum package.json | awk '{print $1}')" > .cache_key
# Debug Cache Key Location
- name: debug-cache-key
  image: alpine
  commands:
  - echo "Current Directory:"
  - pwd
  - echo "Listing contents of the Directory:"
  - ls -la
  - echo "Cache Key:"
  - cat .cache_key
# Restore Cache for Backend Dependencies
# - name: restore-cache-backend
#   image: meltwater/drone-cache:latest
#   pull: if-not-exists
#   environment:
#     NUGET_PACKAGES: /tmp/cache/.nuget/packages
#   settings:
#     backend: "filesystem"
#     restore: true
#     cache_key: cache-backend-{{ .Commit.Branch }}
#     archive_format: "gzip"
#   volumes:
#   - name: cache
#     path: /tmp/cache
# Build Backend Image for Development
# - name: build-backend-dev
#   image: plugins/docker
#   when:
#     branch:
#     - dev
#   environment:
#     NUGET_PACKAGES: /tmp/cache/.nuget/packages
#   volumes:
#   - name: cache
#     path: /tmp/cache
#   - name: dockersock
#     path: /var/run/docker.sock
#   settings:
#     dockerfile: ./backend/Dockerfile.dev
#     context: ./backend
#     repo: registry.local/my-backend
#     tags: ${DRONE_COMMIT_SHA}
#     purge: false
# Build Backend Image for Production
# - name: build-backend-prod
#   image: plugins/docker
#   when:
#     branch:
#     - main
#   environment:
#     NUGET_PACKAGES: /tmp/cache/.nuget/packages
#   volumes:
#   - name: cache
#     path: /tmp/cache
#   - name: dockersock
#     path: /var/run/docker.sock
#   settings:
#     dockerfile: ./backend/Dockerfile.prod
#     context: ./backend
#     repo: registry.local/my-backend
#     tags: ${DRONE_COMMIT_SHA}
#     purge: false
# Check Debug Cache before Rebuild
# - name: debug-cache-before-rebuild
#   image: alpine
#   volumes:
#   - name: cache
#     path: /tmp/cache
#   commands:
#   - echo "Checking cache content before rebuild.."
#   - ls -la /tmp/cache
#   - ls -la /tmp/cache/.nuget/packages
# Rebuild Cache for Backend Dependencies
# - name: rebuild-cache-backend
#   image: meltwater/drone-cache:latest
#   pull: if-not-exists
#   environment:
#     NUGET_PACKAGES: /tmp/cache/.nuget/packages
#   volumes:
#   - name: cache
#     path: /tmp/cache
#   - name: dockersock
#     path: /var/run/docker.sock
#   settings:
#     backend: "filesystem"
#     rebuild: true
#     cache_key: cache-backend-{{ .Commit.Branch }}
#     archive_format: "gzip"
#     purge: false
# Validate Rebuilt Cache for Backend Dependencies
# - name: debug-cache
#   image: alpine
#   volumes:
#   - name: cache
#     path: /tmp/cache
#   commands:
#   - ls -la /tmp/cache
#   - ls -la /tmp/cache/.nuget/packages
# Restore Cache Frontend
- name: restore-cache-frontend
  image: drillster/drone-volume-cache
  privileged: true
  volumes:
  - name: cache
    path: /tmp/cache
  settings:
    restore: true
    mount:
    - /tmp/cache/node_modules
    cache_key: [ ".cache_key" ]
# Debug Cache Before Build
- name: debug-cache-restore
  image: alpine
  volumes:
  - name: cache
    path: /tmp/cache
  commands:
  - echo "Checking restored Cache..."
  - ls -al /tmp/cache/node_modules
# Build Frontend Image for Development
- name: build-frontend-dev
  image: plugins/docker
  privileged: true
  when:
    branch:
    - dev
  environment:
    PNPM_STORE_PATH: /tmp/cache/node_modules
  settings:
    dockerfile: ./frontend/Dockerfile.dev
    context: ./frontend
    repo: registry.local/my-frontend
    tags: ${DRONE_COMMIT_SHA}
    purge: false
    build_args:
      NODE_MODULES_CACHE: /tmp/cache/node_modules
  volumes:
  - name: cache
    path: /tmp/cache
  - name: dockersock
    path: /var/run/docker.sock
# Debug Cache after Build
- name: debug-cache-after-build
  image: alpine
  volumes:
  - name: cache
    path: /tmp/cache
  commands:
  - echo "Cache after build:"
  - ls -la /tmp/cache/node_modules
  - du -sh /tmp/cache/node_modules
# Rebuild Cache Frontend
- name: rebuild-cache-frontend
  image: drillster/drone-volume-cache
  privileged: true
  volumes:
  - name: cache
    path: /tmp/cache
  settings:
    rebuild: true
    mount:
    - /tmp/cache/node_modules
    cache_key: [ ".cache_key" ]
# Build Frontend Image for Production
# - name: build-frontend-prod
#   image: plugins/docker
#   when:
#     branch:
#     - main
#   environment:
#     PNPM_STORE_PATH: /tmp/cache/node_modules
#   settings:
#     dockerfile: ./frontend/Dockerfile.prod
#     context: ./frontend
#     repo: registry.local/my-frontend
#     tags: ${DRONE_COMMIT_SHA}
#     purge: false
# # Test Backend Using Pushed Image
# - name: test-backend
#   image: docker:24
#   volumes:
#   - name: dockersock
#     path: /var/run/docker.sock
#   commands:
#   - docker pull registry.local/my-backend:${DRONE_COMMIT_SHA}
#   - docker run --rm --entrypoint ./test-runner.sh registry.local/my-backend:${DRONE_COMMIT_SHA}
# # Test Frontend Using Pushed Image
# - name: test-frontend
#   image: docker:24
#   volumes:
#   - name: dockersock
#     path: /var/run/docker.sock
#   commands:
#   - docker pull registry.local/my-frontend:${DRONE_COMMIT_SHA}
#   - docker run --rm --entrypoint ./test-frontend.sh registry.local/my-frontend:${DRONE_COMMIT_SHA}
# - name: static-code-analysis
#   image: sonarsource/sonar-scanner-cli:latest
#   environment:
#     SONAR_TOKEN:
#       from_secret: SONAR_TOKEN
#   commands:
#   - sonar-scanner -Dsonar.projectKey=togethral -Dsonar.organization=forser -Dsonar.login=$SONAR_TOKEN
#     -Dsonar.working.directory=/tmp/sonar
# - name: security-scan
#   image: aquasec/trivy:latest
#   commands:
#   - trivy image registry.local/my-backend:${DRONE_COMMIT_SHA}
#   - trivy image registry.local/my-frontend:${DRONE_COMMIT_SHA}
# - name: deploy
#   image: docker:24
#   environment:
#     DOCKER_TLS_VERIFY: 1
#     DOCKER_HOST: tcp://docker-hosts:2376
#   commands:
#   - docker stack deploy -c ci-cd/docker-scripts/docker-compose.prod.yml togethral

volumes:
- name: dockersock
  host:
    path: /var/run/docker.sock
- name: cache
  host:
    path: /var/lib/drone/cache
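As a quick sanity check, the generate-cache-key step above can be reproduced outside Drone; this sketch assumes only coreutils and produces the same 32-character MD5 key the pipeline writes to .cache_key:

```shell
# Reproduce the pipeline's generate-cache-key step locally.
# Create a stand-in package.json so the example is self-contained.
printf '{"name": "demo", "dependencies": {}}' > /tmp/package.json
# Hash the dependency manifest; the key changes only when the file changes.
md5sum /tmp/package.json | awk '{print $1}' | tr -d '\n' > /tmp/.cache_key
echo "cache key: $(cat /tmp/.cache_key)"
```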
Here is the Dockerfile.dev:
# Use Cypress browser image with Node.js and Chrome
FROM registry.local/cypress-browsers:node-20
# Set the working directory
WORKDIR /app
# Set the cache directory for node_modules
ENV NODE_MODULES_CACHE=/tmp/cache/node_modules
# Copy the dependency files
COPY package.json pnpm-lock.yaml ./
# Install pnpm
RUN npm install -g pnpm
# Create and set permissions for the cache directory
RUN mkdir -p "$NODE_MODULES_CACHE" && chmod -R 777 "$NODE_MODULES_CACHE"
# Configure pnpm to use a custom store directory
RUN pnpm config set store-dir "$NODE_MODULES_CACHE"
# Install dependencies
RUN if [ "$(ls -A $NODE_MODULES_CACHE 2>/dev/null)" ]; then \
echo "Cache is valid. Skipping dependencies installation"; \
else \
echo "Cache is empty. Installing dependencies"; \
pnpm install --force --frozen-lockfile; \
fi
# Debug: Log the contents of the cache directory
RUN echo "Cache contents:" && ls -la "$NODE_MODULES_CACHE" || echo "Cache is empty"
# Copy the remaining files
COPY . .
# Ensure test script is executable
# RUN chmod +x ./test-frontend.sh
# Default entrypoint for development
CMD ["pnpm", "start"]
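The cache check in that RUN step can be exercised on its own outside Docker; this sketch (the directory path is a placeholder) shows the empty vs. non-empty branches:

```shell
# Stand-alone version of the Dockerfile's cache check.
NODE_MODULES_CACHE=/tmp/demo-pnpm-store
rm -rf "$NODE_MODULES_CACHE"
mkdir -p "$NODE_MODULES_CACHE"

check_cache() {
  # Non-empty `ls -A` output means the cache already has content.
  if [ "$(ls -A "$NODE_MODULES_CACHE" 2>/dev/null)" ]; then
    echo "Cache is valid. Skipping dependencies installation"
  else
    echo "Cache is empty. Installing dependencies"
  fi
}

check_cache                               # prints the "empty" branch
touch "$NODE_MODULES_CACHE/left-over.tgz"
check_cache                               # prints the "valid" branch
```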
I haven't really played with CI/CD much before, so I got some help from ChatGPT, but that gave me more headaches since it often references incorrect material.
I've been reading the docs for the various tools but still can't figure it out.
I'm willing to swap out Drone CI for another CI/CD setup if that would be the recommendation.
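On the caching issue itself: a host-volume cache like the restore/rebuild steps above boils down to copying a directory between the host volume and the workspace. This sketch (all paths invented; assumed behavior, not the drillster/drone-volume-cache implementation) shows the round trip that should happen between builds:

```shell
# Minimal model of a restore/rebuild volume cache.
CACHE=/tmp/demo-cache/node_modules    # stands in for the host cache volume
WS=/tmp/demo-workspace                # stands in for the build workspace
rm -rf "$CACHE" "$WS"
mkdir -p "$CACHE" "$WS/node_modules"

# Build 1, rebuild step: after installing, the store is copied into the cache.
echo 1.0.0 > "$WS/node_modules/left-pad"
cp -r "$WS/node_modules/." "$CACHE/"

# Build 2, restore step: a fresh workspace gets the cached store back.
rm -rf "$WS"
mkdir -p "$WS/node_modules"
cp -r "$CACHE/." "$WS/node_modules/"
cat "$WS/node_modules/left-pad"   # → 1.0.0
```

One likely culprit in the pipeline above: plugins/docker builds the image inside its own nested Docker daemon, so files written during the image build generally never touch the step's /tmp/cache mount, which would leave the cache folder empty.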
https://redd.it/1hy1aym
@r_devops
Is there any way to do a production deployment of Spinnaker without using hal?
Hey guys,
I'm going to deploy Spinnaker on AWS. As far as I've found in the documentation, the main idea is to deploy and set it up via the hal application, which I don't really like. The only post I found that mentions setting up Spinnaker the old-fashioned way is from Expedia, https://medium.com/expedia-group-tech/installing-spinnaker-in-the-cloud-c7f518c98dc1 , but the code doesn't fully describe the whole process. Do you know of any documentation, Helm chart, or anything similar that helps set up Spinnaker from scratch?
https://redd.it/1hy15h9
@r_devops
I got a job as a DevOps intern, but the salary is too low
I am a final-year engineering student in India. I recently got a job as a DevOps intern at an AI startup; my work is mostly Kubernetes and monitoring. My salary is 10k rupees, which is around 120 USD per month. Considering the current market situation, I am confused about whether to take this job or not.
https://redd.it/1hy21cg
@r_devops
Taking My Career Seriously
Sparing you most of the sob story, I wound up in DevOps by accident after the military, with a resume that looks a lot like a SysAd who transitioned to DevOps. The portion of the sob story you do get: a major life tragedy this last year led me to actually get my shit together after a very long spiral, and part of that was realizing I was coasting on raw intelligence without really picking up new skills.
The issue now is that I've been pedal to the metal catching up on skills I've neglected, taking studying seriously, all that fun stuff, but I wanted opinions on the best way to display that skill, especially since I don't have a degree or certs of any kind, so I'm missing the foot-in-the-door leverage a degree gives you. I've always been a solid interviewer and test-taker, so I'm basically looking for the best ways to get recruiting/hiring teams' attention.
The reason I'm asking is that I've decided I'm fed up with living on the opposite side of the country from my family and friends and only seeing them at most 4 times a year, so I want to relocate; but I'm not fully remote, so this means getting back out into the hiring arena.
If I count my military experience since I first did a SysAd thing in MOS school, I'm looking at 10 YOE. I'm in a staff/mid position currently. I have a good understanding of everything on the roadmaps.sh DevOps roadmap, and it's getting better by the day thanks to finally turning my damn life around. Learning quickly has always been something I'm good at, as is flying blind with just a manual in hand.
Asking especially the seniors/principals/managers out there with any influence on the hiring process: what would you need to see from a no-degree, no-cert candidate to get them into the actual interview pipeline? I've received the advice that having projects on GitHub is almost a waste of time if you're not 'just' a developer, and that contributing to open source projects like CNCF is a solid move. I've heard mixed things about certs, but my company covers 10k of education benefits per year, so if there are any that are solid door-openers, I've always been a good test taker. My current skills are listed at the bottom in case there's a specific showcase anyone is aware of for a particular skill.
Open to any and all ideas/critique; I really just want to go hiking in the mountains with my friends on the regular again. Completely open to a harsh 'drop to a junior role' answer if that's the move. My current salary is 145, and I'm willing to cut that down if need be, though that's obviously not the primary plan of action.
I deeply appreciate any and all input on this.
My Skills Currently:
Python
Golang
Bash/Shell
RHEL, Rocky, and Ubuntu Linux
Windows Server
Mac admin experience but I hate it lol
Docker/K8s
SaltStack (learning Ansible on my own because tbh I hate Salt with a passion and want to move somewhere that doesn't use it)
Vagrant
Jenkins
Google Cloud Platform (learning this on my own because my project uses exactly zero cloud)
I also have a pretty solid grasp of both ChatGPT's and Gemini's APIs because of personal interest, but have had zero opportunity to use this in a professional capacity
I keep getting made Scrum Master when we lose ours and I am begrudgingly good at it
Know my way around the backend of Atlassian's suite way too well (Jira, Confluence, BitBucket)
https://redd.it/1hy668e
@r_devops
Sparing the most of the sob story behind everything, I wound up in DevOps on accident after the military with a resume that looks a lot like SysAd who transitioned to DevOps. The portion of the sob story you do get is a major life tragedy this last year led me to actually get my shit together after a very long spiral, and part of getting my shit together was realizing I was coasting on raw intelligence without really picking up new skills.
The issue now is basically I've been pedal to the medal in catching up skills I've neglected, taking studying seriously, all that fun stuff, but I wanted to get opinions on the best way to display that skill, especially since I don't have a degree of or certs of any kind, so I'm missing the foot in the door leverage having a degree gives you. I've always been a solid interviewer and test-taker, so I'm basically just looking for the best ways to get recruiting/hiring teams attention.
The reason I'm asking this is that I've decided I'm fed up with living on the opposite side of my country from my family/friends and only seeing them at most 4 times a year, so I want to relocate, but I'm not fully remote, so this requires getting back out in the hiring arena.
If I count my military experience since I first did a SysAd thing in MOS school, I'm looking at 10 YOE. I'm in a staff/mid position currently. I have a good understanding of everything on the roadmaps.sh DevOps roadmap and its getting better by the day thanks to finally turning my damn life around. Also, learning quickly has always been something I'm good at, as well as flying blind with just a manual in hand.
Asking especially the seniors/principals/managers out there who have any influence on the hiring process, what would seeing from a no-degree, no-cert candidate to get them into the actual interview pipeline. I've received the advice that having projects on GitHub is almost a waste of time if you're not 'just' a developer, and that contributing to open source projects like CNCF is solid move. I've heard mixed things about certs, but my company covers 10k of education benefits per year so if there's any that are solid door-openers, I've always been a good test taker. My current skills are listed at the bottom in case there's some specific showcase that anyone is aware of for a particular skill.
Open to any and all ideas/critique, really just want to go hiking in the mountains with my friends on the regular again. Completely open to a harsh 'drop to a junior role' answer if that is the move. My current salary is 145, and I'm willing to cut that down if need be, though that's obviously not the primary plan of action.
I deeply appreciate any and all input on this.
My Skills Currently:
Python
Golang
Bash/Shell
RHEL, Rocky, and Ubuntu Linux
Windows Server
Mac admin experience but I hate it lol
Docker/K8s
SaltStack(learning Ansible on my own because tbh I hate Salt with a passion and want to move somewhere that doesn't use it)
Vagrant
Jenkins
Google Cloud Platform(learning this on my own because my project uses exactly zero cloud)
I also have a pretty solid gasp of both ChatGPT and Gemini's API because of personal interest, but have had zero opportunity to use this in a professional capacity
I keep getting made Scrum Master when we lose ours and I am begrudgingly good at it
Know my way around the backend of Atlassian's suite way too well(Jira, Confluence, BitBucket)
https://redd.it/1hy668e
@r_devops
OpenTofu 1.9.0 is out with many long awaited features!
OpenTofu is an IaC tool used to manage your infrastructure across clouds and environments using a declarative language. This latest release includes provider iteration, the most requested feature so far!
You can learn more at https://opentofu.org/blog/opentofu-1-9-0/.
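As a rough sketch of what provider iteration enables (names are illustrative; the release post linked above has the authoritative syntax):

```hcl
variable "regions" {
  type    = set(string)
  default = ["us-east-1", "eu-west-1"]
}

# One aliased AWS provider configuration per region.
provider "aws" {
  alias    = "by_region"
  for_each = var.regions
  region   = each.value
}

# Instantiate the same module once per region, each bound to its
# own provider instance - no copy-pasted per-region blocks.
module "app" {
  source   = "./modules/app"
  for_each = var.regions
  providers = {
    aws = aws.by_region[each.key]
  }
}
```

This is the pattern that previously required duplicating provider and module blocks for every region.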
https://redd.it/1hy71pg
@r_devops
Notice of termination of your A Cloud Guru lifetime course access
Has anyone else received this and/or got more info? What is the Complete plan? It's not on the website.
-------
Dear customer,
Thank you for using our A Cloud Guru (ACG) product. We wanted to let you know we are launching new packages that now include the best of tech and cloud, all within one platform. These new packages include integrating A Cloud Guru into the Pluralsight platform.
Our records show that you have an ACG course lifetime access account. As part of integrating A Cloud Guru into the Pluralsight platform, we are terminating your lifetime course access license to the software-as-a-service (SaaS) offering of A Cloud Guru on February 1, 2025 due to the plan being retired. This move is made in accordance with the termination for convenience clause as outlined in section 14.2 of our https://legal.pluralsight.com/policies?name=individual-terms-of-use
Please note the following details regarding the termination:
* Termination date: The termination will occur February 1, 2025. After that date, you’ll no longer have access to the lifetime course.
* Data retrieval: We may delete any personal data associated with your personal A Cloud Guru account after the termination date and aren’t responsible for any loss or deletion thereafter.
* Outstanding obligations: You have no outstanding payment obligations related to the SaaS offering prior to the termination date.
Information regarding your other subscription
Your Pluralsight or ACG subscription will soon be upgraded at no additional cost to our new package, Complete. You will receive additional notifications about this change coming soon.
https://redd.it/1hy8jm7
@r_devops
Bucardo Alternatives?
I know this probably isn't a "true" DevOps question, but lucky me, I've inherited our ex-DBA's responsibilities! Anyway, we currently have an on-prem Postgres cluster in a master-standby setup using streaming replication. I'm looking to migrate this into RDS, specifically to replicate into RDS without disrupting our current master. Eventually, after testing is complete, we would cut over to the RDS instance. As far as we are concerned, the master is "untouchable".
I've been weighing my options:
Bucardo - seems not possible, as it would require adding triggers to tables, and I can't do any DDL on a secondary since they are read-only. It would have to be set up on the master (which is a no-no here). And the app/db is so fragile and sensitive to latency that everything would fall down (I'm working on fixing this next lol)
Streaming replication - can't do this into RDS
Logical replication - I don't think there is a way to set this up on one of my secondaries, as they are already hooked into the streaming setup? This option is a maybe, I guess, but I'm really unsure.
pg_dump/restore - not feasible, as it would require too much downtime, and my RDS instance needs to be fully in sync when it's time for cutover.
From what I can surmise, there are no real good options. Other than looking for a new job XD
I'm curious if anybody else has had a similar experience and how they overcame it. Thanks in advance!
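One hedged option worth noting: on PostgreSQL 16+, logical decoding from a standby is supported, so a subscription can pull changes from a secondary rather than the master - though it still needs one small, one-time CREATE PUBLICATION on the primary (and wal_level = logical there). A sketch with illustrative names and connection details:

```sql
-- Assumes PostgreSQL 16+; all names and hosts below are illustrative.
-- On the primary (the only DDL required there):
CREATE PUBLICATION rds_migration FOR ALL TABLES;

-- On the RDS target, after a schema-only dump/restore, with the
-- subscription pointed at a standby instead of the master:
CREATE SUBSCRIPTION rds_migration_sub
    CONNECTION 'host=standby-1.internal dbname=appdb user=repl_user'
    PUBLICATION rds_migration;
```

Whether one publication statement counts as "touching" the master is a judgment call, but it is far less invasive than Bucardo's per-table triggers.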
https://redd.it/1hyaefd
@r_devops
Socket.io connection being cancelled on fly.io
Hi everyone!
I have a single fly.io machine running my Python backend app, which does text-to-speech audio streaming using Socket.io to stream the audio chunks to the Next.js frontend.
Locally the audio streaming over websockets works, but when I deploy the backend to fly.io and the frontend to Vercel, the audio streaming randomly stops after ~2 seconds.
My Python backend uses RealtimeTTS (github.com/KoljaB/RealtimeTTS) with Azure Speech Services.
It's almost like something forces the audio streaming to stop after a few seconds. Could the fly.io load balancer or proxy be the reason here?
My fly.io config:
app = 'staging-polyglotpal'
primary_region = 'gru'

[processes]
  app = "./main.py"
  otelcollector = "otelcol-contrib --config /etc/otelcol/otel-config.yaml"

[build]
  dockerfile = "Dockerfile"

[[services]]
  name = "app"
  processes = ["app"]
  internal_port = 8000
  protocol = "tcp"
  auto_stop_machines = "suspend"
  auto_start_machines = true
  min_machines_running = 1
  force_https = true

  [services.concurrency]
    hard_limit = 20
    soft_limit = 15

  [[services.ports]]
    handlers = ["tls", "http"]
    tls_options = { "alpn" = ["h2", "http/1.1"], "versions" = ["TLSv1.2", "TLSv1.3"] }
    port = 443

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.http_checks]]
    interval = "10s"
    grace_period = "30s"
    method = "get"
    path = "/api/healthcheck/status"
    protocol = "http"
    timeout = "5s"
    tls_skip_verify = true

[[services]]
  name = "otelcollector"
  processes = ["otelcollector"]
  internal_port = 40000

  [[services.tcp_checks]]
    interval = "10s"
    grace_period = "30s"
    timeout = "5s"

[[vm]]
  size = "performance-1x"
  #size = "shared-cpu-1x"
  cpus = 1
  memory = "2gb"
  processes = ["app"]

[[vm]]
  size = "shared-cpu-1x"
  cpus = 1
  memory = "512mb"
  processes = ["otelcollector"]
My Socket.io config:
ping_interval=45,
ping_timeout=120,
transports: ["websocket"],
forceNew: false,
reconnection: true,
reconnectionAttempts: 3,
reconnectionDelay: 3000,
timeout: 10000,
I’m quite stuck right now and would really appreciate some feedback. Has anyone had a similar issue like this?
https://redd.it/1hy7odc
@r_devops
Errors when validating Packer .hcl template: Error: Extraneous label for build and Error: Unsupported block type
Using Packer version v1.11.2 with the AWS Plugin v1.3.4_x5.0 on a RHEL8 EC2 instance. Using it to try and build out a Windows 2019 AMI.
I have a background with Terraform and am trying to keep the .hcl template simple; however, I keep hitting two errors (Error: Unsupported block type and Error: Extraneous label for build) at various points and can't figure out from searching online how to fix them.
packer {
  required_version = ">=1.10.0"
  required_plugins {
    amazon = {
      version = ">=1.3.3"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

variables {
  ami_name      = "packer-windows-server-2019"
  instance_type = "g4dn.xlarge"
  region        = "us-east-1"
  vpc_id        = "vpc-abc123"
  subnet_id     = "us-east-1b"
  key_name      = "generic-keypair"
}

build "amazon-ebs" {
  region                      = "${var.region}"
  source_ami                  = "ami-04d76aa3cb20388b6"
  instance_type               = "${var.instance_type}"
  vpc_id                      = "${var.vpc_id}"
  subnet_id                   = "${var.subnet_id}"
  associate_public_ip_address = true
  ssh_username                = "Administrator"
  ssh_password                = "Password-goes-here"
  ssh_interface               = "winrm"
  winrm_protocol              = "https"
  winrm_transport             = "ntlm"
  communicator                = "winrm"
  ami_name                    = "${var.ami_name}"
  tags = {
    Name = "${var.ami_name}"
  }

  provisioner "shell" {
    inline = [
      "powershell.exe -Command \"Install-WindowsFeature -Name IIS -IncludeManagementTools\"",
      "powershell.exe -Command \"iisreset\"",
    ]
  }
}

https://redd.it/1hycz8e
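A likely cause of both errors, assuming current Packer HCL2 conventions: a build block takes no label (hence "Extraneous label for build"), builder arguments belong in a labeled source block that the build references, and variables are declared with individual variable blocks rather than a single variables block (a probable source of "Unsupported block type"). A rough sketch of that shape - illustrative names, not a drop-in fix:

```hcl
variable "region" {
  default = "us-east-1"
}

# Builder settings live in a labeled source block.
source "amazon-ebs" "windows2019" {
  region         = var.region
  source_ami     = "ami-04d76aa3cb20388b6"
  instance_type  = "g4dn.xlarge"
  communicator   = "winrm"
  winrm_protocol = "https"
  winrm_username = "Administrator"
  ami_name       = "packer-windows-server-2019"
}

# build takes no label; it references the source by address.
build {
  sources = ["source.amazon-ebs.windows2019"]

  provisioner "powershell" {
    inline = [
      "Install-WindowsFeature -Name IIS -IncludeManagementTools",
      "iisreset",
    ]
  }
}
```

Provisioners go inside the build block, and PowerShell steps can use the powershell provisioner directly instead of shelling out.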
@r_devops
AI-Driven DevOps?
Hello, my dear DevOps engineers.
I’m writing to gather information about platforms that you may be using that have already incorporated AI and are seeing significant benefits from it.
I’ve always been straightforward when creating deployment pipelines. I recently switched from GitLab to GitHub Actions due to a change in my role, but the issue persists.
This company's ecosystem is fragmented, and we urgently need to transition from manual deployments to automated ones (we are AWS-powered).
While I’m hesitant about DevOps AI platforms, the future seems to point in that direction. Therefore, I’m requesting your assistance in understanding how platforms like Harness or any other can alleviate the challenges associated with managing automated deployments.
In my opinion, we should begin by standardizing code at the architecture level to enable the utilization of reusable deployment patterns.
I would greatly appreciate any guidance on AI-driven solutions in the following areas:
Deployment platforms
Observability
Platforms that automatically generate test cases based on Epic and stories (as context)
Security
Any assistance you can provide would be invaluable.
Thank you.
https://redd.it/1hyfc3a
@r_devops
Key Management Question: Rewrite Docker ENV or Rewrite JSON config for script?
I have a simple AWS SES mailer node script where I read the AWS keys from a config file. The keys aren't in my repo, but on deploy I write them to a config file.
I know it's better practice to use ENV variables in Docker because they aren't written to a file in the container directly, but... I'm still rewriting the Dockerfile, and that persists after launch. It doesn't really feel like it solves the problem of "can someone actually see these keys in the deploy process?"
Any suggestions for a better way to handle the AWS keys on deploy? I'm using a simple script to do the actual deploy.
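The script in the post is Node, but the pattern is language-agnostic; a minimal Python sketch of the usual answer - inject the keys into the runtime environment (docker run -e / --env-file, or an orchestrator secret) instead of writing them into the Dockerfile or a config file, so they never persist in an image layer. Variable names follow the standard AWS SDK convention; everything else is illustrative:

```python
import os

def load_aws_credentials():
    """Read AWS credentials from the process environment rather than a
    config file baked into the image. Raises if either variable is
    missing, so a misconfigured deploy fails loudly at startup."""
    key_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if not key_id or not secret:
        raise RuntimeError("AWS credentials are not set in the environment")
    return {"aws_access_key_id": key_id, "aws_secret_access_key": secret}
```

Better still, if the container runs on EC2/ECS, an IAM role attached to the instance or task removes static keys from the deploy process entirely.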
https://redd.it/1hybtlw
@r_devops
Assigning instance role to my ec2 instance breaks network connectivity to ec2 endpoint and other aws endpoints
Hey all... really weird issue I am having.
Originally I was trying to set up an EKS cluster, and the nodes were not joining the cluster. I checked it out, and apparently nodeadm-config was unable to do an ec2:DescribeInstances -- but not due to permissions errors, instead due to a network timeout for the ec2.region.amazonaws.com endpoint. Indeed a direct curl to the endpoint just hangs. Other public services e.g. google.com, text.npr.org can be accessed. But stuff on amazonaws.com ... no go.
Through trial and error, I narrowed the issue down to the instance profile used for the ec2 instances. I have made several test ec2 instances, and it seems that adding an instance profile causes requests to the ec2 endpoint to hang.
Does anyone have any idea why this might be happening? Thanks in advance.
https://redd.it/1hyj8gd
@r_devops
40 to 50 year olds, please check in?
Are there DevOps engineers in this age group? I will be 40 next year. I have such a bad fear of being laid off; I don't know what I would do. I haven't been let go yet, and I've been socking away money, but I know a lot of work friends who got let go towards 50 in non-DevOps positions. I am a dad too, so I can't focus on leveling up all the time.
https://redd.it/1hyjz2g
@r_devops
Any free alternatives to SonarQube?
Any free alternatives to SonarQube? Looking for something free. I am already using prettier and ESLint.
https://redd.it/1hyn7uj
@r_devops
What terminal do you guys use as a devops engineer?
Looking to enhance my terminal experience. What terminal do you guys use? How has your experience been? What's the best feature you like about that terminal?
https://redd.it/1hyqg6p
@r_devops
Questionnaire on Log aggregation and monitoring for University Project
I’m working on a university project, and I’d really appreciate it if you could take a few minutes to answer this questionnaire, thanks. This questionnaire is mainly targeting sysadmins. https://forms.gle/cb7Vg1s8avGSvjJDA
https://redd.it/1hysbxz
@r_devops
AWS internal CI/CD best practices
I was setting up my own pipeline and I ran into this article when doing a quick Google search.
It said 80% CPU and 80% MEM on the rollback alarm. What do y'all think? In general, I think the fault-rate percentage depends on what your overall traffic volume is.
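To make the rollback-alarm idea concrete, a hedged CloudFormation sketch of an 80% CPU alarm - the resource name and the Auto Scaling group reference are illustrative, and the thresholds are just the figures quoted above:

```yaml
RollbackCpuAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Roll back the deployment on sustained high CPU
    Namespace: AWS/EC2
    MetricName: CPUUtilization
    Statistic: Average
    Period: 60
    EvaluationPeriods: 3
    Threshold: 80
    ComparisonOperator: GreaterThanThreshold
    Dimensions:
      - Name: AutoScalingGroupName
        Value: !Ref AppAutoScalingGroup
```

Requiring several consecutive evaluation periods keeps a brief CPU spike during deployment from triggering a spurious rollback.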
https://redd.it/1hysonc
@r_devops