Missed call from AWS HR
I recently went through a 5-round technical interview with AWS. HR called today to tell me the results, but I missed the call.
If anybody in this subreddit has experience with Amazon HR, please let me know what you think might happen.
I have been driving myself crazy thinking about the possibilities.
1. They reject me, and the call is a courtesy where they go into detail about how much I suck.
2. They tell me I have cleared the technical rounds and now have to go through an HR round.
3. They tell me I have cleared everything and want to work out the logistics.
What do you guys think?
https://redd.it/1elmjsf
@r_devops
Best side hustle/side job for a DevOps engineer?
Hey everyone,
I'm a DevOps engineer with about 3.5 years of experience working at a Fortune 500 company in the US. I mostly deal with Infrastructure as Code, pipelines, GitHub Actions, and some Python scripting—basically a mix of sys admin and coding/automation.
I have a decent salary and a great work-life balance, which gives me some extra time to explore side hustles. Earlier this year, I started teaching an online computer science class. It brings in an extra $1000 a month and takes about 9 hours a week, mostly grading assignments and helping students.
I'm looking for more ways to make some extra cash on the side without committing to another full-time job. Ideally, something that only takes a few hours a week and uses my cloud engineering, programming, or DevOps skills. I also get the occasional consulting gig through AlphaSights, but that's rare.
Any suggestions for side gigs or income streams that fit these criteria? I’d love to hear your ideas or experiences. Thanks!
https://redd.it/1elp1pw
@r_devops
DB access and all night pings
My devops team is based in the US, but about half of our engineers are in Serbia and India. We currently have no plans to add devops headcount at our international sites. As a result, overnight pages are extremely common and on-call is pretty brutal for us right now. The WORST part is that it’s usually minor issues the devs could fix on their own, but they don’t have access to our prod DBs, etc., so they can’t do anything until we come online.
I’m looking into ways to give them self-serve access to specific tables outside of normal working hours (it needs to be auditable, and restricting access to specific tables is a must due to compliance requirements). My wife, who wakes up every time I get paged, will be extremely grateful for any recs.
https://redd.it/1elp829
@r_devops
Adding subfolders in Artifactory Repository Tree while deploying
I am trying to add a subfolder to a repository tree and cannot find a way to do it. I’ve tried appending to the path name before adding the file I want to deploy, but nothing seems to help.
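For what it's worth, Artifactory generally creates missing intermediate folders automatically when you deploy to a full target path, so the subfolder can simply be part of the path you PUT to. A minimal sketch (the host, repo, and token names are placeholders, not from the post):

```javascript
// Hypothetical sketch: build the deploy target so the subfolder is part of
// the path itself; Artifactory creates missing intermediate folders on PUT.
function deployTarget(baseUrl, repo, subfolder, fileName) {
  return [baseUrl, 'artifactory', repo, subfolder, fileName].join('/');
}

// Usage with Node 18+ fetch (ARTIFACTORY_TOKEN is an assumed env var):
// const fs = require('fs');
// await fetch(deployTarget('https://host', 'my-repo', 'team-a/builds', 'app.jar'), {
//   method: 'PUT',
//   headers: { Authorization: `Bearer ${process.env.ARTIFACTORY_TOKEN}` },
//   body: fs.readFileSync('app.jar'),
// });
```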
https://redd.it/1elo3sc
@r_devops
How do you describe the proper ways to deploy to production?
I want to learn the paths for deploying apps to production, and also have develop/staging environments too.
I’m trying this one, for example:
A GitHub project with a Dockerfile; a GitHub workflow builds that Dockerfile, pushes the image to GHCR, and then Railway picks up that build and deploys it.
I can make it work fine with the environment vars, but the secrets are giving me a hard time. I think if someone gets the Docker image they can in theory see the secrets, so they would no longer be secrets, right? Do I copy/create the secrets folder in the Docker image during the build process (Docker or GitHub workflow)?
/run/secrets/api_key
/run/secrets/password
> node index.js
Aug 06 15:13:47  Server is running on port https://localhost:6684/
Aug 06 15:13:47  Environment Variables:
Aug 06 15:13:47  APP_VERSION: 1.0.1
Aug 06 15:13:47  BUILD_ENV: development
Aug 06 15:13:47  NODE_ENV: production
Aug 06 15:13:47  PORT: 6684
Aug 06 15:13:47  Error: ENOENT: no such file or directory, open '/run/secrets/db_password'
name: CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install

      # - name: Build the app
      #   run: npm run build

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64
          push: true
          tags: ghcr.io/${{ github.repository_owner }}/simple-web-server:latest
          build-args: |
            APP_VERSION=${{ env.APP_VERSION }}
            BUILD_ENV=${{ env.BUILD_ENV }}
          secrets: |
            db_password=${{ secrets.DB_PASSWORD }}
            api_key=${{ secrets.API_KEY }}
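One note on the secrets passed to the build step, hedged as my understanding of build-push-action rather than anything stated in the post: BuildKit exposes these only to RUN instructions that explicitly mount them; they are not baked into image layers and are not present at runtime. The Dockerfile would need something like this hypothetical fragment:

```dockerfile
# Hypothetical fragment: the secret is mounted at /run/secrets/db_password
# only for the duration of this RUN step and is never stored in a layer.
RUN --mount=type=secret,id=db_password \
    DB_PASSWORD="$(cat /run/secrets/db_password)" && \
    echo "secret is available to this build step only"
```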
# Stage 1: Build the application
FROM node:20 AS builder
# Set build-time arguments
ARG APP_VERSION
ARG BUILD_ENV
# Log build-time arguments
RUN echo "Building with APP_VERSION=${APP_VERSION} and BUILD_ENV=${BUILD_ENV}"
# Set environment variables
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the application code to the container
COPY package*.json ./
COPY . .
# Install dependencies
RUN npm install
# Stage 2: Run the application
FROM node:20
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the build artifacts from the builder stage
COPY --from=builder /app ./
# Log environment variables
RUN echo "Running with APP_VERSION=${APP_VERSION}, BUILD_ENV=${BUILD_ENV}, NODE_ENV=${NODE_ENV}, and PORT=${PORT}"
# Expose the application port
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
This is the one I use locally; it works fine because I copy the secrets in:
Dockerfile.local
# Stage 1: Build the application
FROM node:20 AS builder
# Set build-time arguments
ARG APP_VERSION
ARG BUILD_ENV
# Log build-time arguments
RUN echo "Building with APP_VERSION=${APP_VERSION} and BUILD_ENV=${BUILD_ENV}"
# Set environment variables
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the application code to the container
COPY package*.json ./
COPY . .
# Install dependencies
RUN npm install
# Stage 2: Run the application
FROM node:20
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the build artifacts from the builder stage
COPY --from=builder /app ./
# Log environment variables
RUN echo "Running with APP_VERSION=${APP_VERSION}, BUILD_ENV=${BUILD_ENV}, NODE_ENV=${NODE_ENV}, and PORT=${PORT}"
# Create Environment Variables in GitHub Actions:
# Go to your GitHub repository.
#
# Click on Settings > Secrets and Variables > Actions.
#
# Add the following variables:
#
# APP_VERSION
# BUILD_ENV
#
# Add the following secrets:
#
# DB_PASSWORD
# API_KEY
# Copy secrets for local testing
COPY secrets/db_password /run/secrets/db_password
COPY secrets/api_key /run/secrets/api_key
# Expose the application port
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
https://redd.it/1elt24o
@r_devops
What tools are there to manage autoscaling for kafka?
I'm familiar with Cruise Control, but I wonder what options are out there and which are the most popular. Are they fully automatic, or do they require some level of continuous manual work?
https://redd.it/1eluf1q
@r_devops
Thinking about getting an extra job
I currently work for a company, and lately, the demand has been quite low to the point where I'm convinced I can handle another job.
However, I’m still not sure about the approach I should take when applying for another position. I’ve done some interviews where I mentioned that I was already working and wanted a second job, but that didn't go very well, haha.
My contract doesn’t have an exclusivity clause, but I wouldn't want them to know that I work somewhere else. I know some companies do a reference check and might end up contacting my current employer.
Any tips on how to proceed? Should I lie about being employed? Tell the truth?
https://redd.it/1elv30q
@r_devops
Challenges with CI/CD permissions management in an OSS project: GitHub Actions
Hi all :)
We have an OSS project sitting under an OSS organization, and I've encountered a challenge with our CI/CD workflows; hoping to get some insights.
The project is on GitHub, and it is a multilingual client library for ValKey/Redis OSS.
We are a team working for one of the big cloud companies, mainly dedicated to this project (not owned by the company, fully open source).
Most of the workflows are simple ones that can run on a regular GitHub-hosted machine.
But some of our CI tests involve interacting with our company's service, in order to test at scale and verify that the project also works when the server is the cloud-hosted version.
The issue is that, in order to interact with the service safely, we need to hold the keys in the repo secrets, and those are available only from the main repo.
The maintainers don't work on the main repo but on their forks, opening PRs from the forks to the main repo, so their PRs don't have access to the secrets and CI cannot run all the tests.
It is an OSS project, so we have to find a way to keep the secrets safe but still make them available to CI triggered from a maintainer's fork, after approval from one of the organization (ValKey) members.
Any ideas, offers, or insights?
Maybe somebody even wants to join the community and help us with DevOps challenges? :P
https://redd.it/1elwmy3
@r_devops
Cypher for Kubernetes API: An expressive new way to work with k8s
Hey everybody 👋
I created this tool six months ago and it's been a daily driver for me since.
It lets me use a syntax similar to Cypher (Neo4j's query language, which I adore) to perform CRUD operations on K8s.
My main use for this is examining resources; crafting custom JSON payloads with data from multiple resource kinds is a breeze.
This is an alpha release, and while Cyphernetes has been in real-world use by me and a handful of other folks, test thoroughly before performing create/update/delete operations in production.
https://redd.it/1elvb3v
@r_devops
GitHub - AvitalTamir/cyphernetes: A Kubernetes Query Language
Ideas for a local development CICD pipeline
TL;DR: I have some ideas for how I want to handle a local CICD pipeline, in order to increase developers' speed in coding and testing.
What problem I want to solve: developers want to test their changes locally and as fast as possible, but the application they are working on has dependencies, and their resources are limited (RAM, CPU...). Also, they don't want to wait for the remote CICD pipeline to work its magic and finally tell them it failed.
What solution I want to build: we create an application dependencies graph as a JSON object. Our local CICD script will read this graph and build everything the application needs. Note: we shouldn't have to rebuild everything each time, only the base application.
How I see it working :
// Application building //
1. Pull the dev configurations from an outside source or a .env (if you pull from outside, the credentials to connect should also be in the .env).
2. Build the application image for Docker (the image is not pushed, and is deleted on every rebuild).
3. Launch the container (your application needs to be able to wait on its dependencies).
4. We should be able to launch the containers locally, on a local VM, on a remote machine (via SSH?), or on a "dev cloud".
// Application dependencies building //
1. Resolve dependencies, pulling the necessary code or build steps for all of them and creating subfolders (don't forget to .gitignore them).
2. Pull the configuration for each dependency, like before.
3. Build images if necessary (also not pushed to a registry).
4. Launch the containers.
5. We can do extra steps, like including a dataset in the dependencies for later testing.
// Recursive dependencies building //
1. Repeat "Application dependencies building" recursively; each dependency needs to have its own dependencies graph.
2. Allow the developer to decide the depth of dependencies to resolve.
// Testing //
1. Now the developer can launch any automatic or manual tests, linters, etc. that they want.
2. All the tests remain optional before pushing (no pre-commit / pre-push).
3. The tests available to the developer should be the same as in the remote CICD pipeline, so they can be confident they are pushing correct code (if they ran the tests...).
4. A "nuke all and push to Git" button.
My question: does something like this already exist? This whole pipeline needs to be its own tool. I could probably do all of this with bash, git, Docker, and Vagrant.
Note: I do all of this for fun in my free time; at my company we do things very differently.
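The recursive, depth-limited resolution described above could be sketched like this (the graph shape { name, deps } is my own assumption, not from the post):

```javascript
// Hypothetical sketch of the dependencies-graph walk: read a JSON graph of
// the shape { name, deps: [...] } and emit a build order, dependencies first,
// honoring a developer-chosen depth limit.
function buildOrder(graph, maxDepth = Infinity) {
  const order = [];
  const seen = new Set(); // don't rebuild a shared dependency twice
  function visit(node, depth) {
    if (depth > maxDepth || seen.has(node.name)) return;
    seen.add(node.name);
    for (const dep of node.deps ?? []) visit(dep, depth + 1);
    order.push(node.name); // a node builds only after its dependencies
  }
  visit(graph, 0);
  return order;
}
```

For example, an app depending on a db (which depends on a base image) and a cache would build base, then db, then cache, then the app itself; with a depth limit of 1, the base image is skipped.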
https://redd.it/1elzf5l
@r_devops
5YOE aws engineer, any good Azure crash courses?
Hello, I have been working in cloud/devops for 5 years, primarily with AWS. I got laid off and have a 3rd-round interview with a place that is mostly an Azure shop. I understand cloud computing well; I just need Azure-specific info, and I have about a week to study. Are there any recommended courses that would help for an interview?
I did the AZ-104 cert 2 years ago but don't remember anything.
https://redd.it/1em5llh
@r_devops
Do you have a strategy for dealing with 100s of alerts/rules?
Started a new job recently and their alerting is a bit of a mess: we have default alerts enabled in tools like Datadog and Lacework, monitoring a few dozen AWS and GCP accounts.
Hoping for some help/advice on how you guys have approached the high-level strategy around alerting. I think it will start with an audit of what rules are enabled and where (there seems to be some overlap).
Maybe categorising alerts at a high level and churning through them to assess whether each one is useful?
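One way to start that audit: dump the enabled rules from each tool and group them by a normalized name so likely duplicates surface for review. A rough sketch (the rule shape { tool, name } is an assumption, not from the post):

```javascript
// Hypothetical audit sketch: group alert rules from several tools by a
// normalized name so (near-)duplicate rules surface for review.
function findOverlaps(rules) {
  const byKey = new Map();
  for (const rule of rules) {
    // "High CPU" and "high-cpu" normalize to the same key
    const key = rule.name.toLowerCase().replace(/[^a-z0-9]+/g, ' ').trim();
    if (!byKey.has(key)) byKey.set(key, []);
    byKey.get(key).push(rule);
  }
  return [...byKey.values()].filter((group) => group.length > 1);
}
```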
https://redd.it/1em6kf3
@r_devops
Side Projects to Deepen My Knowledge
Hello everyone,
I'm currently studying computer science and I'm most interested in DevOps. So far I've learned and used various tools and technologies such as K8s, Terraform, Ansible, AWS, monitoring solutions, CI/CD, GitOps tools, and some programming languages (recently picked up Go as well).
I'm now interested in creating a side project to deepen my knowledge in the mentioned things above and also acquire new skills.
Do you have any ideas or suggestions for a project?
Thanks for reading. Suggestions are appreciated!
https://redd.it/1em7gca
@r_devops
Azure DevOps Server pre-production upgrade - Two machines in application tiers?
Hi. I have done a pre-production upgrade of Azure DevOps Server 2020 to Azure DevOps Server 2022 on a new Windows Server. The new Windows Server (new DevOps Server) is isolated at the network level. I changed the IDs as the MS documentation states. I deployed (upgraded) the new DevOps Server with the pre-production option. Everything was successful, everything is working, no errors during deployment, no errors in Event Viewer.
However, there is one thing... When I open the Azure DevOps Server Administration Console on the new Windows Server (new DevOps Server), I see two servers in the Application Tiers section. The first is the new Windows Server, with version Azure DevOps Server 2022. The second is the actual production Windows Server, with version Azure DevOps Server 2020. Why? It doesn't make sense to me. Does this view have something to do with the fact that I did a pre-production upgrade? No network connection is even available between these two servers (I can see it in netstat). Why is the actual production machine in Application Tiers?
https://redd.it/1em97md
@r_devops
Do phrases "lower environments" and "pre-production environments" refer to the same thing?
Somehow, I have not come across the term "lower environment" earlier in my career, and now I'm struggling to understand what it means and where it comes from.
Is it safe to say that production and the environments that stand in for production (canaries?) are "upper environments", whereas local development, CI environments, and test environments are all "lower environments", a.k.a. "pre-production environments"?
Also, which of these categories does staging fit in?
Bonus points for any knowledge of how this lower/upper distinction came about historically.
Thanks.
https://redd.it/1em9pxg
@r_devops
From the devops community on Reddit
Explore this post and more from the devops community
How much do you know about the applications you maintain ?
Hey all
Junior engineer here in a DevOps role (read: junior sysadmin who also deals with miscellaneous requests related to our application).
I was just curious how well you all know the applications you help develop, since in our company I've found we are quite detached from a lot of aspects of the development.
I'll sometimes be dragged into calls and feel very useless, because when debugging an issue, if it's not infrastructure, networking, or pipeline related, I won't have much to offer.
I've been learning to code for the past few months, as I feel this is a major gap in my skills.
What are your responsibilities in your role?
https://redd.it/1embl37
@r_devops
What to learn first? AWS or Terraform
I bought two courses from KodeKloud to learn about AWS and how to use Terraform with AWS.
While I have a little experience with AWS, I still need to learn a lot before I can say I can manage tasks in an AWS environment.
And with Terraform I have zero experience, both professionally and as a student.
What would you choose to learn first?
Thanks!
https://redd.it/1emdrhc
@r_devops
Interview advice
I seem to be having a harder time in interviews now than I was a year or two ago, and I don't really understand it. I'm far better at my job than I was in 2021 or 2022, when I was getting tons of interviews and it was going really well. I got some really decent offers, but I got promoted at my job and chose to stay for less money because I was comfortable here. Now, I'm mostly self taught. I took a coding bootcamp during the pandemic. I got hammered in an interview on system architecture questions, and it went so badly that I took some Udemy courses about AWS and studied really hard. Then I interviewed for a cloud engineer role at a company that had just started building environments in AWS; I'd been playing with the AWS CDK and making my own sandbox environments. It was an older company, most of the people had been there for 10+ years, and they were doing things in very manual, very backwards ways. And this is a single-tenant SaaS product with 5,000-plus servers. I actually knew a lot more about AWS than they did, so they hired me. I was criminally underpaid, like $50k, but they'd been hiring entry-level IT people. I'm up to $80k plus bonuses and equity, which is a little better, but I'm doing DevOps work in addition to CloudOps work. They keep telling me I have a raise and promotion coming but they can't do it right now due to budget constraints related to higher interest rates and slowing growth. Which is why I'm looking for something new.
So I came on knowing no PowerShell, no Bash, no Python, just JavaScript, but I started automating everything. Kind of embarrassing, but first I built a web app that triggered simple tasks. I kept researching things. I built more tools. I discovered SSM automations. I learned PowerShell. I started using Terraform to manage New Relic alerts. Then I was tasked with provisioning new SMTP servers, and I used Ansible to configure them. I started writing self-healing Lambdas, managed with Terraform, that would start stopped services and perform simple troubleshooting steps based on the alert that was received. I taught the team Git. I routinely hold training sessions. I implemented code reviews. I've got this whole team of IT people working as engineers now. We just migrated 3,500 servers to AWS. I did not come up with the migration plan, and I think we did it in the most difficult and painful way possible, but I pulled it off. I worked with consultants from AWS to set up the new VPC, the subnets, the network routes, the domain, and set it all up in Terraform. I handled all the Active Directory stuff; I didn't even know what Active Directory was until I started working here. I configured all the shared services and wrote the SSM docs that run during migration to update config files to point at the needed services in the new VPC.
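The self-healing Lambda pattern mentioned above could look roughly like this: an alert webhook invokes the Lambda, which maps the alert condition to a remediation command pushed to the affected instance via SSM Run Command. The service names, alert conditions, and payload shape below are invented for illustration; only the built-in `AWS-RunPowerShellScript` document and boto3's `send_command` call are real AWS pieces.

```python
from typing import Optional

# Alert condition -> PowerShell remediation to run on the instance.
# Condition names and service names are hypothetical examples.
RUNBOOK = {
    "smtp-service-stopped": 'Start-Service -Name "SmtpSvc"',
    "w3svc-stopped":        'Start-Service -Name "W3SVC"',
}

def build_remediation(alert: dict) -> Optional[dict]:
    """Translate an alert payload into SSM send_command arguments,
    or None if there is no automated fix for this condition."""
    command = RUNBOOK.get(alert.get("condition", ""))
    if command is None:
        return None
    return {
        "InstanceIds": [alert["instance_id"]],
        "DocumentName": "AWS-RunPowerShellScript",  # built-in SSM document
        "Parameters": {"commands": [command]},
    }

def handler(event, context):
    args = build_remediation(event)
    if args is None:
        return {"status": "no-runbook"}
    # In the real Lambda this is where you would call:
    #   import boto3
    #   boto3.client("ssm").send_command(**args)
    return {"status": "remediated", "ssm_args": args}
```

Keeping the alert-to-command mapping as plain data makes the runbook easy to review and extend without touching the dispatch logic.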
I've done all this stuff, but I get hammered in interviews over basically trivia. I don't know the right terms for things, or I struggle to explain how to do things without documentation right in front of me. But I know exactly where to look. I remember more or less what to do; I just can't speak to it in detail off the top of my head without a reference. And I get nervous, so a lot of the things I do know escape me. Or I'll get asked a question, say I don't know, and then realize I actually did know; I was just caught off guard because I wasn't familiar with how the question was being asked. But my work should speak for itself. Interviews where I'm asked to whiteboard a solution or do a code challenge go significantly better for me than when I'm just getting drilled with DevOps trivia. It's like I have knowledge and understanding of this stuff; I just don't have the formal education. I can explain it in code or in a diagram, but putting it into the proper words is hard for me. I don't understand it, though, because before I was getting offers and now I'm barely making it past the phone screen. And I'm a much better engineer than I was two years ago. And this week, three jobs I was interviewing for all told me they decided not to fill the position at all. I'm very frustrated. I've probably applied to 100 jobs: 20 phone screens, maybe 8 interviews, no offers. Two years ago it was like 30 jobs, 10 phone screens, 7 interviews, 3 offers.
https://redd.it/1emdppb
@r_devops
Transitioning to DevOps - Seeking Advice and Validation
Hi everyone,
I'm a System Engineer with 5-6 years of experience, starting my career in 1st and 2nd line support. I spent about 3-4 years focused on support roles and various projects before moving into a Junior System Engineer role, which was essentially still support but with greater access to servers, O365 tenants, Azure, firewalls, etc.
Later, I transitioned to freelancing and worked as a System Engineer at an international company. My responsibilities included server management, patch management, some networking, project management, IAM, and L3 support.
Recently, I delved into SCCM and Intune, becoming part of a team dedicated to SCCM and Intune management. We handled everything possible in SCCM or Intune except setting up the environment. After 10 months, I took on a new project at a major client where my primary role was Mobile Device Management and L3 support.
The client is high-level and has multiple ongoing projects, one of which was deploying Azure Virtual Desktop using Infrastructure as Code (IaC). This project was handed to me because I had previously built AVD using the Azure Portal. Initially, I struggled to understand the IaC approach. I reviewed the code built by the previous engineer and started building everything from scratch in a test environment using IaC tools like Terraform, Ansible, and GitLab.
In just 4 weeks, I learned a lot. I successfully created resources like resource groups, VNets, subnets, network settings, workspaces, virtual machines, key vaults, Azure Image Gallery deployments, storage accounts, FSLogix, etc. – basically everything related to Azure Virtual Desktop.
Surprisingly, I found that I really enjoy working with Terraform and Ansible. Although I used to dislike software engineering, this blend of coding and cloud engineering has been incredibly engaging. I've been so absorbed that I almost forgot about the world outside.
Now, I'm wondering if this path aligns with DevOps. If I know Azure, Terraform, Ansible, Python, Linux, and dive into CI/CD pipelines and Docker, am I on the right track to get into DevOps?
Looking forward to your insight.
https://redd.it/1emh9qr
@r_devops