Remote state isolation with terraform workspaces for multi-account deployments
I decided to try Terraform workspaces instead of a wrapper script for managing environments, and especially the remote state. I wrote a small blog post on how to segregate access to the remote state, given that Terraform creates every workspace's state key in a single bucket: https://ifritltd.com/2023/03/05/remote-state-isolation-in-multiple-environments-with-terraform-workspaces/
https://redd.it/11kikb3
@r_devops
Posted by u/kenych - No votes and no comments
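For readers who haven't hit this before, the crux is that the S3 backend stores every workspace's state in the same bucket, under <workspace_key_prefix>/<workspace>/<key>. A minimal sketch (bucket name and prefix are hypothetical, not necessarily the post's exact setup):

```hcl
terraform {
  backend "s3" {
    bucket               = "my-tf-state"             # shared across all workspaces
    key                  = "infra/terraform.tfstate"
    region               = "eu-west-1"
    workspace_key_prefix = "env"                     # default is "env:"
  }
}
```

Isolation then comes from IAM rather than Terraform itself: each account's role can be limited to s3:GetObject/s3:PutObject on arn:aws:s3:::my-tf-state/env/<its-workspace>/*, so a dev credential cannot touch the prod state key.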
Collecting metrics for cron jobs on a per-execution basis with Prometheus?
I have several cron jobs that last from a couple of minutes to several hours. I want to emit time-series data (such as latency from HTTP calls made by the cron job) to Prometheus. However, I also want to be able to aggregate the time series down to the level of a specific job execution. For example, if a job executes twice, I want to be able to view the quartiles for the first execution and then also view the quartiles for the second execution. My initial thought was to use two labels: job_id and job_execution_id. However, this would lead to high cardinality. Is Prometheus still the right solution for this?
https://redd.it/11ktfcw
@r_devops
Posted by u/roadbiking19 - No votes and no comments
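It is worth putting rough numbers on the cardinality concern. A stdlib-only back-of-envelope estimate (the job counts below are made up for illustration):

```python
# Why a job_execution_id label explodes cardinality: every new execution
# mints a brand-new set of time series, one per histogram bucket, plus
# the _sum and _count series.

def series_count(jobs: int, executions: int, buckets: int) -> int:
    """Distinct series created by a histogram carrying job_id and
    job_execution_id labels: one per (job, execution, bucket),
    plus _sum and _count per (job, execution)."""
    return jobs * executions * (buckets + 2)

# 10 jobs, 50 executions per day each, a 12-bucket latency histogram:
per_day = series_count(jobs=10, executions=50, buckets=12)
print(per_day)  # 7000 new series per day, and it never stops growing
```

Common workarounds are pushing a per-run summary to the Pushgateway keyed by job only, or keeping execution-level detail in logs/traces and using Prometheus for fleet-level quantiles.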
CI/CD pipeline using Docker Containers into Heroku - what's actually being deployed?
I've started a basic hobby project of a Python Flask API and, in trying out new things, I've found myself using Docker containers in a CI/CD pipeline (Semaphore) to deploy to Heroku. It has been a really interesting/frustrating learning experience.
I've reached the point where I can get my containers to run nicely locally, build and deploy to Docker Hub when there are changes in the main branch, lint the code depending on environment, run unit and end-to-end tests, and then - once everything is green - deploy to Heroku. Frustratingly, the app fails in Heroku with the generic 510 error code.
I'm about to drop back into the troubleshooting loop, and will no doubt learn more useful stuff, but my knowledge of the CI/CD space is so limited I don't quite know where the issue may be. I'd be really grateful if someone could check my mental model of what's currently happening.
The app consists of 5 separate Docker containers:
Webserver/proxy (Nginx, gunicorn)
Web (Flask app)
Persistent DB for data (Postgres)
Worker (Celery)
Cache DB for the worker (Redis)
Each of these has its own Dockerfile, and everything is coordinated via a docker-compose.yml. docker-compose pulls the env variables from a specific .env file; the command line is used to reference the start.sh file for each container (this handles any start-up instructions for the container); and each container is tagged with the same image ([dockerhubusername]/[dockerhubproject]:latest).
This works well locally, and I have pushed all 5 containers to DockerHub as one image.
My CI/CD pipeline consists of 3 blocks:
build: checks for changes in git, builds a new image (using docker-compose build), tags it, pushes it back to DockerHub
test: gets the latest image from dockerhub, runs the images (using docker-compose up), runs the tests
deploy: uses a heroku.yml file to pull the image from dockerhub, log-in to heroku:container, push the image to heroku container, set the stack to the container, and then release it as a web process
This all runs successfully, but the app fails on Heroku with the 510 code and nothing else. Heroku states that it's building the app using Python 3.
What I am having difficulty conceptualising, is what's actually being pushed to Heroku?
Am I deploying the 5 separate containers in a 'down' state? In which case, surely I need to use something analogous to docker-compose in the heroku.yml to spin them up, run start-up commands, etc.
Am I deploying one container image which contains the 5 containers in a down state, so now I need to spin up this overarching 'app' container and then run docker-compose in it over on Heroku?
Given that I have web, worker, and DB containers, do I need to split them into different processes over on Heroku, deploying my web container to a web process, worker to a worker process, etc.?
Should I have built each container as a separate image in Docker Hub, deployed them individually to Heroku, and coordinated/orchestrated over on Heroku (or via the heroku.yml)?
Once I know which mental model is closest, I'm happy to dig around and play with config and variables, I'm just stuck at the moment knowing which thread to pull. Any pointers gratefully received.
https://redd.it/11ku15p
@r_devops
Posted by u/hitman_cat - No votes and 1 comment
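For what it's worth, Heroku's container stack does not run docker-compose at all: heroku.yml builds one image per process type from its own Dockerfile and runs each as a separate dyno, with Postgres/Redis usually coming from add-ons rather than containers. A hedged sketch (the paths and commands below are hypothetical, not the poster's actual files):

```yaml
# heroku.yml - one image per process type; no compose, no nginx dyno
# (the router terminates HTTP, gunicorn binds to $PORT directly)
build:
  docker:
    web: web/Dockerfile        # Flask app served by gunicorn
    worker: worker/Dockerfile  # Celery worker
run:
  web: gunicorn app:app --bind 0.0.0.0:$PORT
  worker: celery -A app.celery worker
```

That corresponds to the last mental model in the post: separate images, split into web and worker processes, with the databases provisioned outside the container build.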
How to create a github actions workflow that automatically creates a PR and merges it
I have a protected main branch in my organization, while the other branches are not. The problem is I want the commits to be done in the name of an action bot that directly shows it is an automated action. However, I couldn't find a reasonable way to do it. The closest I got was PRs created on a push trigger by the bot, which need to be manually approved by an admin. But then another PR was created when the first one was automatically merged... and the author was the admin, not the bot.
I can't have 2 workflows because an automated action can't trigger another. I will try using GitHub Actions, but it still seems like they will trigger the workflow to create a PR again for each automatic PR merge.
For a bit of context, the PR contains a build that always generates a new file, but it should run only on new pushes. Can anyone provide some guidance?
https://redd.it/11kv16y
@r_devops
Posted by u/National-Evidence-79 - No votes and no comments
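One common pattern here (a sketch, with hypothetical branch/secret names): events produced with the default GITHUB_TOKEN deliberately do not trigger other workflows, which is why the chain only continued when the admin acted. Checking out and pushing with a bot account's PAT keeps the bot as the author and still lets downstream automation fire:

```yaml
name: auto-pr
on:
  push:
    branches: [develop]
jobs:
  build-and-pr:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          token: ${{ secrets.BOT_PAT }}   # bot account's personal access token
      - run: ./build.sh                    # generates the new file
      - uses: peter-evans/create-pull-request@v5
        with:
          token: ${{ secrets.BOT_PAT }}
          branch: auto/build-output
          title: "Automated build output"
```

A GitHub App installation token works the same way and avoids a personal account; either one makes the resulting PR and merge events trigger workflows, unlike GITHUB_TOKEN.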
k8s Simple Lambda http proxy
For a POC I want to be able to seamlessly call Lambdas in AWS from inside a kube cluster, without having to use API Gateway / FunctionURLs etc.
I've tested with Kong ingress controller, but this will only proxy external requests to lambdas as it acts on the ingress layer, so doesn't work for internal requests from inside the cluster.
I did a very quick spike to create a Go HTTP server that listens for requests, and just calls "invokeFunction" and returns the response. This could then run in a Pod behind a Service and invisibly call lambda functions as if they were another k8s service.
But I'm sure something like this already exists: a simple single binary that proxies Lambda requests. If it doesn't, I'll write something myself.
notes:
I've worked with k8s for a few years, have moved jobs, and never had to deal with this situation before.
The Kong Lambda proxy does exactly this; it just calls invokeFunction. It's written in Lua and is part of their ingress controller.
I don't want the hassle of API Gateway due to internal business reasons and the amount of Terraform needed, which I just hate.
Function URLs are public, unless you want to mess with IAM permissions / signed requests from clients, and I want to avoid that.
https://redd.it/11kwvcm
@r_devops
Posted by u/cuotos - No votes and no comments
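The spike described above is small enough to sketch. Here is the same idea in Python rather than Go (the path-to-function routing is an assumption, and the invoker is injectable so the proxy logic can be exercised without AWS credentials):

```python
# A pod-local HTTP proxy that maps the request path to a Lambda function
# name and returns the invocation payload. boto3 is only imported when the
# real invoker is used.
from http.server import BaseHTTPRequestHandler, HTTPServer

def boto3_invoker(function_name: str, payload: bytes) -> bytes:
    import boto3  # assumed available where this actually runs
    client = boto3.client("lambda")
    resp = client.invoke(FunctionName=function_name, Payload=payload)
    return resp["Payload"].read()

def make_handler(invoke):
    class LambdaProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            fn = self.path.strip("/")  # e.g. POST /my-function
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            result = invoke(fn, body)
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(result)
    return LambdaProxy

# To serve for real, behind a k8s Service:
# HTTPServer(("", 8080), make_handler(boto3_invoker)).serve_forever()
```

The pod would need IAM permissions for lambda:InvokeFunction (e.g. via IRSA on EKS), which keeps the functions private without Function URLs or API Gateway.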
Argo CD vs GoCD
In the near future, I will try to convince the client to change from GoCD to another, more community-supported tool. The first thing that comes to mind is Argo CD. I'm trying to find a comparison of these tools but can't find anything like that. Do you know of any sources where such a comparison has been made?
If not, does anyone have experience with these tools and can share their thoughts?
https://redd.it/11kwj0l
@r_devops
Posted by u/czerniga_it - No votes and 2 comments
Artifactory with MFA
Hello ppl,
I am looking for a way to harden our Artifactory by implementing an MFA solution. Right now JFrog is being used, but the current setup is that anonymous downloads are enabled and a standard username and password are used for uploading.
Going forward, we would like to keep the non-UI interaction anonymous and require MFA for UI activity. Any suggestions?
https://redd.it/11kxjih
@r_devops
Posted by u/CrazyBrownDog - No votes and 2 comments
What's your on call rate?
My employer is preparing to introduce an on-call scheme. We're negotiating contracts now. They offered 150% of normal wage during on-call times, and for each hour spent on an incident, the same number of hours added to holidays.
This is the initial offer; it doesn't cover what happens when an incident lasts longer than the on-call hours.
https://redd.it/11kzqrs
@r_devops
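To compare offers like this, it helps to put numbers on one reading of it (assumed here: 150% of the hourly wage for standby hours, plus one holiday hour per incident hour; the wage and hours below are made up):

```python
# Value of a single on-call shift under the offer as described.
def oncall_value(hourly_wage: float, standby_hours: float,
                 incident_hours: float) -> dict:
    return {
        "standby_pay": hourly_wage * 1.5 * standby_hours,
        "holiday_hours_earned": incident_hours,
    }

# A 16-hour overnight standby with a 2-hour incident, at a 30/h wage:
print(oncall_value(30, 16, 2))  # {'standby_pay': 720.0, 'holiday_hours_earned': 2}
```

If the 150% applies only to hours actually worked rather than all standby hours, the shift is worth far less, which is exactly the kind of ambiguity worth pinning down in the contract.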
What do you use to detect/be alerted changes to Kubernetes deployments?
Recently, some people unfortunately misused their access and ran commands on one cluster instead of the other. The problem is that they have an actual need for the permissions, so I'm trying to see how to get notified when someone changes something on the prod clusters, for example.
I tried searching, but all I could find are solutions that require their own deployments/maintenance. Is there a lightweight method of detecting the changes?
https://redd.it/11l1uib
@r_devops
Posted by u/GureenRyuu - No votes and 10 comments
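One relatively lightweight option, if you can reach the control plane, is Kubernetes API server audit logging scoped to mutating verbs, then alerting on those log lines with whatever log pipeline already exists. A sketch of a narrow audit policy (managed clusters expose this differently, e.g. via the cloud provider's audit log stream):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record who changed workloads, with full request/response bodies
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: "apps"
        resources: ["deployments", "statefulsets", "daemonsets"]
  # Ignore everything else to keep volume down
  - level: None
```

This avoids running an extra controller in the cluster, at the cost of needing audit-log access; in-cluster tools exist too but they bring their own deployments, which the post is trying to avoid.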
Service mesh users… a little help, maybe?
A few days back, I posted a Google form here.
I wanted to know a few things about DevOps/SREs who use Istio service mesh.
Specifically, I wanted to know their challenges and wishes to make it easier for them to use Istio.
A few replies popped up when I posted it...
One was quick to tell me that I'm running a free survey... well, that was a given.
The other 2 replies straight-up mocked me… but that's okay…
I don't expect everyone I see on the internet to be kind to me (although I try my best to be so).
See, the thing is, I'm a marketing guy.
I wanted to run a survey and validate the pain points, because I believe in starting from those who'd use the product.
My plan is to bring all the insights I get from the survey and give them to the Product team…
So they can create something meaningful and useful.
"I see… why not try survey platforms then?" you may ask.
There are 2 reasons:
1/ we don’t have enough budget (I bet you saw that one coming :D). And spoiler alert: nobody asked me to do this.
2/ I’d like it to be more targeted and get filled up by the actual users
Even if I had the money, I’d still run the survey here and give coupons or something to the participants.
For some reason, I trust Reddit communities…
Reddit is where I come for research, especially when I’m writing a blog…
And it’s always worth the time and effort.
(I've another account I use to actively participate in some communities. Not dev related, since I don't have enough ex's: experience & expertise.)
So… let me ask you a favor this time…
If you use Istio/any other service mesh, would you be kind enough to fill out a form?
It has only 4 questions, out of which 2 are multiple-choice ones…
Plus, your contact info is optional to submit the form.
I assure you that it wouldn’t take more than 2 minutes of your time… but it will be of great help for a stealth-mode startup like ours.
Last but not least…
3 great souls have already filled up the form… fill it up ASAP to join them (Just kidding :D)
Hope you'd fill it.
Good day, fellow human!
(If this post gets traction, I’ll wait to analyze the responses and then add the form below if they’re positive.)
https://redd.it/11l3vab
@r_devops
Posted by u/Maleficent_Goose_483 - No votes and no comments
How do you keep track of your "Update debt" ?
A bit of context, I'm using ArgoCD to manage 3 clusters. We set auto-upgrade on the minor version of many tools and some of them are locked to a specific version.
My problem is that sometimes the chart version does not follow the same release lifecycle as the application version (like the prometheus-community chart, which releases a major version of the chart for a minor version of the app).
So how do you keep track of the latest available version of an app versus what you have in your cluster?
I would like to avoid maintaining excel sheets manually updated every month :-)
https://redd.it/11l4246
@r_devops
Posted by u/Asfalots - No votes and 1 comment
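The bookkeeping itself is simple once you can fetch "latest" from somewhere (each chart repo's index.yaml, for instance); tools like Renovate automate exactly this. A hedged sketch of the report, with made-up version data:

```python
# Given versions pinned in your Argo CD apps and the latest published
# upstream, report what is behind. How you fetch `latest` is up to you.
def drift_report(deployed: dict[str, str],
                 latest: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Map app name -> (deployed, latest) for apps that are behind."""
    return {
        name: (version, latest[name])
        for name, version in deployed.items()
        if name in latest and version != latest[name]
    }

deployed = {"kube-prometheus-stack": "45.1.0", "ingress-nginx": "4.5.2"}
latest = {"kube-prometheus-stack": "45.7.1", "ingress-nginx": "4.5.2"}
print(drift_report(deployed, latest))
# {'kube-prometheus-stack': ('45.1.0', '45.7.1')}
```

Feeding a report like this into a dashboard or a weekly Slack message replaces the manually updated spreadsheet.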
Cannot get an Azure file share to mount in container instances
Trying to get a multi-container instance up in ACI: WordPress, MySQL, phpMyAdmin. Everything works locally. I'm using Azure file shares for persistent data. Followed the documentation [here](https://docs.docker.com/cloud/aci-integration/), and also checked my script against this [one](https://www.reddit.com/r/AZURE/comments/s6819x/azure_fileshare_volume_for_docker_compose/) and some others from googling and searching here and in the [r/docker](https://www.reddit.com/r/docker/) sub. Containers get created but cannot start because the mounts are missing. The mounts show up when I check the properties of the container, but WordPress and MySQL both complain about missing data.
I have a storage account with 3 shares (because you cannot have duplicate names): a data folder for MySQL, the database I am importing into MySQL, and my existing WordPress content.
version: '3.8'
services:
  msmdb:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somepassword
      MYSQL_DATABASE: dbname
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: anotherpassword
    volumes:
      - msmdata2:/initdb.d/mikdb.sql:/docker-entrypoint-initdb.d/init.sql
      - msmdata3:/data:/var/lib/mysql
    command: --default-authentication-plugin=mysql_native_password
    container_name: mikdb
  msmphpmyadmin:
    image: phpmyadmin
    restart: always
    environment:
      PMA_ARBITRARY: 1
      PMA_HOST: msmdb
      UPLOAD_LIMIT: 300M
    container_name: mikphpadmin
  msmwordpress:
    depends_on:
      - msmdb
    image: wordpress
    restart: always
    links:
      - msmdb:mysql
    volumes:
      - msmdata:/wwwroot:/var/www/html
    environment:
      WORDPRESS_DB_HOST: msmdb:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: somepassword
      WORDPRESS_DB_NAME: dbname
    container_name: mikwordpress
volumes:
  msmdata:
    driver: azure_file
    driver_opts:
      share_name: mikwwwroot
      storage_account_name: saname
  msmdata2:
    driver: azure_file
    driver_opts:
      share_name: mikdb
      storage_account_name: saname
  msmdata3:
    driver: azure_file
    driver_opts:
      share_name: mikdbdata
      storage_account_name: saname
https://redd.it/11l2yq4
@r_devops
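A hedged guess at the issue in the compose file above: the ACI integration mounts azure_file volumes as <volume-name>:<target-path> only, so a three-part mapping like msmdata2:/initdb.d/mikdb.sql:/docker-entrypoint-initdb.d/init.sql has no host-subpath semantics there. Mounting each whole share at the target directory may behave as intended:

```yaml
services:
  msmdb:
    volumes:
      - msmdata2:/docker-entrypoint-initdb.d   # share containing init.sql at its root
      - msmdata3:/var/lib/mysql
  msmwordpress:
    volumes:
      - msmdata:/var/www/html
```

Under this reading, the mounts "show up" in the container properties because the shares do attach, but at paths MySQL and WordPress never look at, which matches the missing-data symptoms.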
How to quickly learn/understand the system architecture of any given application?
When you join a new project or switch jobs, how do you bring yourself up to speed with the new application and its inner workings? I feel the best (and most accurate) way is to observe the flow of traffic: how the requests come in and how they get routed to the back-end.
https://redd.it/11l8wfa
@r_devops
Posted by u/bashogaya - No votes and 6 comments
Proof-of-Concept: Pass environment variables as docker secrets during runtime in your container with Docker Compose v2
Hi all,
I was going through some changelogs as well as a release blog post from Docker about their Compose v2 features and happened to stumble upon an interesting feature where environment variables can be passed into your container as Docker secrets, i.e. instead of passing a value as an env var, you can pass it at runtime as a secrets file.
Here's the Proof-of-Concept repo: https://github.com/shantanoo-desai/docker-compose-secrets-envvars
This mitigates the case where credentials passed as env vars are visible when performing docker inspect or docker compose exec <service> env
https://redd.it/11lc061
@r_devops
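A minimal sketch of the pattern the post describes (service and variable names here are hypothetical, not taken from the repo): the value of DB_PASSWORD on the host becomes /run/secrets/db_password inside the container, instead of an environment variable visible to inspect.

```yaml
services:
  app:
    image: myapp:latest
    secrets:
      - db_password          # mounted at /run/secrets/db_password
secrets:
  db_password:
    environment: DB_PASSWORD  # Compose v2 reads the value from the host env
```

The application then reads the file at /run/secrets/db_password, a convention most containerized services (Postgres, MySQL images, etc.) already support via *_FILE variables.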
Strange Sonarcloud error message ?
Hi all, to give a quick overview of the situation: I am trying to run a code check using SonarCloud/SonarScanner on my Amazon EC2 server but keep getting an error saying "main component does not belong to specified organization". What does this error message actually mean?
https://redd.it/11l9wu0
@r_devops
Posted by u/Peacekeeper2654 - 1 vote and no comments
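In my experience this message usually means the project key being analyzed doesn't exist under (or doesn't match) the organization passed to the scanner - e.g. the key was created in a different SonarCloud organization, or the two properties disagree. A hedged sketch of the relevant sonar-project.properties keys (values are placeholders):

```properties
sonar.host.url=https://sonarcloud.io
sonar.organization=my-org          # the org that owns the project
sonar.projectKey=my-org_my-repo    # must belong to that organization
```

Worth checking both values against what the SonarCloud UI shows for the project, and that the token used was generated in the same organization.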
Can one domain have two server locations?
I am using DigitalOcean with users in North America and Australia. My server is located in Canada. Is there a way I can add a second server in Australia and redirect Australian users to it?
Or am I able to use a CDN or another third-party service/software to help with this?
This is with the intention of increasing the website load speed for Australian users.
https://redd.it/11l9uvg
@r_devops
Posted by u/FoeTrades - No votes and 6 comments
Do you manage runbooks for operations and incident management?
Dear DevOps, I'm an indie developer building a product to help DevOps engineers and software engineers generate runbooks and keep them up to date easily.
I would like to know if your company manages runbooks.
If you do,
What is the main purpose of runbooks?
Would you please share the runbook examples you have?
If you don’t,
Have you ever tried managing runbooks? What made you stop using them?
How do you keep knowledge related to operations and incident management?
I wish to contribute to the DevOps community and industry, and your comments would be very helpful.
https://redd.it/11lg6r3
@r_devops
Posted by u/ssowonny - No votes and 1 comment
Best Enterprise engineering blogs
Some enterprises publish interesting blog posts about how they solve issues at large scale.
For example, this one from Atlassian about their CI/CD migration.
https://www.atlassian.com/engineering/how-we-migrated-complex-ci-cd-workflows-to-bitbucket-pipelines
I also enjoyed this post-incident review of their 10-day partial outage:
https://www.atlassian.com/engineering/post-incident-review-april-2022-outage
Another company that shares posts related to DevOps is Meta. https://engineering.fb.com/category/core-data/
What other large enterprises share their knowledge related to DevOps?
Share your favorite ones in the comments.
https://redd.it/11l9ag1
@r_devops
EKS , ALB , Route 53 help!
Hi, I'm new to DevOps and AWS and a bit confused; maybe someone could give advice.
I followed the AWS docs.
So..
1. Created an EKS cluster
2. Deployed the AWS Load Balancer Controller and cert-manager
3. Deployed my app and services with an ALB Ingress, etc.
4. Deployed MongoDB as a StatefulSet with an EBS volume
5. Set up the Cluster Autoscaler
The load balancer shows as active in the console, but without any target group. My questions are:
1. Is this the right method? Should I now create a Route 53 record to point my subdomain to the ALB, and should I worry about the ALB's DNS name changing? How should I point the subdomain to the ALB?
2. Or should I create an ALB from the console with a target group of my EC2 node instances?
3. When I deployed my app, some Classic Load Balancers showed up, even though they've been deprecated since around 2021/22. Why?
4. My node group has an Auto Scaling group. What happens to the EBS volume if my instances go up/down: will it attach automatically to a new instance, or remain detached?
I see a lot of tutorials where people implement different methods, and I worry it won't work correctly.
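On question 1: a common approach is to keep the ALB the controller created and point DNS at it with a Route 53 alias record (alias records track the ALB's DNS name, so you don't need to worry about its underlying IPs changing), rather than hand-building a second ALB. A minimal Ingress sketch for the AWS Load Balancer Controller (hostname and service name are placeholders):

```yaml
# Hypothetical sketch: Ingress reconciled by the AWS Load Balancer Controller.
# Route 53 then gets an alias A record: app.example.com -> the ALB's DNS name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
```

Tools like external-dns can also create that Route 53 record automatically from the Ingress host, so you never touch the ALB's DNS name by hand.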
https://redd.it/11kxmtd
@r_devops
Posted by u/Legitimate-Carry7285 - 1 vote and 2 comments
Storing build/deployment metadata in NoSQL
Many of our apps rely on key/value pairs during the build/deployment phase.
These are stored in text format, as files, alongside the source code.
Does it make sense to move all such metadata items (key/value pairs) into a NoSQL database and implement web services for persistence and retrieval?
This is unstructured data sitting in files; what is the benefit of moving it to a NoSQL DB?
Concerns are:
1) We'd need a reliable system that is always up to serve key/value pairs for builds/deployments.
2) Over(ab)use pattern: since the data is unstructured, it may be best to keep it in files. Otherwise a persistence layer has to be validated/developed/modified for every new artifact the dev team wants to create in a sprint.
Is this a pro?
1) A persistence API may help enforce some rules, vs. free-form edits on a text file by developers.
Please suggest whether this is something you would entertain in your project.
We could also use a mixed model: some items in the database (mature, standard key/value pairs that need persistence validation) and some items (code/concepts just getting started) still in files.
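As a concrete illustration of the "persistence API may enforce rules" point, here is a minimal sketch (the key-naming rules and required keys are hypothetical, not from the post) of validating key/value deployment metadata on write, which a plain file edit would never enforce:

```python
import re

# Hypothetical rules a persistence API could enforce on metadata writes.
KEY_PATTERN = re.compile(r"^[A-Z][A-Z0-9_]*$")  # e.g. APP_VERSION, REGION
REQUIRED_KEYS = {"APP_NAME", "APP_VERSION"}

def validate_metadata(pairs: dict) -> list:
    """Return a list of validation errors; an empty list means the payload is accepted."""
    errors = []
    for key, value in pairs.items():
        if not KEY_PATTERN.match(key):
            errors.append(f"invalid key name: {key!r}")
        if not value.strip():
            errors.append(f"empty value for key: {key!r}")
    for key in REQUIRED_KEYS - pairs.keys():
        errors.append(f"missing required key: {key!r}")
    return errors

if __name__ == "__main__":
    good = {"APP_NAME": "billing", "APP_VERSION": "1.4.2"}
    bad = {"app name": "billing", "APP_VERSION": ""}
    print(validate_metadata(good))   # []
    print(validate_metadata(bad))    # three errors: bad key, empty value, missing APP_NAME
```

The same checks could of course be run as a CI lint step over the existing files, which keeps the file-based workflow while addressing the free-form-edit concern.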
https://redd.it/11i01nr
@r_devops
@r_devops
Posted by u/RoamWave - 1 vote and 1 comment
RabbitMq consumer not processing messages
We're having a situation in one of our datacenters where, for some reason, one of the RabbitMQ consumers stops consuming. Consequently, the queue depth climbs well over 1000 and one of our app's components stops functioning. RabbitMQ is installed as a package on the VM, while this component runs as a container on the same VM. Currently the workaround is to just restart RabbitMQ, after which the dead consumer suddenly springs back to life and starts consuming again. What could be causing this issue? I've checked the app's logs and those of RabbitMQ; all look normal, no errors reported.
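The next time it happens, a few rabbitmqctl commands (run on the broker VM; queue and consumer names will differ in your setup) can narrow down whether the consumer is gone, blocked on unacked messages, or stuck on a dead connection:

```shell
# Is the queue piling up with unacknowledged messages? A high
# messages_unacknowledged count with a registered consumer suggests
# the consumer hit its prefetch limit and stopped acking.
rabbitmqctl list_queues name messages messages_unacknowledged consumers

# Is the consumer still registered on its channel at all?
rabbitmqctl list_consumers

# Is the client connection still alive, or stuck/blocked?
rabbitmqctl list_connections name channels state
```

Common culprits matching the "restart fixes it" symptom are a consumer that stops acking (so prefetch is exhausted and the broker stops delivering) or a connection that has silently died without the client noticing, e.g. due to missed heartbeats through the container's network path.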
https://redd.it/11lo4fg
@r_devops
Posted by u/ncubez - No votes and 1 comment