Solving ArgoCD Secret Management with ArgoCD-Vault-Plugin
Hi everyone, I wanted to share an ArgoCD plugin I have been working on that allows connecting to Vault in a simple way that does not require an Operator or CRD. The plugin is in its early stages and only supports a couple of backends, but we look forward to any contributions, suggestions, or ideas you may have!
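To give a flavor of how it works: the plugin fills in placeholder tokens in ordinary Kubernetes manifests at render time, using values read from a Vault path given in an annotation. A hedged sketch (the annotation key, path, and placeholder names here are illustrative; check the plugin's README for the exact syntax your version expects):

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: example-secret
  annotations:
    avp_path: "secret/data/my-app"   # Vault path to read from (illustrative key)
type: Opaque
stringData:
  password: <password>   # replaced with the "password" key found at that path
```

Because the substitution happens when ArgoCD renders the manifests, the repo never contains the secret values themselves.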
https://werne2j.medium.com/argocd-secret-management-with-argocd-vault-plugin-539f104aff05
https://redd.it/lbpcpp
@r_devops
How do you automate AWS AMI updates?
I currently manage most of our infra with Terraform. I have a module that returns the latest AWS AMI for a particular service (EKS, ECS, etc.). This means that whenever we run a Terraform plan for a project that uses the service, the plan will include an AMI update if AWS has released a newer AMI. This has worked fine, but I'd like to make it a little more stable: I'd like the latest AMI to run for a while in our non-prod environments, and then have some sort of approval process so that production gets updated later. Any ideas on how to make this work? Or any ideas for an alternative approach?
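One pattern that fits this: keep the `aws_ami` data source as the default for non-prod, but let an explicit pin win when set, so production only moves to an AMI after a reviewed change to a variable (i.e. the pin becomes your approval gate). A hedged sketch; the filter value and names are illustrative:

```hcl
# Explicit AMI ID for prod; empty string means "use the latest".
variable "ami_pin" {
  type    = string
  default = ""
}

data "aws_ami" "latest" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amazon-eks-node-1.19-*"] # illustrative
  }
}

locals {
  # Non-prod leaves ami_pin empty and tracks the latest AMI; prod sets
  # ami_pin in its tfvars, promoted via a normal reviewed PR.
  ami_id = var.ami_pin != "" ? var.ami_pin : data.aws_ami.latest.id
}
```

This keeps a single module while making the prod update an explicit, auditable diff rather than a side effect of `terraform plan`.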
https://redd.it/lbrfhs
@r_devops
A guide to the best SRE tools
A beginner's guide to common SRE/DevOps tools and incident management automation (monitoring, on-call, IaC):
https://www.getcortexapp.com/post/a-guide-to-the-best-sre-tools
https://redd.it/lbtomt
@r_devops
transfer thousands of files of any size with optimization
We have been doing a mix of manual processes and some scripts to transfer files of various sizes from one system to another. Basically, there are shares where people may dump hundreds or thousands of files of varying sizes; we then move these files to another location.
We want a tool that would automatically optimize speed/performance based on file size and count and transfer the files (NiFi, maybe?).
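For context, the core of what such a tool optimizes can be sketched in a few lines: copy files concurrently, starting the largest ones first so the big transfers begin early and small files fill in around them. A dedicated tool like NiFi adds retries, backpressure, and monitoring on top of this; the sketch below uses hypothetical paths and plain `shutil` copies:

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def transfer(src_dir: str, dst_dir: str, workers: int = 8) -> int:
    """Copy every file under src_dir to dst_dir with a pool of workers,
    largest files first. Returns the number of files copied."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    # Sort descending by size so long transfers start immediately.
    files = sorted((p for p in src.rglob("*") if p.is_file()),
                   key=lambda p: p.stat().st_size, reverse=True)

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy2 preserves timestamps

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # drain to surface exceptions
    return len(files)
```

For shares on the same network, `rsync` or `robocopy /MT` give you much of this for free; a flow tool earns its keep once you need scheduling, failure handling, and visibility.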
https://redd.it/lboo0s
@r_devops
Keeping track of the infrastructure
Hi there,
the cloud application my company is developing did not start with orchestration in mind (and it's too late to do so :D).
We have some hosted components in Azure (fixed set) together with some managed machines in OVH (scalable).
For two purposes:
- dynamic topology
- monitoring
we'd like a service where every component can "check in" and that other applications can use to reliably get info on the currently online components.
Do you have any suggestions?
Thanks a lot!
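Ready-made options here include Consul, etcd, or an Azure-native store, but the core of such a check-in service is just heartbeats with a TTL. A minimal sketch of the idea:

```python
import time

class Registry:
    """Minimal in-memory service registry: components check in
    periodically and count as online until their TTL lapses."""

    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self._last_seen = {}  # component name -> last check-in time

    def check_in(self, component, now=None):
        """Record a heartbeat (the `now` parameter aids testing)."""
        self._last_seen[component] = time.time() if now is None else now

    def online(self, now=None):
        """Components whose last heartbeat is within the TTL."""
        now = time.time() if now is None else now
        return sorted(c for c, t in self._last_seen.items()
                      if now - t <= self.ttl)
```

In practice you would put this behind a small HTTP API and make each component POST a heartbeat on a timer; consumers poll `online()` for the current topology.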
https://redd.it/lbnw5t
@r_devops
Prometheus exporter to retrieve the DockerHub rate limit counts as scrape target
This exporter lets you retrieve the Docker Hub rate limit counts as a Prometheus scrape target, exposed as a Gauge metric.
Multi-arch Docker images are available (arm/arm64/amd64), along with a complete docker-compose example.
I hope you find it useful.
Docker Hub Rate Limit Exporter GitHub Link
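For background, Docker Hub reports the limits in `ratelimit-limit` / `ratelimit-remaining` response headers on a manifest request (e.g. `100;w=21600` meaning 100 pulls per 21600-second window). The header parsing an exporter does looks roughly like this; a sketch, not the linked project's actual code:

```python
def parse_ratelimit(header_value: str) -> dict:
    """Parse a Docker Hub ratelimit header value such as '100;w=21600'
    into {'count': 100, 'window_seconds': 21600}."""
    count, _, window = header_value.partition(";")
    fields = {"count": int(count)}
    if window.startswith("w="):
        fields["window_seconds"] = int(window[2:])
    return fields
```

The exporter then publishes `count` as a Gauge so you can alert before anonymous pulls start failing in CI.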
https://redd.it/lbigg8
@r_devops
What do you use to manage on-call alerting on AWS?
Hi, we have a current system where we use CloudWatch and MS Teams notifications for alerting if something happens in production.
However, management requires 24/7 support, so I'd like a system with scheduling for 24/7 coverage and the ability to call/alert a developer when a severe incident happens in production.
What are the best tools for the job?
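Whichever paging tool you pick (PagerDuty, Opsgenie, etc. all do schedules and phone-call escalation), the usual AWS-side wiring is CloudWatch alarm to SNS topic, which the tool subscribes to. A hedged Terraform sketch with illustrative names and thresholds:

```hcl
resource "aws_sns_topic" "oncall" {
  name = "production-oncall"
}

# Example alarm: sustained 5xx errors at the load balancer.
resource "aws_cloudwatch_metric_alarm" "errors" {
  alarm_name          = "prod-5xx-errors"
  namespace           = "AWS/ApplicationELB"
  metric_name         = "HTTPCode_Target_5XX_Count"
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 5
  threshold           = 10
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.oncall.arn]
}
```

Your existing MS Teams notifications can stay on the same topic, so chat visibility and paging share one alert path.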
https://redd.it/lbldob
@r_devops
Help me with setting this up
Hey guys, I'm working on a project with a production database hosted in AWS RDS. We work on a separate local Postgres database inside a Docker container, and the APIs are to be uploaded to Lambda. This is where things get complicated: whatever changes or migrations we make to the development database are not reflected in RDS. I want the final changes to be applied to RDS automatically through some pipeline. Is there any guide that helps with this kind of problem?
Sorry if I sound like a noob; this is my first time working with a stack this large. Thank you.
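One common shape for this: keep migrations as versioned SQL files in the repo, apply them locally during development, and have the pipeline replay the same files against RDS after merge. A hypothetical GitHub Actions job using the Flyway Docker image; secret names and the migrations path are illustrative:

```yaml
deploy-migrations:
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/main'
  steps:
    - uses: actions/checkout@v2
    - name: Apply versioned migrations to RDS
      run: |
        docker run --rm -v "$PWD/migrations:/flyway/sql" flyway/flyway \
          -url="jdbc:postgresql://${{ secrets.RDS_HOST }}:5432/app" \
          -user="${{ secrets.RDS_USER }}" \
          -password="${{ secrets.RDS_PASSWORD }}" \
          migrate
```

Flyway (or Alembic/Liquibase, same idea) tracks which migrations have run in a schema-history table, so re-running the job is safe and dev and prod converge on the same schema.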
https://redd.it/lbl04s
@r_devops
Need help from someone with AWS CI/CD and VPC experience
I hope this does not violate community guidelines, but I really need some help with an AWS project. I am working on a MENN app in AWS, and we are having a lot of issues with CI/CD from CodeCommit --> CodeBuild --> CodeDeploy for Lambdas. I am also having no luck connecting MongoDB Atlas to a Lambda. We are all full-stack devs, and I have a background in systems administration, so I was able to set up VPC peering, but I can't get IAM authentication to work for connecting to MongoDB Atlas from a Node.js Lambda. I don't have a lot of money, but I would be willing to pay if anyone could help.
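For the CodeBuild leg, a minimal hedged `buildspec.yml` that zips the code and pushes it straight to a Lambda (the function name is illustrative, and this sidesteps CodeDeploy entirely, which is often a reasonable simplification for Lambdas):

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 12
  build:
    commands:
      - npm ci
      - zip -r function.zip . -x ".git/*"
      - aws lambda update-function-code
          --function-name my-api-handler
          --zip-file fileb://function.zip
```

The CodeBuild service role needs `lambda:UpdateFunctionCode` on the target function for the last step to succeed.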
https://redd.it/lc4b6q
@r_devops
Help with specific metrics around platform for increasing headcount
Unsurprising story: asks for headcount fail because DevOps/platform/infrastructure isn't easily quantifiable like the revenue of an external product/feature.
Really need someone to share either/both:
1. specific measures they use that are effective when talking to executives
2. how these are scraped/generated
Disclaimer: I’m aware of Accelerate metrics but this is one of those challenges beyond that. A product team that makes money can show they’re making more money by adjusting those metrics; it is difficult if you’re 1-2 deviations off.
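On the "how these are generated" part: most Accelerate/DORA-style numbers reduce to arithmetic over timestamps you already have in Git and deploy logs. A sketch (timestamps assumed ISO-8601 without timezone; the point is how little tooling the scraping side needs):

```python
import statistics
from datetime import datetime

def lead_time_hours(commit_ts: str, deploy_ts: str) -> float:
    """Hours between a commit and its production deploy
    (the DORA 'lead time for changes' metric)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deploy_ts, fmt) - datetime.strptime(commit_ts, fmt)
    return delta.total_seconds() / 3600.0

def median_lead_time(pairs) -> float:
    """Median over (commit_ts, deploy_ts) pairs -- a single number
    whose trend is legible to executives."""
    return statistics.median(lead_time_hours(c, d) for c, d in pairs)
```

The persuasive framing is usually the trend ("median lead time halved since the platform team took over CI"), not the absolute value.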
https://redd.it/lc2q8y
@r_devops
How does manual testing fit into CICD and trunk based development?
Struggling to understand how people do CI/CD and trunk-based development with or without manual testing. Surely you can go straight to prod if you pass all automated regression tests, but is anyone still performing manual tests? And at what point in the development cycle/process is manual testing performed? Is it a gate before a release is actually "deployed"?
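A common arrangement is exactly that gate: everything up to staging is fully automated on trunk, and the production deploy is a manual job whose "play" button doubles as sign-off after any exploratory/manual testing. In GitLab CI syntax (job names and scripts illustrative):

```yaml
stages: [test, staging, production]

automated-tests:
  stage: test
  script: ./run-regression-suite.sh

deploy-staging:
  stage: staging
  script: ./deploy.sh staging

deploy-production:
  stage: production
  script: ./deploy.sh production
  when: manual   # human gate: click after exploratory testing on staging
```

Teams practicing full continuous deployment drop the `when: manual` and rely on feature flags plus automated checks instead; manual testing then happens behind flags in production rather than as a release gate.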
https://redd.it/lc10mf
@r_devops
Career Advice I want to move from Civil Engineering to DevOps engineering?
I'm currently doing my bachelor's degree in Civil Engineering Technology in South Africa, and when I graduate I may become a civil technologist/engineer. However, I want to branch into DevOps. What is the best route for me to become a DevOps engineer? Is there a bridging honours or master's degree I can do?
https://redd.it/lc3u1z
@r_devops
Declarative APIs
I am wondering whether there's an actual use case, or whether it's an advanced-user feature that is just nice to have.
Will declarative APIs / infra-as-code capabilities affect your decision when choosing a tool/platform?
View Poll
https://redd.it/lc1qnb
@r_devops
Looking for simple local build system
I'm looking for some kind of simple generic build system that will run entirely locally on my Windows machine (not Docker) and will basically do 4 things:
- Execute a sequence of commands
- Capture the commands and output
- Collect generated files from a build and put them somewhere
- Maintain the history of builds, logs, and files
Even better if it could automatically do a lot of the things a CI/CD system would do, e.g.:
- Check out a Git revision (from a locally hosted Git repo, or a GitHub repo)
- Set up environment variables
- Run tests
- Generate some reports
- Generate a manifest
- Identify and collect artifacts
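For reference, the core loop of such a tool is small enough to sketch; a minimal Python runner that executes commands, captures their output, and keeps a per-build history directory (the on-disk layout is illustrative):

```python
import subprocess
import time
from pathlib import Path

def run_build(commands, workspace=".", history_dir="build-history"):
    """Run shell commands in sequence, capture output, and record the
    build under history_dir/<timestamp>/. Returns (build_dir, ok)."""
    build_id = time.strftime("%Y%m%d-%H%M%S")
    build_dir = Path(history_dir) / build_id
    build_dir.mkdir(parents=True, exist_ok=True)
    ok = True
    with open(build_dir / "build.log", "w") as log:
        for cmd in commands:
            result = subprocess.run(cmd, shell=True, cwd=workspace,
                                    capture_output=True, text=True)
            log.write(f"$ {cmd}\n{result.stdout}{result.stderr}")
            if result.returncode != 0:  # stop the sequence on first failure
                ok = False
                break
    (build_dir / "status.txt").write_text("success" if ok else "failed")
    return build_dir, ok
```

Off-the-shelf options that fit the "local on Windows" constraint include a local Jenkins instance or Gitea + a runner; the sketch above is the 20% of that which the four listed requirements actually need.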
https://redd.it/lby4ta
@r_devops
Which job should I pick?
I am a mid-level DevOps engineer. I am familiar with all the general DevOps tools and have spent quite some effort on AWS (I have 3 certs already, 1 of them a Specialty), but not much real-life experience.
Currently I have two job offers (first of all, the salaries and company sizes are the same):
* Job 1:
* AWS
* Serverless
* No K8S (yet)
* Website and mobile app
* Possible working from home 60%
* Quite a distance from home
* Job 2:
* Azure
* K8S
* IoT
* Possible working from home partly
* Half the distance from home, compared with Job 1
Which one should I pick, or is there anything I should consider?
https://redd.it/lbwa5w
@r_devops
Can I bulk upload epics and features to a backlog?
Basically the title. I've got about 20 epics with multiple Features cascading under them. I want to be able to bulk upload everything.
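Assuming this is Azure DevOps (which uses the Epic/Feature terms), one low-tech route is generating `az boards work-item create` calls from a CSV export. A sketch; the org/project values and CSV layout are illustrative:

```python
import csv
import io
import shlex

def generate_commands(csv_text, organization, project):
    """Turn a CSV with 'title' and 'type' columns into a list of
    `az boards work-item create` command strings."""
    commands = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        commands.append(
            "az boards work-item create"
            f" --title {shlex.quote(row['title'])}"
            f" --type {shlex.quote(row['type'])}"
            f" --org {shlex.quote(organization)}"
            f" --project {shlex.quote(project)}"
        )
    return commands
```

Linking Features under their parent Epics needs a second pass (the CLI can add a parent relation once you have the created work-item IDs), or you can use the Excel plugin for Azure DevOps, which handles hierarchy natively.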
https://redd.it/lbvzwd
@r_devops
CI puppet code using docker image
We're developing Puppet code to automate configuration for VMs shipped to our customers.
For the moment, a simple pipeline is set up to check the code and synchronize modules in Foreman. Each time we want to check the result, we need to connect to the VMs, run the Puppet agent, and analyze the output.
I would like to set up a pipeline using customized CentOS/Debian Docker images (with systemd enabled) running the Puppet server and agent to test new development.
I assume the result should be the same as if I was deploying the manifests onto VMs.
Am I right to think that it would have the same effect on production VMs? Has anyone already tested this?
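For the image itself, the usual pattern is a systemd-enabled base with the agent installed; a hedged Dockerfile sketch (package URL/version illustrative). Two caveats: the container generally needs extra privileges and `/sys/fs/cgroup` mounted for systemd to boot, and kernel parameters, firewalld, and hardware-facing resources won't behave exactly as on real VMs, so a green pipeline is a strong signal rather than proof:

```dockerfile
FROM centos:7
RUN yum -y install https://yum.puppet.com/puppet6-release-el-7.noarch.rpm \
 && yum -y install puppet-agent \
 && yum clean all
# Boot systemd as PID 1 so services managed by Puppet actually start.
CMD ["/usr/sbin/init"]
```

In the pipeline you would `docker exec` a `puppet agent -t --detailed-exitcodes` run against a containerized or test Puppet server and fail the job on unexpected exit codes.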
https://redd.it/lbtdiv
@r_devops
Need tips on package managers
So my environment has Linux nodes, Windows nodes, and Docker images running on both virtual and physical servers, all on the same network.
I would like to create a local repository to host Windows packages, Linux packages, Docker images, and Packer VM/ISO templates in one location.
I believe Linux, Docker, and Packer templates should not be a problem, but I am wondering about Windows.
I would like everything to be on one virtual node.
Does anyone have ideas/tips on what I can explore?
I am open to anything (open source, of course).
Thanks in advance
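One commonly used option is a single artifact manager that speaks multiple formats: Sonatype Nexus 3 (or JFrog Artifactory) can host yum/apt repos, Docker registries, raw files for Packer templates, and a NuGet feed that Chocolatey on Windows can consume, which covers the Windows gap. A hedged docker-compose sketch for a single-node setup (port mapping for the Docker registry connector is illustrative; you configure it in the UI):

```yaml
version: "3"
services:
  nexus:
    image: sonatype/nexus3
    ports:
      - "8081:8081"   # web UI and most repository formats
      - "8082:8082"   # illustrative: a hosted Docker registry connector
    volumes:
      - nexus-data:/nexus-data
volumes:
  nexus-data:
```

Windows clients then point Chocolatey at the NuGet repo URL, Linux nodes get a yum/apt repo definition, and Docker daemons add the registry as a mirror or private registry.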
https://redd.it/lced8d
@r_devops
Looking for some good rules of thumb
Hi!
I'm a web app developer, and when I have to deploy stuff I always choose the smallest tier, because I have no idea what traffic/request load a given spec can handle.
So if someone with experience can help me with any of the 3 following things, that would be amazing:
1. For a basic JSON API backend server that, let's say, executes 1 database operation when it gets a request (assume an average-speed framework; it shouldn't make that big of a difference): how should I think about choosing hardware? E.g., if I'm expecting at most 5,000 requests/second, what hardware can handle that, and what about 10,000 req/sec, 20,000 req/sec, and so on?
2. The same for a basic static file server that serves static HTML + CSS + JS. Again, if the sum of all assets is, for example, 3 MB and I have X req/sec, how should I think about it?
3. A server-side-rendering HTML server (React SSR or any MVC framework). This one is the hardest, but if someone has a lot of experience there's a chance there are good rules of thumb: how much heavier is it than a simple JSON server that executes a DB operation?
If someone can help me with any of it or link me some good resources, I would be very thankful!
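On point 2, a useful first check is pure bandwidth, since for static files that (not CPU) is usually the wall. Back-of-envelope arithmetic:

```python
def bandwidth_gbit(req_per_sec, avg_response_bytes):
    """Network bandwidth (Gbit/s) needed to serve a request rate at a
    given average response size -- a first-order capacity check."""
    return req_per_sec * avg_response_bytes * 8 / 1e9

# 10,000 req/sec of small 2 KB JSON responses is modest (~0.16 Gbit/s):
# such a workload is bounded by the database, not the network.
json_gbit = bandwidth_gbit(10_000, 2_000)

# 5,000 req/sec of a 3 MB static bundle is 120 Gbit/s -- far beyond a
# single small instance, which is why static assets go behind a CDN
# rather than onto a bigger server tier.
static_gbit = bandwidth_gbit(5_000, 3_000_000)
```

For points 1 and 3, the honest rule of thumb is to load-test (wrk, k6, Locust) on the smallest tier and scale from the measured req/sec per core; SSR commonly costs an order of magnitude more CPU per request than serializing JSON, but the multiplier varies too much by framework to trust a fixed number.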
https://redd.it/lcfayp
@r_devops
Dell's ALM Tools
Does anybody know which ALM tools Dell is using? Are they using Jira? Azure DevOps? Something else? An in-house tool? I'm looking at moving there and wanted a heads-up on what tools I should be looking at.
https://redd.it/lbt87i
@r_devops
Which tool are you using to run workflows/pipelines in Kubernetes
There are two main contenders to be the de-facto standard for CI/CD, machine learning, and other types of workflows/pipelines in Kubernetes: Tekton and Argo Workflows.
Which one do you prefer?
A video about Argo Workflows (one about Tekton is coming soon as well):
>>> https://youtu.be/UMaivwrAyTA
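For a feel of the Argo side, a minimal Workflow is a single CRD with container templates (Tekton expresses the same idea across separate Task and TaskRun resources). A sketch with an illustrative image and command:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: alpine:3
        command: [echo, "hello from argo"]
```

Submitting it with `argo submit` (or `kubectl create`) runs the step as a pod; real pipelines chain templates into DAGs with artifact passing between steps.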
https://redd.it/lchp9y
@r_devops