What's the difference between GitOps and CI/CD?
It just sounds like fancy marketing talk.
Specifically, this link here from GitHub: https://github.com/readme/featured/defining-gitops
https://redd.it/10faw7u
@r_devops
An application agnostic remote agent
Does anyone know of a generic remote agent tool? By this I mean some kind of agent software that can be installed to a remote system (likely on a different network outside of my control) that can be instructed by a control server to execute arbitrary tasks, preferably containers. I know of several application specific task handlers like GitHub's self-hosted runners, but I can't find anything that is application or language agnostic.
If there isn't such a tool, what level of interest do you think there would be in one?
https://redd.it/10fl0up
@r_devops
What are Terratests good for?
So I'm working on a new team that has a requirement to run Terratest on all Terraform modules. But the way they are currently implementing the tests is by checking that all the resource names match the expected output and have all the right tags.
Looking at this, it seems kind of pointless. Terraform creates what we tell it to, and I don't really feel a need to test that Terraform is creating what I laid out in the module. The only real benefit I see in running Terratest is confirming that the apply finishes without an error, but after that, I don't really need to check all the resource names. We use interpolation for some of the naming conventions, but they aren't that complex, and creating Terratest cases just for the naming convention seems like overkill.
I can think of some better uses for Terratest, like testing the functionality of some of the more complex conditional logic we sometimes use in modules. But I was curious whether anyone here has used Terratest for something more useful, or if I'm missing what the main point of testing Terraform is all about.
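For what it's worth, Terratest tends to pay off when it asserts behavior rather than names: apply the module, hit the thing it creates, then destroy. A rough sketch, assuming the Terratest library; the module path `../examples/web` and the `url` output name are hypothetical:

```go
package test

import (
	"testing"
	"time"

	http_helper "github.com/gruntwork-io/terratest/modules/http-helper"
	"github.com/gruntwork-io/terratest/modules/terraform"
)

// Behavioral test: apply the module, check that what it built
// actually works, then tear it down. The example directory and
// the "url" output are placeholders for your own module.
func TestWebModuleServesTraffic(t *testing.T) {
	opts := &terraform.Options{
		TerraformDir: "../examples/web",
	}
	defer terraform.Destroy(t, opts) // always clean up, even on failure

	terraform.InitAndApply(t, opts)

	url := terraform.Output(t, opts, "url")
	// Retry: infrastructure is often not ready the instant apply finishes.
	http_helper.HttpGetWithRetry(t, url, nil, 200, "OK",
		30, 5*time.Second)
}
```

A test like this exercises the conditional logic in the module and catches breakage a name-matching test never would, at the cost of actually provisioning (and paying for) real resources during the run.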
https://redd.it/10fbrcl
@r_devops
KEDA with Kafka scaler won't scale from zero
We have multiple K8s clusters, and in each cluster we have a Kafka cluster. I want to use KEDA to scale pods up and down based on topic lag. The issue is that when a new cluster is being created, or when we recreate the topics while the pods are at 0 replicas, KEDA does not recognize that there's new lag. This happens because it can't identify the consumer group, since the group has not been created yet. Is there any solution for that? Thanks
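If this is the upstream KEDA Kafka scaler, one documented knob worth trying is `offsetResetPolicy: earliest`: when the consumer group has no committed offsets yet, partitions are then treated as fully lagged, which lets KEDA scale from zero. A minimal `ScaledObject` sketch — the broker address, topic, group, and deployment names are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    name: my-consumer-deployment            # placeholder deployment
  minReplicaCount: 0
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.default.svc:9092  # placeholder
        consumerGroup: my-group                   # placeholder
        topic: my-topic                           # placeholder
        lagThreshold: "10"
        # With no committed offset for the group yet, treat the
        # partitions as fully lagged so scale-from-zero can trigger:
        offsetResetPolicy: earliest
```

The trade-off is that `earliest` can over-report lag on a freshly created group, so the first scale-up may be aggressive until offsets are committed.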
https://redd.it/10fgtwb
@r_devops
Fighting Slow and Flaky CI/CD Pipelines Starts with Observability
Observability in production is great, but what about our own CI/CD? Here's a tutorial on how to build intelligent data collection, dashboarding and alerting over Jenkins pipelines with open source tools like Prometheus and OpenSearch.
https://redd.it/10fhejs
@r_devops
Keep working as developer or become cloud specialist?
I worked for a digital agency for 2 years, in kind of a full-stack position, and built a lot of websites and mobile apps. I got a chance to set up cloud infrastructure and did some DevOps work, just simple CI/CD and Docker.
Now I am getting 2 job offers, one backend developer offer from a bigger agency, and one cloud engineer offer from a large global shipping company.
I am very experienced in frontend, so I want to learn more in backend and DevOps/infrastructure. That's why I am struggling to decide which offer I should take.
While the agency offer pays a bit more and I could further my backend skills, the cloud engineer offer will expose me to K8s, and I would still do some development work, like internal frameworks for the company's application team.
Any advice or anything I should take into consideration?
Should I work as a backend developer and play with cloud in my free time? Or take the cloud engineer offer and do some side projects to keep sharpening my backend skills? Which one sounds more doable?
https://redd.it/10focyg
@r_devops
Is it hard to migrate a Mongo DB from one cloud to another?
Let's say I am on AWS and want to move my MongoDB to Azure. How difficult is that to do? Is it simply a matter of downloading all the data and re-uploading it to the other cloud?
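At small scale it really can be close to "dump and reload": the standard tools are `mongodump`/`mongorestore` (with downtime) or a continuous-sync approach for a live cutover. A sketch with placeholder connection URIs:

```shell
# Dump everything from the source cluster (placeholder URI)
mongodump --uri="mongodb://source.aws.example:27017" --out=./dump

# Restore into the target cluster (placeholder URI)
mongorestore --uri="mongodb://target.azure.example:27017" ./dump
```

For near-zero downtime you would instead keep the clusters in sync during the move — e.g. adding a replica-set member in the new cloud and stepping it up, or using a managed live-migration service — and cut over DNS/connection strings once replication lag reaches zero. The hard parts are usually network throughput for large datasets, index rebuild time, and driver connection-string changes, not the copy itself.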
https://redd.it/10frkvy
@r_devops
Posting information INTO postman from outside - how is it done?
I've used a variety of other tools to examine payloads from PUT or POST requests.
Pipedream is my current favourite for this - it provides me with an API endpoint that I can POST a json payload to so I can examine the payload and test everything thoroughly before posting to the intended downstream system.
How do I do this with Postman? My IT team has set up a Postman account that we can all save our work into to make it easier to share, but they are not sure how to do this. The only documentation I can find from Postman talks about receiving responses when you POST to another system from Postman.
I feel like we are all missing the obvious here - can you do this with postman and if so where is the documentation?
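As far as I know, Postman's request builder only *sends* requests; to receive and inspect an inbound payload you'd use Postman's mock servers, or their hosted echo service as a quick sanity check — it simply reflects your payload back at you:

```shell
# postman-echo.com is Postman's public echo service; the response
# includes the JSON body you POSTed, so you can see exactly what
# the downstream system would have received.
curl -s -X POST https://postman-echo.com/post \
  -H "Content-Type: application/json" \
  -d '{"hello": "world"}'
```

A Postman mock server gives you a persistent URL tied to your team workspace, but note it replies with canned example responses; for deep inspection of arbitrary inbound payloads, a Pipedream-style request bin is still the better fit.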
https://redd.it/10fr6u5
@r_devops
Reproducible builds locally and in the pipeline using docker?
Hey everyone, I have been working on our pipelines at work and have some questions for the community about whether a similar implementation exists.
Currently, using dotnet as an example, we have the following pipeline set up in Azure Pipelines:
Build --> Unit Test / Sonar analysis --> Docker build & publish
All of these stages (bar the Docker build) run in an Azure Pipelines container job. The build stage does a `dotnet publish` and uploads the produced artifact. The unit test stage runs the SonarQube analysis on the dotnet test build and publishes the coverage/test result files to the Azure DevOps server and SonarQube. The Docker stage then creates a production-ready image, copies in the artifact published in the first step, and pushes it to our private Docker registry.
This all works fine, but after speaking to the devs, they have requested we make the process more repeatable, so they can be assured that what they produce locally is the same as what the pipeline produces. I think this is a good idea. After diving into ways to achieve it, we agreed that we should take more advantage of Docker for reproducibility and use a multi-stage build to run the application build, the unit tests, and of course the final production-ready image. In the pipeline we can then run a simple `docker build` and get the same result as on a dev machine.
My only issue with this process is the code analysis. SonarQube hooks into MSBuild to analyze the code, but that would now run inside a Docker build. Do we add Java and the SonarQube scanner to the first stage of the multi-stage image? Do we want devs to run this step locally and have their local code analyzed? Or do we have a completely separate step, with another build inside the pipeline, purely for the code analysis?
I am struggling to find an elegant solution; everything seems very overkill, and I am wondering if anyone else has managed to achieve something similar.
Please feel free to ask any questions. I feel like I have not explained the situation well, but I will try to clear things up where I can.
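One common shape for this is a multi-stage Dockerfile where build, test, and the final image all live together, so `docker build` is the single entry point both locally and in CI. The image tags and app name below are placeholders, not a recommendation for specific versions:

```dockerfile
# Build/test stage: identical for devs and the pipeline
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
# Tests run inside the build, so a broken test fails `docker build`
RUN dotnet test --logger trx --results-directory /testresults
RUN dotnet publish -c Release -o /app/publish

# Final production-ready image: runtime only, artifact copied in
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

For the SonarQube question, a frequently used compromise is to keep analysis as a separate pipeline-only step (a plain `dotnet build` wrapped in the scanner), accepting that it is the one thing not reproduced locally — baking Java and the scanner into the build stage bloats the image and forces credentials into every local build.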
https://redd.it/10fk7mp
@r_devops
You got hired as a DevOps engineer, but you are really a glorified sysadmin. What do you do to change this?
Curious how people would approach this if this happened at a company. First thing that comes to mind is containerizing applications?
https://redd.it/10fx26f
@r_devops
Need to learn about cert (security)
Hi guys,
I have been working in DevOps for a while and used to be a developer. Certs have always scared me away, so I never got involved in working with them. But recently, most of the issues in our environment have been because of certs, whether in Kubernetes, OpenShift, or Kafka.
We are having different types of issues, and it's very difficult for me to follow when our team discusses them in meetings.
Can you guide me on where I should start learning about this, and also suggest any certification courses that would help? My main goal is to be ready to solve security problems related to certs/keys.
Thanks
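For the hands-on side, most cert debugging comes down to a handful of `openssl` commands: generating, inspecting, and checking what a live endpoint actually serves. A self-contained warm-up you can run anywhere:

```shell
# Create a throwaway self-signed certificate to practice on
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 30 -subj "/CN=demo.local"

# Inspect it: who issued it, for whom, and when it expires --
# the questions behind most Kubernetes/Kafka cert incidents
openssl x509 -in /tmp/demo.crt -noout -subject -issuer -dates

# Against a live endpoint you would check the served chain instead,
# e.g.: openssl s_client -connect broker:9093 -showcerts </dev/null
```

Once these feel natural, the Kubernetes/OpenShift/Kafka issues mostly reduce to the same three checks: has the cert expired, does the SAN match the hostname being dialed, and does the client trust the issuing CA.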
https://redd.it/10fy6ki
@r_devops
CORS issue after attaching AWS WAF to load balancer
Guys,
I am facing "Access to fetch at ' ' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled." after attaching AWS WAF to my load balancer. Without the WAF it works fine, so what may trigger the issue, or which rules are responsible for this scenario?
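One way to narrow it down is to replay the browser's preflight by hand and compare results with and without the WAF in the path. WAF doesn't strip CORS headers itself, but a rule that blocks the `OPTIONS` preflight (returning 403 without the CORS headers) produces exactly this browser error. The URLs below are placeholders:

```shell
# Reproduce the preflight the browser sends (placeholder URLs)
curl -i -X OPTIONS "https://api.example.com/resource" \
  -H "Origin: https://app.example.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: content-type"
# If this returns 403 from the WAF instead of 200/204 plus
# Access-Control-Allow-* headers, enable request sampling on the
# web ACL to see which managed rule matched.
```

Managed rule groups that inspect request bodies or enforce size limits are common culprits, since preflights and cross-origin POSTs look anomalous to generic rules.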
https://redd.it/10fw4wl
@r_devops
Beholder - Documentation search engine with K8S first approach
Hey everybody,
I just finalized the first version of my project: Beholder. When deployed to K8S it allows you to expose OpenAPI documentation for specifically labeled services.
It's the first version, I tested it as much as I could but there could be some lingering bugs. I would be more than grateful for the feedback.
https://github.com/gdulus/beholder
https://redd.it/10g1gxc
@r_devops
Should I continue my self-taught journey to become Remote worker?
Overthinker here...
There's a thing that demotivates me from continuing my self-taught journey to become a DevOps engineer. I'm from a third-world country where there are barely any software jobs; it's just web dev with pretty bad salaries. I'm currently learning sysadmin skills and Golang, but it doesn't stop there; I know there's much more, for sure. I have already made my roadmap and path, and I'm in my third year of college (computer engineering). But the issue is that I hear many people say DevOps requires you to work as a sysadmin or software engineer first and get hands-on experience before moving to DevOps, and also that it's very hard to get a remote job outside the US. My plan is to get as much knowledge as I can and build up my GitHub so I can land a junior remote role, even if it pays well below average (not necessarily a DevOps role; sysadmin or cloud specialist first is fine). It's fine for me to work any of those jobs first, but remotely? Eh, even for DevOps. What do you think, guys? Should I stay motivated and keep learning? I'm worried that all my studies will go to waste.
I do have a carefully considered roadmap/path. I spent months doing research and watching videos.
Edit: I can't travel outside my country. I'm here taking care of my family alone :q
https://redd.it/10g4hmw
@r_devops
designing guide | DevOps
I have a homogeneous infrastructure in the cloud, and I need to design the DevOps way of managing it.
My design should be capable of configuration management, security updates, scaling, and automation.
I have very good knowledge of Linux, storage, and operations, but I have no clue about DevOps ways of designing.
So is there any book or website you could refer me to? Please.
https://redd.it/10g4t22
@r_devops
Do you let devs deploy to production?
Just curious how others are doing this. Here, the devs need to open a Jira ticket requesting a specific build to be deployed to prod, and then our team does the deployment with the CI/CD pipeline.
https://redd.it/10g3bcb
@r_devops
Azure Keyvault for multi-cloud use (AWS, Rancher onprem, and Azure)
Does anyone have experience utilizing Azure Keyvault outside of Azure? I've been tasked with identifying a multi-cloud solution for secrets management. We have an existing Hashicorp Vault setup, as well as an existing Azure Keyvault setup.
Is it possible to use Hashicorp vault as a secret store that pulls from Azure Keyvault? Alternatively, is it possible to use Azure Keyvault successfully in AWS kubernetes clusters or VMs, or Onprem kube clusters/VMs?
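Azure Key Vault is just an authenticated HTTPS API, so it is usable from anywhere that can reach it with Azure AD credentials (e.g. a service principal); for Kubernetes there is an upstream Key Vault provider for the Secrets Store CSI driver that can run outside AKS with service-principal auth. Note that HashiCorp Vault's built-in Key Vault integration goes the *other* direction: it can use Key Vault for auto-unseal, not as a backing secret store. A quick out-of-Azure check with the `az` CLI — vault and secret names are placeholders:

```shell
# From any machine/VM/cluster outside Azure, authenticate as a
# service principal first:
az login --service-principal \
  -u "$APP_ID" -p "$APP_SECRET" --tenant "$TENANT_ID"

# Read a secret value (vault/secret names are placeholders)
az keyvault secret show \
  --vault-name my-keyvault --name my-secret \
  --query value -o tsv
```

The practical caveats outside Azure are network reachability (Key Vault firewall/private endpoints) and credential bootstrapping — the service principal secret itself has to live somewhere, which is often the argument for standardizing on one vault rather than chaining two.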
https://redd.it/10g9j9k
@r_devops
Hands-on examples of observability-driven development
https://tracetest.io/blog/observability-driven-development-with-go-and-tracetest
Based on one of my previous discussions about ODD, I wanted to go into more depth and explain how it works with a code demo using open-source tools like Go and Tracetest. The main point I think is that there are no mocks. Instead, you're running E2E and integration tests against real data. I think the biggest pain point in testing on the back end is the amount of coding you need to do to actually just make the test run. Mocking API responses, setting up credentials and env vars to access different services and databases. It's just a lot of hassle to run an integration test.
Disclosure: I am on the Tracetest team, so I'm passionately not disinterested in what you think about the whole ODD movement.
https://redd.it/10gab31
@r_devops
Moving from Puppet to Ansible - a few questions around structure and config drift.
So we're on Puppet right now - it's old, out of date, but at the core of everything we do.
We'd like to move to Ansible, which a lot of us are familiar with, and which I think is the better path forward for us as we're moving a lot of things to the cloud.
Now I have a few thoughts/questions for which I don't have an exact answer:
1: Configuration Drift
We can make a playbook, chuck it into gitlab, have a pipeline run it...but then what?
What if someone makes a config change on the box but not in git? (it WILL happen)
Puppet runs every 45 minutes or so, without using Ansible Tower, how are people doing this?
Something like Rundeck?
An "Ansible Master" server at each DC running cron jobs every hour?
2: Structure or hierarchy of our Playbooks/Roles, with multiple DCs
There will be quite a few common roles that ALL servers will need:
NTP, Security/SSH settings, Log rotation, Log shipping etc etc
Do we just create a playbook for each server type/location, chuck in the "Common" roles and then the app/location specific role into that playbook?
Seems like #2 could get messy quickly with lots of servers doing the same thing across multiple DCs.
E.g. I might want to affect only the mail servers at DC1 today, then DC2 tomorrow, and DCs 3, 4, 5 & 6 later... but does that mean I've got six versions of the same role to maintain?
EDIT: Damn text editor forces you to be in caps even when you're not, so the title looks like shit.
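On point 1, without Tower/AWX the usual answers are either a scheduled pipeline that re-runs the playbooks, or `ansible-pull` on each node via cron/systemd timer, which gives you Puppet-style periodic convergence with no central runner. The repo URL and playbook name below are placeholders:

```shell
# Run once by hand: clone the repo and apply local.yml against
# this host (ansible-pull looks for <hostname>.yml, then local.yml)
ansible-pull -U https://gitlab.example.com/infra/ansible.git \
  -i localhost, local.yml

# Puppet-style convergence: a cron entry (minutes 0 and 45 of each
# hour) with random splay so hosts don't all pull at once.
# Add via `crontab -e` or a file in /etc/cron.d:
#   0,45 * * * * root sleep $((RANDOM % 300)); ansible-pull \
#     -U https://gitlab.example.com/infra/ansible.git \
#     -i localhost, local.yml >> /var/log/ansible-pull.log 2>&1
```

For point 2, the common pattern is one role per concern plus `group_vars` per site: a single `common` role list shared by every host, inventory groups like `dc1_mail`, and limits (`--limit dc1_mail`) to stage a rollout DC by DC — one role, many inventory groups, rather than six copies of the role.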
https://redd.it/10gc90g
@r_devops
"I Know So Much Stuff I Learned Over The Years I Forgot Half Of That By Now?"
I feel like my brain has a limited capacity to remember stuff I don't repeat from time to time.
As a DevOps/SysOps/sysadmin/whatever, I've had to learn how so many tools work over the years that I've lost track of half of them.
For example, 10 years ago I was using Puppet. I could write configurations one by one; it was super easy to understand, and now I would have to remind myself of most of it, because I'm mostly using GA now.
Am I just a bad engineer, or do the tools change so often from company to company that it's just impossible to remember all of them? Maybe some people can, or most?
Just curious what other people's experience is in this regard.
https://redd.it/10gfegd
@r_devops
Monitoring stack demo using Grafana, Loki & Mimir
Wanted to share a demo/tutorial with everyone on how to get started with a monitoring stack using Grafana, Loki, and Mimir, with Prometheus metrics and the Promtail log sender:
[https://github.com/wick02/monitoring](https://github.com/wick02/monitoring)
I also created a [video demo](https://www.youtube.com/watch?v=KPqbA7ys24o) of it working on a Mac M1, and a few of my old colleagues cloned it with no issues reported. I have around 6-7 years of experience helping maintain log and metric backends, and this is my second video on Grafana; the first is available on [Grafana's YouTube channel](https://www.youtube.com/watch?v=AgV5DoWcY6I&t=1544s) from a meetup in 2017.
**Goals of this repo:**
* Trim each service down to the very basics and isolate them from each other, so you can pick and choose what you want to use from the demo.
* It's configured in such a way that you can scale it in a cloud environment and give something to the developers.
* It doesn't depend on keeping volumes on the machine, so you can use something like Amazon ECS without managing volumes, and use spot instances to help cut costs.
* It's not a lot of code or configuration; it draws on a lot of existing tutorials, but is put together in such a way that I think anyone with some operational experience can get started with it.
* It's also built so that metrics and logs are pushed to an S3-like backend using MinIO, so you can persist all of them.
* Lastly, it uses tenant IDs, so if you run this as a massive shared service for the company you can isolate offenders by rate limiting them until they stop sending you too many metrics/logs, as we're all accustomed to seeing when managing these kinds of backends.
* Since it's simple to spin up a Mimir or Loki cluster with a design like this, you could run multiple clusters and isolate components even further.
I hope someone out there finds this useful. I hope to add Tempo in the future, along with a Terraform deployment process for this stack.
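To make the "S3-like backend" and "tenant IDs" goals concrete, here is a minimal sketch of the kind of Loki settings they imply. This is illustrative only: the endpoint, bucket name, credentials, and limit values are placeholders, and the actual configuration in the repo may differ.

```yaml
# loki-config.yml (sketch) - object storage via MinIO instead of local
# volumes, plus per-tenant isolation and rate limits.
auth_enabled: true          # requests must carry an X-Scope-OrgID header (the tenant ID)
common:
  storage:
    s3:
      endpoint: minio:9000  # MinIO speaks the S3 API; placeholder endpoint
      bucketnames: loki-chunks
      access_key_id: minio
      secret_access_key: minio123
      s3forcepathstyle: true
limits_config:
  ingestion_rate_mb: 4      # per-tenant rate limit; noisy tenants get throttled
  ingestion_burst_size_mb: 6
```

On the sending side, Promtail's client config has a `tenant_id` field, so each team or service can ship logs under its own tenant and be rate limited independently.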
https://redd.it/10gfu0t
@r_devops