Icinga2 vs Instana for monitoring
What are your thoughts?
Instana is pretty great since everything comes easy, out of the box.
Looking into Icinga2, I like how much power I have with Python scripts, but I've been trying to set up the same ecosystem that Instana provides, and I'm finding that much harder.
https://redd.it/kv7ujy
@r_devops
Kaholo feedback CI/CD
Has anyone here used kaholo for CI/CD?
Any feedback welcome!
https://redd.it/kv6xhr
@r_devops
Modern CI/CD pipeline for front end projects
Here is what we do in my company:
`precommit` and `prepush` #git hooks are used to catch issues before they are pushed upstream.
`precommit` runs only on staged files (takes a few seconds).
`prepush` runs ESLint, TypeScript, and unit tests (takes up to 20 seconds).
Every time a commit is pushed:
1 ) We build a #docker image & bundle cypress and other development dependencies. This allows us to run all subsequent tasks using the same Docker image.
It is fast. Takes 2-4 minutes. 🏎
2 ) We run 5 tasks concurrently to validate our build.
ESLint
TypeScript
Jest unit tests
Cypress integration tests
Fetch, validate & compile GraphQL schema
3 ) For every commit, we deploy a review app.
Review app:
Allows anyone to preview what is being developed.
Allows anyone to preview our storybook.
Allows anyone to leave visual reviews (WIP)
4 ) Before changes can be merged to the main branch, we use GitLab to mandate at least one review from the team.
In addition, we use GitLab's review system to suggest the best person to review the code based on which files have changed.
5 ) When changes are merged to the main branch, we automatically deploy to production.
We use ArgoCD to implement GitOps. This means that we have a detailed log of everything that has been deployed, and in case of a critical error, reverting is as simple as `git revert`.
6 ) Finally, we push changes regularly to the main branch. Small incremental updates, dozens of times a day.
This means that if things break, they are typically small things and easy to revert / patch.
We use feature flags to hide any WIP features. 🏳
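Step 2's five-way fan-out can be sketched in plain shell (a real pipeline would use CI-native parallel jobs; the task commands below are stand-ins, not the poster's actual scripts):

```shell
#!/bin/sh
# Fan out independent validation tasks and fail the build if any of them fail.
# The real tasks (eslint, tsc, jest, cypress, graphql) are replaced by `true`
# stand-ins here so only the control flow is shown.
run_task() {
  name="$1"; shift
  "$@" > "task-$name.log" 2>&1       # each task gets its own log
  echo "$name:$?" >> results.txt     # record exit status
}

: > results.txt
run_task eslint  true &
run_task tsc     true &
run_task jest    true &
run_task cypress true &
run_task graphql true &
wait                                 # block until all five finish

# Any line not ending in ":0" means a task failed.
if grep -qv ':0$' results.txt; then
  echo "build failed"
else
  echo "build ok"
fi
```

The same shape maps directly onto five parallel jobs in one CI stage, all reusing the Docker image built in step 1.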
Originally posted:
https://twitter.com/kuizinas/status/1349177926105792514
https://redd.it/kw6pn3
@r_devops
Grafana announces new free Grafana cloud tier with hosted Prometheus up to 10k series and 50gb of Loki logs
from their blog
Seems like a good thing for startups or people new to Loki or Prometheus who don't need a ton of retention (limited to 14 days). Small homelab projects where you don't want the extra overhead of self-hosting would also be a good fit for this kind of thing.
https://redd.it/kw09g1
@r_devops
CI/CD quick tip: Custom Slack deployment messages
To keep the dev team involved in production, and perhaps more importantly, share key dashboards and logs so they can quickly respond to issues, I've found custom Slack channel messages to be really useful. I made a very quick video that shows how to add a Slack app and send a custom deployment message in your CD pipeline:
https://youtu.be/UVeJINQ8MmY
How do you keep your dev team involved in production events?
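For reference, Slack incoming webhooks just accept a JSON POST, so the pipeline step itself is tiny. A sketch that builds the message (the app name, version, and dashboard URL are invented, and the final curl is left as a comment since it needs a real webhook secret):

```shell
#!/bin/sh
# Build a deployment message for a Slack incoming webhook. In a real CD
# pipeline, SLACK_WEBHOOK_URL would come from CI secrets; the values below
# are illustrative.
APP="orders-api"
VERSION="1.4.2"
DASHBOARD="https://grafana.example.com/d/orders"

PAYLOAD=$(printf '{"text":"Deployed %s %s. Dashboard: %s"}' \
  "$APP" "$VERSION" "$DASHBOARD")
echo "$PAYLOAD"

# In the pipeline step:
# curl -sf -X POST -H 'Content-Type: application/json' \
#   -d "$PAYLOAD" "$SLACK_WEBHOOK_URL"
```

Linking the dashboard and log URLs right in the message is what makes this useful: the on-call dev is one click from the evidence.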
https://redd.it/kw0tfg
@r_devops
Containerizing JBoss EAP with custom configuration
We have a lot of legacy apps on JBoss EAP in environments that are due for a refresh. I used to be fairly comfortable with EAP, but haven’t found much out there on modifying server configuration in the world of containers.
To be clear, Red Hat's documentation does cover clustering and some other aspects, but I hadn't seen anything on managing arbitrary bits of EAP configuration without having to manage all of the configuration XML.
After some experimentation I found it wasn’t too bad. I wrote it up at https://medium.com/@chethosey/configuring-eap-subsystems-with-galleon-9c824684a7bd in case it’s helpful to anyone else.
I also have some experience with configuring data sources and injecting JDBC drivers via Galleon, which I could write up if anyone is interested.
https://redd.it/kw7jsq
@r_devops
PLEASE stop shoehorning devops where it doesn't belong OR WHERE YOU AREN'T READY FOR IT
Excuse my personal rant, but as a seasoned sysadmin, I'm pulling my hair out with this BS, and all these organizations that can't seem to grasp that what works for software development (Agile, scrum, devops etc) doesn't necessarily work for your infrastructure and operations teams the way it does for developer teams, yet you do your best time and time again to make us "cross functional" because it works so well for you.
Why? Because you can't treat hardware, maintenance, compliance, and networking like software when it isn't. Listen. I get it. You read a book about all the cool things like infrastructure as code and software-defined networking. Then you forgot that your infrastructure is aged and you have no budget or interest in adding what is necessary to make things redundant, highly available, or anything else. You don't know why ordering storage systems without dual controllers means that your entire stack has to go down to update the firmware, which is why it hasn't been patched since it was purchased. You don't get why on-site servers aren't infinite resources like the cloud is, or why you can't "just use the cloud" to fix all of your problems. You don't understand that a network doesn't have limitless bandwidth, and too many bright "decentralized" ideas clog the pipes faster than eating at Chipotle.
"What do you mean there's only so many IPs available?" said the developer who automated the build-out of containers that reserve their own IPs without checking IPAM, because they've never heard of it.
I don't know when or why someone thought "empowering" developers meant giving them free rein on systems they don't understand in order to shit out "value" as fast as humanly possible, and then complaining that trying to implement process and policy "slows things down". For the love of all that is holy, this is purely unsustainable, and this virus has apparently infected everyone.
I'm sure some of you in here can't relate because you're from competent strategic organizations that have implemented appropriate structure, but the rest of the industry is burning shit down underneath themselves. This helps to highlight why things like cybersecurity are a freaking pipe dream. It's all spit, lies, and bubblegum, and the world runs on it.. slow the F*** down!
https://redd.it/kvx9vh
@r_devops
How the heck do I solve the problem of maintainable template projects?
I'm running the CI show at my client's shop, where we do pretty much greenfield development for 9 different building-security devices like card readers and such. There are two base Linux platforms (a beefy one and a tiny one); each device is built from one of the platforms as a base, and then a set of services and configurations is added on top of that to make the final firmware for each device.
We have an internal tool that lets developers create services. This tool creates a git repo and does some ghetto templating work by copying over a hello-world project and replacing a few magic strings in the template files depending on the command-line args passed to the tool.
This all works fine until a change in the template is required. At that point it's just a horror show to keep all the service repos synchronized. The CMake files for each project are customized, so you can't just copy the new file over; you have to open it in each repo and manually perform the change. It's very tedious.
The minimum functionality I'm after is having the CI yell at me if I change the template and one of the repos is out of sync when it's committed. The optimal solution would be some automation to update all of the repos.
Is there some templating framework that supports this out of the box?
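Short of a full templating framework, the minimum "CI yells at me" behaviour can be had with marker comments and a checksum: checksum only the template-managed region of each customized file, so per-repo edits outside the markers don't trigger false positives. A sketch (the markers and file contents are invented):

```shell
#!/bin/sh
# Drift check: extract the template-managed block between marker comments
# and checksum only that, ignoring per-repo customisations outside it.
cat > CMakeLists.txt <<'EOF'
# BEGIN TEMPLATE
cmake_minimum_required(VERSION 3.16)
include(common/firmware.cmake)
# END TEMPLATE
add_executable(card_reader main.c)   # per-repo customisation, ignored
EOF

managed=$(sed -n '/# BEGIN TEMPLATE/,/# END TEMPLATE/p' CMakeLists.txt)
sum=$(printf '%s' "$managed" | cksum | cut -d' ' -f1)
echo "managed-region checksum: $sum"
# CI would compare $sum against the checksum recorded alongside the template
# repo and fail the job on mismatch.
```

The automated-update half is then a script that replaces the marked region across repos and opens merge requests, which is roughly what tools in this space do under the hood.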
https://redd.it/kw2k2r
@r_devops
CD for production env good idea?
We have had a terrible experience doing CD for what we thought was well-tested code. We always somehow end up with big failures, every time for some newly discovered reason. We are now doing updates in our production environments manually, using simple scripts, one by one.
At this time, the only colleagues using CD are frontend developers, and only on testing environments, so they can see their code running inside a Kubernetes environment instantly.
Where do other fellow DevOps folks use CD? Anyone doing CD in production? If yes, what other tools are you using? If not, please share your update procedures.
https://redd.it/kw5rz8
@r_devops
Is it possible to have a Jenkins linked choice param?
The issue I am trying to solve: when a dev pushes code to the dev branch in Git, Jenkins triggers via a hook. Currently, in the hook and job config, I have to specify choices for server a/server b, config a/config b, profile a/profile b, etc. What I want is: if it's Server A, then choose Config A and Profile A. My hook URL is getting crazy trying to add each param, where it would be much easier to say Server=A in the URL and have Jenkins know the rest of the build information based on the server chosen.
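One workaround, short of a cascading-parameter plugin like Active Choices, is to pass only the server in the hook URL and derive the linked values inside the job itself, e.g. in a shell build step (the names below are invented):

```shell
#!/bin/sh
# Collapse linked parameters into one: the webhook passes only SERVER
# (?SERVER=a) and the job derives the rest from a single mapping.
SERVER="${SERVER:-a}"

case "$SERVER" in
  a) CONFIG="config-a"; PROFILE="profile-a" ;;
  b) CONFIG="config-b"; PROFILE="profile-b" ;;
  *) echo "unknown server: $SERVER" >&2; exit 1 ;;
esac

echo "building with server=$SERVER config=$CONFIG profile=$PROFILE"
```

This keeps the server-to-config mapping versioned with the job instead of smeared across webhook URLs.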
https://redd.it/kw0kt0
@r_devops
Best alternative for Opscode Chef
Hey guys.
I was using Opscode Chef for quite a long time to set up my web host instance in AWS. After moving to another laptop, I realized that after installing the needed version of knife-zero and all the needed Chef programs, it couldn't start because of the deprecation of some dependencies. Now I think it's the best moment to move to another configuration management tool.
Here are some of my use cases:
setup git repo
setup couple of cron jobs
start and setup some docker services
create folders, create files from templates
setup users, their ssh keys
install packages
I've been working with Ansible and it's my first candidate to try, but I would like to hear other open-source options from colleagues.
P.S.: Everything is running in AWS micro instance, so no K8s needed.
https://redd.it/kw2n0z
@r_devops
Which cloud provider offers the best career options: AWS, Azure, or GCP?
Hey all! I'm currently a junior platform engineer going through the recruitment process. I've been working with AWS for around 3 years now, and I'm debating whether moving to another role is a good idea. I have a few job interviews with companies that use GCP and Azure. Is it better to build on my AWS skill set in my current role or broaden my horizons to GCP or Azure? Thanks!!
https://redd.it/kw28df
@r_devops
Did anyone test how rootless the new Docker rootless mode is?
I think we can agree that Docker's security holes were largely related to the fact that, by default, containers run with root privileges, which is a bad practice.
I am wondering if somebody has tried their new rootless mode implementation, and whether it was a pain to update related images.
https://redd.it/kw0kyz
@r_devops
Suggested modern DevOps books?
Can anyone suggest a good modern DevOps book? I'm looking for a book that focuses on the overall architecture of a DevOps environment. For example, how to manage an environment at scale with DevOps.
The reason I'm interested in such a book is that I want to ensure I'm practicing DevOps in a way that won't be hindered by the challenges that appear when the environment becomes large. We can all be very successful at a small scale, but it's different when the environment grows.
https://redd.it/kvz4m6
@r_devops
Deploying Software at GoCardless: Open-Sourcing our “Getting Started” Tutorial
My team at GoCardless have spent the last year rebuilding our infrastructure stack. Today, we've open-sourced our internal getting-started tutorial, in the hope it might help others understand how our tools (Kubernetes, ArgoCD, Jsonnet, etc.) all work together:
https://medium.com/gocardless-tech/deploying-software-at-gocardless-open-sourcing-our-getting-started-tutorial-ab857aa91c9e
The work was motivated by an aggressive hiring target and an increase in how frequently application teams wanted to build new services or make infrastructure changes.
While we always tried for "you build it, you run it", our tools weren't very suited for it. Developers took a long time to onboard themselves to our dev tools, and it wasn't possible for a standard application engineer to deploy a new staging (pre-production) service without SRE involvement.
The new stack hinges on a framework we call Utopia, which is a combination of technologies. It is:
- The name of the directory in which we keep our organisation config files (anu/utopia)
- A Jsonnet library of Kubernetes mixins that lets developers write idiomatic Kubernetes deployments without boilerplate (anu/utopia/lib/utopia.libsonnet)
- A Golang binary, utopia, which has a number of common developer commands
And, more usefully for external readers, it leverages an opinionated mix of several tools like Kubernetes, Tekton, ArgoCD, and more.
This is not a batteries-included setup, but nor should it be a "draw the rest of the owl". It's intended to help others see how this stuff can work, and show people how we've enabled a route to hands-free service bootstrapping without compromising the security of our production environments.
It's also just out of the prototype stage, and we hope to kill several of the steps from the tutorial once we have smoother processes.
Either way, we hope it's useful!
https://redd.it/kvwxi3
@r_devops
How should i be storing usernames & passwords for file access?
I want to use MinIO to store temporary copies of files to deliver to clients via the web dashboard feature. But I need to put some kind of authentication on the file access. MinIO ships with a user account & user groups feature, so I can easily make a random username/password for a client's files and set the files to auto-delete after e.g. 7 days. But where and how should I be storing these passwords on the server?
There are plenty of articles about proper encryption of user passwords, but what does an implementation of something like this look like when I just want to give someone access to a resource like this file server?
The entire process of
- make bucket
- import files to bucket
- make user account
- give user access to bucket
- email user the login credentials for the bucket
is easy to automate with a simple script. I am just not sure where these user credentials should be saved.
Maybe I could even get away with not saving them, and using the email notification as the only record of the password? I am intending for this to be a temporary file storage location, not permanent.
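The "maybe don't store them at all" instinct is reasonable for short-lived buckets: a middle ground is to keep only a salted hash for audit, and treat the email as the sole delivery of the secret. A sketch of the generation side (the `mc` commands come from MinIO's client, but the alias and bucket names here are invented):

```shell
#!/bin/sh
# Generate a throwaway credential for a short-lived client bucket and keep
# only a salted hash for audit; the plaintext goes out in the email and is
# then discarded.
USER="client-$(openssl rand -hex 4)"
PASS="$(openssl rand -base64 24)"
SALT="$(openssl rand -hex 8)"
HASH="$(printf '%s%s' "$SALT" "$PASS" | openssl dgst -sha256 | awk '{print $NF}')"

echo "user=$USER"
echo "audit-record=$SALT:$HASH"   # store this, never the plaintext password

# Then, with MinIO's client:
# mc mb myminio/"$USER"-files
# mc admin user add myminio "$USER" "$PASS"
# (email $PASS to the client and discard it)
```

The hash lets you later verify a credential someone presents (or prove what was issued) without ever persisting the password itself.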
https://redd.it/kvwbp5
@r_devops
Windows Monitoring Suggestions
Looking for good ways to monitor Windows. Particularly individual services.
We use Prometheus to monitor overall system mem/CPU usage and several other things. We came across an issue recently where _something_ is not letting go of memory, requiring us to restart the server VM. I'm hoping to be able to monitor the services so that we can identify what exactly is holding onto the memory. There are usually a couple of services running hot, making identifying the exact one difficult.
Can Prometheus do this? Would you recommend a simple POSH script to report services exceeding x resources or what? Appreciate the help!
https://redd.it/kv6nm9
@r_devops
What are best Practices for SSH Key Management?
We have a Proxmox installation with some KVM VMs and host everything else in two Kubernetes clusters built on Rancher, one hosted internally on a Proxmox KVM instance and the other on a public cloud provider. As our team grows we are starting to look into ways of automating the provisioning and configuration of our machines/containers, and are currently looking into Terraform and Ansible (but are open to other solutions as well).
One thing I am unsure about is how to handle SSH public keys in a good way. What would be great is to use Ansible or Terraform to configure the machines through cloud-init (Proxmox has native support for this), so that when a new key needs to be added we do it in one place and it is added everywhere. More importantly, when someone leaves the team, we can just delete the key in one place and Ansible/Terraform would do the rest.
Anyway, all easy tasks in my eyes together with GitLab CI, but what I am unsure about is security. Where would you store the public keys? And more importantly, how would you make sure that no other person can edit the public keys and give themselves access to machines they shouldn't have access to?
Would be great to hear some best practices on this!
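One common pattern for the "one place, removed everywhere" requirement: keep the key list in a Git repo protected by GitLab merge-request approvals (so no one can add a key unilaterally), and let Ansible enforce it with `exclusive`, which strips any key not on the list. A minimal sketch, assuming the `ansible.posix` collection is installed; the `deploy` user and the key strings are placeholders:

```yaml
# keys.yml - the single source of truth for team SSH access.
# Protect this file with MR approval rules / CODEOWNERS in GitLab.
- name: Enforce the team's SSH keys on every machine
  hosts: all
  become: true
  vars:
    team_keys:
      - "ssh-ed25519 AAAA...placeholder... alice@example.com"
      - "ssh-ed25519 AAAA...placeholder... bob@example.com"
  tasks:
    - name: Install exactly the listed keys (any other key is removed)
      ansible.posix.authorized_key:
        user: deploy
        state: present
        exclusive: true              # removes keys not in the joined list below
        key: "{{ team_keys | join('\n') }}"
```

Note the newline-joined string: `exclusive: true` is documented as not loop-safe, so all keys must go through a single module call. Offboarding then becomes "delete one line, merge, run the pipeline".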
https://redd.it/kv6exn
@r_devops
A pure bash clojureish CI pipeline
I thought the r/devops subreddit might be interested in this project I just found!
https://github.com/rosineygp/lines
If you like this, I do a weekly roundup of open source projects that you can subscribe to; it includes an interview with one of the devs.
https://redd.it/kwkdeh
@r_devops
Ory Hydra: Open Source OAuth2/OIDC Provider
Hey, I hope it's OK if I make a post promoting an open source project I have been contributing to for about a year now. We just saw a somewhat major release, and since the project is open source and free I thought you might enjoy it. Please let me know if that goes against posting guidelines. Also feel free to ask me anything related to the project :)
ORY Hydra 1.9 was released yesterday!
ORY Hydra is an OAuth 2.0 and Certified OpenID Connect Provider and implements all the requirements stated by the OpenID Foundation.
It issues OAuth 2.0 Access, Refresh, and ID Tokens that enable third parties to access your APIs in the name of your users.
The open-source project has been built by the ORY community for about six years, and we are proud to have handled more than 10 billion API requests in December 2020 from over 23,000 different production environments.
ORY Hydra is written entirely in Go; it is security-first, high-performance, and developer-friendly.
We value our community greatly and most development is driven by input from the community.
Check if ORY Hydra is the right fit for you!
ORY Hydra 5 Minute Tutorial: Set up and use ORY Hydra using Docker Compose in under 5 Minutes. Good for quickly hacking a Proof of Concept. (The same tutorial in video form)
Visit our Discussions on Github or our chat if you have any questions or feedback.
https://redd.it/kwe9ph
@r_devops
installed vault on mac, opened the zip file and then ran the binary(?) and it still is not showing vault
Do I need to export my PATH variable or something else? It did ask me to change my shell to zsh, so I did, but I still get this:
z@Mac-Users-Apple-Computer ~ % vault
zsh: command not found: vault
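Almost certainly the unzipped `vault` binary just isn't in any directory on your PATH; unzipping doesn't install anything. A self-contained sketch of the PATH mechanics, using a stub script in place of the real binary so it runs anywhere:

```shell
#!/bin/sh
# Simulate "installing" a binary: put it in a directory, mark it
# executable, and add that directory to PATH. The stub stands in
# for the real vault binary you unzipped.
bin_dir="$(mktemp -d)"
printf '#!/bin/sh\necho vault-ok\n' > "$bin_dir/vault"
chmod +x "$bin_dir/vault"
export PATH="$bin_dir:$PATH"

command -v vault   # now resolves to the stub's full path
vault              # prints: vault-ok
```

For the real fix, either move the binary somewhere already on PATH, e.g. `sudo mv ~/Downloads/vault /usr/local/bin/` (assuming that's where you unzipped it), or add its directory to PATH in `~/.zshrc` with `export PATH="$HOME/bin:$PATH"`, then open a new shell.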
https://redd.it/kwgt5a
@r_devops