Update notification tool
Is there any tool that notifies you when software on a list you choose receives an update?
Ideally with changelogs and a filter on the type of update (major, minor, bugfix)?
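The filtering part is simple to sketch. Below is a minimal, hypothetical semver classifier (plain x.y.z versions only, no pre-release handling) that a notifier could use to decide whether an update matches the user's filter:

```python
def update_type(old: str, new: str) -> str:
    """Classify a version bump as 'major', 'minor', 'patch', or 'none'.

    Assumes plain x.y.z semver strings; pre-release tags and build
    metadata are not handled.
    """
    o = [int(part) for part in old.split(".")]
    n = [int(part) for part in new.split(".")]
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    if n[2] != o[2]:
        return "patch"
    return "none"
```

A notifier could poll each project's release feed and alert only when `update_type` matches the configured filter.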
https://redd.it/11s23tt
@r_devops
Posted by u/smark91 - No votes and 1 comment
AZ-900 Microsoft Azure Fundamentals Study Revision Notes
https://redd.it/11rnjgy
@r_devops
AWS Cloud And Azure Cloud Certification Study Notes
AZ-900 Microsoft Azure Fundamentals
If you are studying for the Microsoft Azure Fundamentals exam, this guide will help you with a quick revision before the exam. It can be used as study notes for your preparation.
Personal docs
Hi,
I wanted to ask: is anyone here using a free tool for personal documentation?
It can be related to work or something else.
Besides the tool, how do you organize these docs?
I want to start documenting things, so any useful information or recommendations would be helpful.
Thank you
https://redd.it/11s6aki
@r_devops
Posted by u/misso998 - No votes and 2 comments
manage old dockerhub images
How do you keep track of and solve the problem of old Docker Hub images? My own images are updated and rebuilt every day, so no problems there. But I just noticed that I am using old images from Docker Hub (one of them is 5 years old :) ).
I was considering these options:
1. Create a Dockerfile using "FROM dockerhub/image", add apt-get update && upgrade, set up the CI/CD, and then use this image instead of Docker Hub's
2. Find a fork and use it
3. Find a competing project, migrate my config, and use it
4. Develop it my way
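Option 1 can be sketched roughly like this (hypothetical image name taken from the post; whether apt-get upgrade inside a Dockerfile fits your reproducibility requirements is a separate question):

```dockerfile
# Rebuilt daily in CI so OS packages in the stale upstream image stay patched.
FROM dockerhub/image:latest

# Pull in security updates on top of the old base layers,
# then clear the apt cache to keep the layer small.
RUN apt-get update && \
    apt-get upgrade -y && \
    rm -rf /var/lib/apt/lists/*
```

Note this only refreshes OS packages; the application inside the image stays at whatever version the abandoned upstream last shipped.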
https://redd.it/11r91zj
@r_devops
Posted by u/rafipiccolo - 1 vote and 6 comments
Any recommended automations ?
I recently moved into a DevOps role and I'm relatively new to the DevOps realm.
That said, I have a decent understanding of how the tools work together and of the end goals.
With that said, I was curious to know whether anyone has implemented any automations in their day-to-day work that they would recommend.
https://redd.it/11s93r9
@r_devops
Posted by u/Jay9044 - No votes and 1 comment
Sign up for tomorrow's webinar - how to protect your software supply chain with open source tools
Register here >
Open source tools that'll be covered:
Snyk
SonarQube
Syft
Nexus
HashiCorp Vault
Sigstore/Cosign/Rekor
OPA
and more
https://redd.it/11sbe8c
@r_devops
Red Hat
How to protect your software supply chain with open source technologies
Building cloud-native applications often leads to sprawling software supply chains consisting of tools and code from both trusted and unverified sources. Applying security and governance to cloud-native supply chains can be challenging without understanding…
Deploying to multiple environments using Gitlab-Terraform
So I am currently making the transition from CloudFormation to GitLab-managed Terraform, and I'm trying to wrap my head around trunk-based deployment. We've done branch-per-environment in the past and have seen its pitfalls, so I am trying to see if this is a better solution.
A few hurdles: we have three environments (Dev, Test, and Prod) and two GitLab servers. One is responsible for deploying to Dev and the other for deploying to Test and Prod. It's a security requirement that I don't really want to get into, but syncing code between servers isn't the issue I am having.
I am confused about using one branch to deploy to multiple environments. I currently have the standard .gitlab-ci.yml working, and we also pull in an auto.tfvars file during the setup phase to control environment-specific values. On a successful development merge, we sync the code to the second GitLab server so that it can be used in the Test/Prod environment.
I guess my question is: how am I supposed to handle deploying to Test and Prod with a single MR? I could do it sequentially, running through my stages [setup, validate, plan, deploy] again and swapping out the environment variable so that the correct tfvars file is picked up. But that seems clunky/wrong.
What is the cleanest way of doing this? Am I supposed to have a pipeline that runs sequentially? Or is there a slicker way of doing Test/Prod in parallel, where I have two plans at the same time representing both environments that can be manually deployed? I feel like I am missing something here and haven't really been able to find a full solution yet.
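One common shape for the parallel variant, sketched only as an assumption (GitLab's terraform-images wrapper, a per-environment tfvars/state naming convention, and `parallel:matrix`): a single pipeline produces both plans, and each environment's apply is left as its own manual gate.

```yaml
stages: [validate, plan, apply]

.terraform:
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

validate:
  extends: .terraform
  stage: validate
  script:
    - gitlab-terraform validate

plan:
  extends: .terraform
  stage: plan
  parallel:
    matrix:
      - ENVIRONMENT: [test, prod]
  variables:
    TF_STATE_NAME: ${ENVIRONMENT}   # separate state per environment
  script:
    - gitlab-terraform plan -var-file="${ENVIRONMENT}.auto.tfvars"
  artifacts:
    paths: [plan.cache]

apply:
  extends: .terraform
  stage: apply
  when: manual                      # each environment gets its own manual gate
  parallel:
    matrix:
      - ENVIRONMENT: [test, prod]
  variables:
    TF_STATE_NAME: ${ENVIRONMENT}
  script:
    - gitlab-terraform apply
```

The fiddly parts are wiring the right plan artifact to the matching apply job (via `needs:`) and keeping state names consistent across the two servers; treat the above as a starting point, not a finished pipeline.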
https://redd.it/11r7q66
@r_devops
https://redd.it/11r7q66
@r_devops
Gitlab
Terraform integration in merge requests | GitLab
GitLab product documentation.
I want to be able to deploy apps as quickly as possible to on-prem k8s. I was looking at Jenkins X with its jx create command; it looks pretty powerful, but complicated to set up. Any easier alternatives?
What other cli tools are available to build an app on k8s?
https://redd.it/11sdyem
@r_devops
Posted by u/kaigoman - No votes and 1 comment
Is anyone using ChatGPT in day-to-day tasks/projects?
I have used it for a couple of projects I have been working on, as well as to produce better texts.
I would like to know how other professionals in our field are using this amazing tool.
https://redd.it/11sdwjq
@r_devops
Posted by u/Middle-Sprinkles-165 - No votes and 8 comments
How to manage and release features to different customers in both SaaS and self-hosted environments
Many organisations struggle with how to maintain a single repository and master branch to continuously deliver their software to their customers who require different features.
Some customers use the online SaaS service, while others need to deploy in a private, self-hosted environment, and you struggle to keep everyone on the same released version.
If a new feature is initially built for only one particular customer, you struggle with how to canary-ship it to that customer while keeping the code the same for all the others.
You also struggle to respond quickly to customer requirements without involving too many engineers. If the customer success team can do it without engineers, that's perfect.
How to mitigate the situation?
A feature flag management service is the must-have technology for these scenarios. Feature flags are a modern engineering technique that decouples code deployments from feature releases, giving you control over who sees each feature and when they see it.
Feature flags can be categorized into four pillars (release flags, experimentation flags, operational flags, and permission flags) across their lifecycle from development to customer success.
Operational flags and permission flags can be used to manage entitlements in software, i.e. controlling which features or functionality a user has access to based on their subscription or payment plan.
Release flags and experiment flags help you deliver a feature to a specific customer with minimal risk. They allow teams to test new features in production and progressively (via percentage rollout) release a new feature to targeted customers, reducing the "blast radius".
I wrote an article about how to use feature flags to manage and release features to different customers in both SaaS and self-hosted environments. I hope it helps; feedback is welcome.
How to manage and release features to different customers in both SaaS and self-hosted environments (featbit.co)
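The percentage-rollout idea behind release flags can be illustrated with a tiny sketch (a hypothetical helper, not FeatBit's actual API): hash the flag and user together into a stable bucket, so the canary cohort doesn't change between requests.

```python
import hashlib

def is_enabled(flag_key: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministic percentage rollout.

    Hashing flag+user into a 0-99 bucket gives each user a stable
    yes/no answer per flag, so a 10% rollout always hits the same 10%.
    """
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # map user to a bucket in 0..99
    return bucket < rollout_percent
```

Raising `rollout_percent` only ever adds users to the enabled set; it never flips a user who already saw the feature back off, which is what makes progressive delivery safe.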
https://redd.it/11shfm5
@r_devops
FeatBit Blog
Modeling of edge application on VM instance
I have an edge application and I aim to decrease latency for end users by implementing microservices. However, being new to this domain, I'm curious whether there are alternative ways to accomplish this goal.
https://redd.it/11qnkep
@r_devops
Posted by u/Automatic-Heron3777 - 1 vote and no comments
What is JWT? How does it work?
Learn what JWT is and how it works in this informative post.
Get a quick and easy-to-understand summary of this important security technology that's widely used in modern web applications. Check it out now!
https://mojoauth.com/blog/what-is-jwt/
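The core mechanics are small enough to sketch with the standard library alone (HS256 only; a real service should use a maintained library such as PyJWT and also validate claims like exp):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build header.payload.signature with HMAC-SHA256."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    """Recompute the signature and return the payload if it matches."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

The key point the post makes: the payload is only encoded, not encrypted — anyone can read it; the signature only proves it wasn't tampered with.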
https://redd.it/11r31py
@r_devops
What is JWT? How does it work? | MojoAuth Blog
JWT or JSON Web Tokens are the new industry standards for securing APIs to and from the server. But what exactly is JWT? How does it work? Let us understand it more in detail.
Two devs are trying to find out if AWS Application Composer really is worth anyone's time
Designing serverless apps visually sounds good on paper. My friends built some practical projects to find out whether AWS Application Composer really does it well enough. Their conclusion is that the tool is not yet ready for commercial work, but it shows promise. If you want to view their App Composer projects with details and code, I invite you to check it out.
https://redd.it/11qf38k
@r_devops
The Software House
Can AWS Application Composer help you save time on designing serverless apps?
With AWS Application Composer you can design serverless apps faster. Is it production-ready? We made some projects just to find out the answer!
📢 DEPRECATION ALERT: Mar 20 traffic from the old Kubernetes registry k8s.gcr.io will be redirected to registry.k8s.io
📢ICYMI this Monday, Mar 20, traffic from the older k8s.gcr.io Kubernetes registry will be redirected to registry.k8s.io
If you run in a restricted environment and apply strict domain name or IP address access policies limited to k8s.gcr.io, the image pulls will not function after k8s.gcr.io starts redirecting to the new registry.
How can you know if you're affected? It only takes a single-line kubectl command to find images from the old registry (see the post below).
The deprecated k8s.gcr.io registry will be phased out at some point. Please update your manifests as soon as possible to point to registry.k8s.io.
This is actually good news: the new community-owned Kubernetes image registry, registry.k8s.io, will save major egress-traffic costs for users not running on Google Cloud ☁️
Read more on this blog post:
https://kubernetes.io/blog/2023/03/10/image-registry-redirect/
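The check is essentially the filtering pipeline below, shown here against a hypothetical image list rather than a live cluster:

```shell
# In a real cluster, the image list would come from:
#   kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"
images="registry.k8s.io/coredns/coredns:v1.10.1 k8s.gcr.io/pause:3.7 k8s.gcr.io/kube-proxy:v1.24.0"

# One image per line, keep only those still pulled from the old registry.
echo "$images" | tr ' ' '\n' | grep '^k8s.gcr.io/' | sort
```

Any lines this prints are images whose manifests still need to be repointed at registry.k8s.io.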
https://redd.it/11sm10g
@r_devops
Kubernetes
k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know
On Monday, March 20th, the k8s.gcr.io registry will be redirected to the community owned registry, registry.k8s.io .
TL;DR: What you need to know about this change On Monday, March 20th, traffic from the older k8s.gcr.io registry will be redirected to registry.k8s.io…
Question About Linking Repository To CI/CD
So I have a user account on the company Gitlab server.
When I want to link a repository to the company CI/CD tool by adding a custom private SSH key to the CI/CD, am I adding the private SSH key that is linked to the public key of my own user account in GitLab?
Am I correct in assuming this can't be right, since every time the CI/CD tool uses that repo it will have to pull it via my own GitLab user account?
If this is the incorrect way to do it, what is the correct way to do it?
https://redd.it/11qescr
@r_devops
Posted by u/DevOps_Noob1 - No votes and 7 comments
Test data for performance testing
There's some overlap here with data engineering and QA, but I'm more looking for information about how this problem is addressed in other companies and the role of devops/platform in it.
We badly need performance tests. Our service is used by hundreds of thousands of people all around the world. It has fallen over more times than I care to admit and I'm still a bit gobsmacked that we don't have any.
A sticking point is that the data in our non-production env, where we would run these tests, has nothing like the same volume as production. We have many production RDS databases running on our platform. Our dot on the horizon is for the data from all these DBs to be ingested into a data warehouse, from where it can be forwarded to multiple endpoints. One of those would be the DBs in a non-production environment, with a masking layer in between to scramble any sensitive columns. I'm glad we're agreed on the plan, but it feels quite ambitious, and the data team building it aren't going to have it ready for a long time.
In the meantime, we need something a bit more straightforward. My first thought is to generate dummy data with a similar volume as production. It wouldn't be as good as data sourced from production but it would still allow us to get some value out of performance tests. Creating it looks to me like something that would be driven by developers and QAs, but I have little experience of doing it myself so I'm not sure how feasible it really is.
Can anyone share anything about how they've seen this problem tackled? Also setting up performance tests seems to be a task that involves different expertise working together (dev, platform, QA, data etc), so I'm curious about the different responsibilities that each role typically takes on. Thanks.
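For the stopgap, a seeded generator is often enough to get order-of-magnitude-correct volume (hypothetical schema below; real columns would mirror your RDS tables, and libraries like Faker produce more realistic distributions):

```python
import random
import string

def fake_rows(n: int, seed: int = 0):
    """Yield n dummy customer rows for load-testing a database.

    Seeded so the data set is reproducible: two test runs against
    the same seed exercise the same rows.
    """
    rng = random.Random(seed)
    for i in range(n):
        name = "".join(rng.choices(string.ascii_lowercase, k=8))
        yield {
            "id": i,
            "name": name,
            "email": f"{name}@example.com",
            "amount_cents": rng.randrange(100_000),
        }
```

The trade-off the post already notes applies here: uniformly random values miss production skew (hot rows, long-tail customers), so results are a floor, not a faithful simulation.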
https://redd.it/11so5pi
@r_devops
Posted by u/jjsmyth1 - No votes and no comments
Stuck on this fluentd parsing issue
tl;dr: I want to pull 2 fields from a log file that is a mix of json and log headers.
I'm a bit new to doing anything with fluentd beyond fancy regex. Now I'm trying to parse some logs that are partially JSON, extract a few fields, and forward them on to their destination (stdout for now). Here's a log sample:
2023-03-09 00:00:00,029 (threadpool-12345) INFO [HiIAmALog] {statusMessage":"uh oh","status":"FAIL","totalTime":5,"code":34}
And here's my source:
<source>
@type tail
@id tail_log
tag log
path /tmp/log
pos_file /tmp/log.pos
time_format %Y-%m-%dT%H:%M:%S.%L%Z
keep_time_key false
read_from_head true
open_on_every_update true
<parse>
@type none
</parse>
</source>
I'm chopping off everything before the JSON string using a filter and a Ruby gsub:
<filter log>
@type record_transformer
enable_ruby
<record>
message ${record["message"].gsub(/20.*HiIAmALog\] /,'')}
</record>
</filter>
This gives me a nice, clean JSON string as output:
{message":"uh oh","status":"FAIL","totalTime":5,"code":34}
Next, I'm trying a filter like this to get just statusMessage and code:
<filter log>
<record>
statusMessage ${record["statusMessage"]}
code ${record["message"]["code"]}
</record>
@type record_transformer
enable_ruby
</filter>
I know that at this point this is just a string, not a JSON object, so I can't actually index into the fields. I've tried using Ruby's to_json method to transform it, but it's not working. Does anyone have any suggestions? I've been banging my head on this for too long. Thanks in advance for any help you can give.
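For what it's worth, one hedged sketch of a way out, assuming fluentd's built-in parser filter plugin: instead of indexing into the string, re-parse the cleaned message field as JSON in a second stage, after which its keys become real record fields.

```text
<filter log>
  @type parser
  key_name message        # parse the field produced by the gsub filter
  reserve_data true       # keep any other fields on the record
  <parse>
    @type json
  </parse>
</filter>
<filter log>
  @type record_transformer
  <record>
    statusMessage ${record["statusMessage"]}
    code ${record["code"]}
  </record>
</filter>
```

One caveat: the sample output shown above appears to be missing the opening quote before the first key ({message":…), which any JSON parser will reject, so that would need fixing first.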
https://redd.it/11snkem
@r_devops
Posted by u/zerokey - No votes and no comments
How often do you do deployments at your startup/company? A poll!
Just to get a feel for how DevOps/SRE culture has impacted the deployment frequency at various companies/startups.
Thank you very much for your answer!
View Poll
https://redd.it/11spacb
@r_devops
Posted by u/Bubbly_Penalty6048 - No votes and no comments
Transfer from Ops to DevOps
Let's say I am working as an «operations operator» (directly translated; no idea what the actual English title is) for an internet company while studying for a software development bachelor's on the side. How hard would it be to get a DevOps job right after graduating?
https://redd.it/11qh0mu
@r_devops
Posted by u/Bjosk98 - No votes and 6 comments
any pentester that you would recommend?
Preferably modern ones that can pentest desktop and mobile.
https://redd.it/11su6uv
@r_devops
Posted by u/linux_n00by - No votes and no comments
An adventure with SLOs, generic Prometheus alerting rules, and complex PromQL queries
I'm working on a library called Autometrics that makes it easy to add metrics to a code base and recently worked on support for SLOs/alerts. We ended up with a solution that enables us to have a single set of Prometheus recording/alerting rules that will work for any autometrics-instrumented project and the libraries use some fun label tricks to enable specific rules.
I wrote up a blog post about this experience here in case others are interested: https://fiberplane.com/blog/an-adventure-with-slos-generic-prometheus-alerting-rules-and-complex-promql-queries
https://redd.it/11svszk
@r_devops
GitHub
Autometrics
Easily add metrics to your code that actually help you spot and debug issues in production. Built on Prometheus and OpenTelemetry. - Autometrics