Manage supervisord running inside different containers through my app?
I have several docker containers whose entrypoints are `supervisord`, with multiple programs running on each via their config files. I'm trying to build a unified control panel of sorts inside one of the containers, which essentially runs a Flask app. From this control panel, I want to view the programs running under supervisord in its own container and the other containers, and also be able to stop/start/restart the programs as required.
How can I go about achieving this?
https://redd.it/1006eqz
@r_devops
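Supervisord can expose an XML-RPC interface over HTTP, which is one way such a control panel could work. Below is a hedged Python sketch; the host names, port, and the assumption that each container's supervisord.conf enables `[inet_http_server]` are illustrative, not taken from the post.

```python
from xmlrpc.client import ServerProxy

# Assumes each container's supervisord.conf enables the HTTP interface, e.g.:
#   [inet_http_server]
#   port = 0.0.0.0:9001
# and that the Flask container can reach the others on the Docker network.
HOSTS = ["app1:9001", "app2:9001"]  # hypothetical container host:port pairs

def rpc_url(host):
    """XML-RPC endpoint supervisord serves when inet_http_server is enabled."""
    return f"http://{host}/RPC2"

def list_programs(host):
    """Return (name, state) pairs for every program under one supervisord."""
    server = ServerProxy(rpc_url(host))
    return [(p["name"], p["statename"]) for p in server.supervisor.getAllProcessInfo()]

def restart_program(host, name):
    """Stop then start a single program by name."""
    server = ServerProxy(rpc_url(host))
    server.supervisor.stopProcess(name)
    server.supervisor.startProcess(name)
```

A Flask view could loop over HOSTS, call list_programs, and render the results; stop/start/restart map directly onto the supervisor.stopProcess and supervisor.startProcess RPC methods.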
Best way to redeploy containers on server after build in TeamCity succeeds?
Hey, I am currently running a stack with Docker Compose deployments on my server that uses Watchtower to automatically redeploy new images built in TeamCity. I wanted to get into webhooks, as they give me more control. I was using the Dockerized version of https://github.com/adnanh/webhook, but I could not get it working right. Is there another way to trigger Docker Compose redeployments from the internet?
https://redd.it/1006shq
@r_devops
Right way for multiple Jenkins job dependency management
I am working on a complex CI system for a huge project with several git projects that have to be built together. As part of the CI process, I have many different Jenkins jobs. Now I am looking for a better way for these jobs to depend on each other, so that I can start a flow whenever an individual pull request is created. What would be the right way to do this in Jenkins? Upstream/downstream jobs are too tightly coupled, and I would like one central place where I can manage these dependencies rather than inside each Jenkins job. I have heard Apache Airflow can be used for workflow management, but I want to know if there is an easier way to manage this in Jenkins itself.
https://redd.it/1006doo
@r_devops
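One pattern that keeps the dependency graph in a single place is an "orchestrator" pipeline that calls the existing jobs with the `build` step. The job names below are hypothetical; this is a sketch of the idea, not a drop-in config.

```groovy
// Hypothetical orchestrator Jenkinsfile: the fan-out lives here,
// not in upstream/downstream triggers scattered across jobs.
pipeline {
    agent any
    stages {
        stage('Libraries') {
            steps {
                build job: 'lib-core'   // hypothetical job names
                build job: 'lib-api'
            }
        }
        stage('Services') {
            parallel {
                stage('svc-a') { steps { build job: 'svc-a' } }
                stage('svc-b') { steps { build job: 'svc-b' } }
            }
        }
    }
}
```

Triggering this one pipeline from a pull-request webhook gives a single flow per PR, while each leaf job stays unchanged.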
Devops intern
Hello there :) I'm currently in my final year (master's degree) and I wanted to ask where I can apply for a DevOps internship (remote or non-remote). It doesn't matter if it's paid or unpaid; I just want to learn more.
https://redd.it/1008rw1
@r_devops
Documentation: Any tips on that (especially for DevOps) ?
Hello everyone!
I'm a Junior DevOps Engineer currently working at a software house, so the amount of DevOps/infrastructure documentation is growing as the number of clients grows. I just pulled an all-nighter making some docs, and I don't think that will be sustainable in the long run.
My current setup is MkDocs with plugins to export the docs to PDF, based on past projects. The sole reason for using MkDocs (as my head of engineering said) is that it can be deployed and exported to PDF.
Incremental docs writing has been added to my "to be remembered" list for future projects; do you have any other tips? I'm open to other methods or tools to optimize the process of making docs. Maybe there is a markdown editor with collaboration capabilities? Sending exported PDFs back and forth to the PM is a bit tedious :)
Hopefully you've got some tips in hand, thank you!
https://redd.it/zzo6gz
@r_devops
Which is a more Valuable Certification?
Just passed the Hashicorp Terraform Associate certification!!!
This wasn't difficult at all, considering I've been using it in my home lab's private cloud (ESXi) for nearly 3 years, using the vSphere provider. Managed to ace it with 90%.
Two other things I use and relatively enjoy: Python and Kubernetes.
I have a 6-node cluster running in my home-lab environment which I deploy my Python apps to.
If you had a choice of:
CKAD (Certified Kubernetes Application Developer)
or
PCAP (Certified Associate in Python Programming)
Which cert is more valuable in the DevOps world?
P.S.: Background: Infrastructure/Networking Support Engineer with a BIG appetite for automation over everything, transitioning to DevOps…
https://redd.it/100dtti
@r_devops
Taming the cost of observability
My organization is currently using Grafana and Elastic, and our observability spend is not scaling with the size of our application and infrastructure. I am guessing we are not unique in being unable to justify the ROI on observability as we scale.
How are others taming their observability spend as they scale? Is it an artifact of the tools we are using (Grafana and Elastic), or is it just how things are?
Any pointers will help. Thanks
https://redd.it/zzlpdp
@r_devops
Set of PowerShell scripts to trigger build and deploy releases in parallel in Azure DevOps from the command line
Hello,
I wanted to share a new script I developed to help streamline the deployment process in Azure DevOps. This is the first script I have developed and shared with others, so I am still learning and trying to improve.
As a developer, I often found myself manually deploying code to different environments by creating a release, approving it, and then waiting for it to complete, which can be time-consuming and error-prone. To address this, I created a PowerShell script that lets me deploy releases in parallel in Azure DevOps from the command line. This means I can trigger multiple releases at once and see the status of all of them on a single screen, rather than deploying each one individually.
I hope others can find value in this script, and I welcome any suggestions or ideas for improvement. As a first-time script developer, I am open to feedback and grateful for the opportunity to learn and grow.
Thank you for reading, and I hope this script can be helpful to you in your work with Azure DevOps. Let me know if you have any questions or feedback.
https://github.com/thangeshbabu/hydra
https://redd.it/zzrehe
@r_devops
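The scripts drive the Azure DevOps REST API; for illustration, here is a hedged Python sketch of the underlying "create release" call. The organization, project, and definition ID are placeholders, and the PowerShell original may structure this differently.

```python
import base64
import json
from urllib import request

ORG, PROJECT = "my-org", "my-project"  # hypothetical organization/project
API = f"https://vsrm.dev.azure.com/{ORG}/{PROJECT}/_apis"

def release_payload(definition_id, description):
    """Request body for the Releases - Create REST call (api-version 7.0)."""
    return {"definitionId": definition_id, "description": description}

def create_release(definition_id, pat, description="triggered from the CLI"):
    """Create one release; a PAT is sent as the password of a Basic auth pair."""
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = request.Request(
        f"{API}/release/releases?api-version=7.0",
        data=json.dumps(release_payload(definition_id, description)).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Running several such calls concurrently (threads here, or PowerShell jobs in the original) is what gives the parallel fan-out; each response carries a release id that can be polled for status.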
Do Devops Need an Internal Developer Portal?
There is a lot of focus on Internal Developer Portal solutions, and a handful of use cases for a developer's use of a developer portal.
I'm the founder of Port (operating in this area). Lately I have had many conversations with DevOps teams across different organizations that surfaced several compelling use cases for a developer portal on the DevOps side.
For example:
- DevOps need a centralized, single source of truth for the software architecture (microservices, environments, deployments, cloud resources, regions, and more).
- DevOps need one interface for change management, to keep track of changes and see their history across the entire stack: deployments, infrastructure modifications, versions, configurations, etc.
- DevOps need visibility for troubleshooting and root cause analysis: since all metadata is managed in a single source of truth, root cause analysis becomes easier.
- FinOps and cost control: seeing assets in the developer portal with their associated owners lets DevOps view cloud expenses from the organizational structure's point of view.
I wrote a short piece about it, and I would love to hear your point of view on portals for DevOps: is it one solution serving developers and DevOps with different views, or a separate solution?
https://www.getport.io/blog/do-devops-need-an-internal-developer-portal
https://redd.it/100hyr3
@r_devops
Where to learn about about k8s and EKS/ECR?
For one of my courseworks I had a cloud project, and it interested me more than any of the other courses I have taken. But the Kubernetes stuff confused me, and so did the coursework, which was about deploying a k8s cluster using minikube and building a CI/CD pipeline on any cloud service. I attempted this project, deployed a small Flask app, and dockerized it before getting stuck at the ECR/EKS stuff. I googled a ton of tutorials on YouTube and Google, but it was super confusing. I guess it's because I'm jumping right in without learning the basics of Kubernetes.
I want to learn more about cloud stuff and DevOps. How do I learn these technologies? I've tried YouTube, but I feel like half of the videos are outdated or jump right in and leave me confused. Any good courses, from start to end, about Kubernetes and AWS services like ECR/EKS? I've looked at some on Udemy but am unsure which ones are good.
TL;DR: Any course recommendations starting from scratch all the way to deploying on EKS/ECR and Kubernetes.
Thanks in advance.
https://redd.it/100hlu0
@r_devops
Monthly 'Shameless Self Promotion' thread - 2023/01
Feel free to post your personal projects here. Just keep it to one project per comment thread.
https://redd.it/100p6ma
@r_devops
SRE: What tool do you use for Incident Response Runbook/Playbook
Are there any SREs/admins who can share what they use for their incident response automation and playbooks?
I am familiar with security incident playbooks; there we have a category of tool called "SOAR" that can do process-flow-based (semi-)automation and manual activities during a security incident.
But on the SRE side, what tool do you use to document runbook "checklists" or process flows, and how do you automate some of the responses?
https://redd.it/100le28
@r_devops
Assignment from technical interview could have been used.
Alright.
A while back I asked some questions about what is normal for technical interviews. In the meantime I have landed a job, but I'd like to share an experience I had so others might learn from it as well.
One of the applications I made was for a position as a full stack engineer. They gave me a huge take-home assignment after the first interview, and I got the weekend to work on it. I didn't want to spend a lot of time on it, but since the first interview went great, I decided to do it anyway.
I am not very strong in frontend and had stated that in the first interview. They were fine with it and said my work and assignment would be geared towards backend. I got 1 backend question, which was basically just a copy-paste from LeetCode. The rest of the assignment was mostly fixing up a bunch of sh*t in React: a poorly performing chat, issues with props drilling, misused hooks. But really a LOT of stuff. I'd say a solid 10 to 12 hours of work.
I managed to get it all sorted over the weekend, albeit very annoyed, and handed it in. After that, total radio silence. After several weeks the recruiter came back and told me, without further feedback, that I was not hired.
Funny how the chat page on their website is now working correctly.
From now on, every employer who comes up with assignments that take more than 2 hours of my time can stick it where the sun don't shine. I'd rather be unemployed.
https://redd.it/100xhib
@r_devops
Pass values from Terraform to Argocd/Kustomize
My Terraform module produces outputs that I would like to inject into my Kubernetes YAML files.
For example: cert-manager on AWS with IRSA authentication via Kubernetes service accounts.
So first I need to create the IAM role with the correct permissions, then create the namespace for cert-manager, then create the service account and pass the IAM role ARN via the Kubernetes provider.
Then in the Argo CD repo, I set the cert-manager Helm chart values to use the existing serviceAccount created by Terraform.
So far so good; I'm okay with this.
But now I need to pass the ARN of the certificate to the cert-manager issuer for AWS. I cannot create the issuer object without first creating the CRDs. Sure, I can install the CRDs alone in Terraform and pass the ARN there, but then I have a vague and messy setup where some stuff is in the Terraform code base and the rest is in the Argo CD repo.
Since the only reason for this split is the need to take Terraform outputs and pass them to Argo CD, is there a way to do it without porting everything into the Terraform world? Does Kustomize have a way to patch things from a ConfigMap?
https://redd.it/100t7i7
@r_devops
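One common bridge (an assumption here, not something from the post) is to have Terraform write its outputs into a ConfigMap that cluster-side tooling can read; the resource and attribute names below are illustrative.

```hcl
# Sketch: publish Terraform outputs where cluster-side tooling can read them.
resource "kubernetes_config_map" "tf_outputs" {
  metadata {
    name      = "terraform-outputs"   # hypothetical name
    namespace = "cert-manager"
  }
  data = {
    certificate_arn = aws_acm_certificate.main.arn  # hypothetical resource
  }
}
```

Note that Kustomize renders purely from files in the kustomization, so it cannot read a live ConfigMap; the remaining glue is typically a small job that reads the ConfigMap at deploy time, or an operator that consumes it.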
How do the odds of interruption on a spot instance scale over time?
AWS, for instance, quotes a "less than five percent chance" of interruption for many of its spot types. But they provide no information about how long they expect a typical instance to run if it isn't preempted, so I'm not sure what to make of it. Specifically, I'm considering using Google spot instances for CI: it'd be really useful to know the odds of interruption for a five-minute job vs a ten-minute one vs one that lasts an hour, and so on. Can anyone share anecdotal information on this?
https://redd.it/1011523
@r_devops
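The quoted figure is a trailing-frequency statistic, not a per-hour rate, but as a rough illustration: if interruptions were memoryless with some per-hour probability p, a job of t hours would survive with probability (1 - p)^t. A sketch under that (strong) assumption:

```python
def survival(p_hourly: float, hours: float) -> float:
    """Chance a job finishes uninterrupted, assuming a constant,
    memoryless per-hour interruption probability p_hourly."""
    return (1.0 - p_hourly) ** hours

# Under a (hypothetical) 5%-per-hour rate, short CI jobs fare far better
# than long ones: a 5-minute job survives ~99.6% of the time, an 8-hour
# job only ~66%.
```

The real curve depends on capacity churn in the chosen zone and instance type, which is exactly the data the providers don't publish, hence the question.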
Understanding kubernetes labels
I came across the following deployment configuration in the Kubernetes docs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx # 1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx # 2
  template:
    metadata:
      labels:
        app: nginx # 3
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I don't get the purpose of specifying app: nginx three times in the above deployment configuration. The same web page says:
> The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. In this case, you select a label that is defined in the Pod template (app: nginx).
I guess this maps #2 to #3. If I am correct, then what is the purpose of #1? Can someone please explain?
https://redd.it/100kika
@r_devops
Use Docker with full IPv6 support
I've written a blog post on how to configure Docker so that containers will see the correct IPv6 address of incoming requests.
In the default configuration, it is just the IPv4 address of the Docker interface.
Maybe it's helpful for someone :)
https://www.manuel-bauer.net/blog/docker-with-full-ipv6-support
https://redd.it/100upzi
@r_devops
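For context, the daemon.json keys Docker documents for enabling IPv6 are ipv6 and fixed-cidr-v6; the prefix below is the documentation example, not necessarily the one from the linked post:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```

After editing /etc/docker/daemon.json, the daemon needs a restart, and user-defined networks need their own IPv6 subnets to hand out addresses.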
Anomaly Detection - A human+machine approach
Hey folks!
I am a statistician/data scientist who just started working in the SRE domain. I see widespread blind use (misuse) of ML algorithms. I mean, just slapping Prophet or some other forecasting model on a metric in Grafana will obviously flag every slight perturbation. NOISE!
How can we inject more meaning and domain knowledge into this while still enjoying the automated ease of algorithms? With simple algorithms that let users get exactly the alerts they seek from a metric?
Here are a few algorithm examples:
a) An algo that detects high/low values (could account for seasonal patterns too)
b) An algo that detects a change in baseline behaviour
c) An algo that detects unexpectedly missing data
d) An algo that detects a rogue metric that behaves differently from its peers
e) An algo that detects an increasing/decreasing trend
Domain experts can choose the algorithm that best describes "what is broken" for a metric.
Do you think there is a need of such algorithms that allow users to get alerts that they seek?
A) No!
B) Maybe
C) Yes!
https://redd.it/100owkn
@r_devops
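As one concrete shape for (a), here is a hedged sketch of a rolling z-score detector; the window size and threshold are arbitrary illustrative choices, and a seasonal variant would compare against same-hour-of-day history instead:

```python
def zscore_alerts(values, window=24, threshold=3.0):
    """Indices where a value deviates from the trailing window's mean
    by more than `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mean = sum(hist) / window
        # Floor the std so a perfectly flat history still flags a spike.
        std = max((sum((x - mean) ** 2 for x in hist) / window) ** 0.5, 1e-9)
        if abs(values[i] - mean) / std > threshold:
            alerts.append(i)
    return alerts
```

The appeal of the "pick the algorithm that names the failure mode" idea is that each of (a)-(e) stays this simple, while the domain expert (not a generic forecaster) decides which one applies to a given metric.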
JWT Token
Is it possible to obtain a JWT token if I store user credentials in an RDS Aurora database?
https://redd.it/10197li
@r_devops
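Where the credentials live (Aurora or anywhere else) is independent of token issuance: after the password check against the database succeeds, the app signs the token itself. A stdlib-only HS256 sketch, purely illustrative (in practice a vetted library such as PyJWT is preferable):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(claims: dict, secret: bytes) -> str:
    """Minimal HS256 JWT: base64url(header).base64url(payload).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"
```

The claims (e.g. a user id and expiry) would come from the row fetched from Aurora; verification just recomputes the signature with the same secret.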
Looking for guidance (container management)
Hello there!
I'm currently running a bunch of projects on a close-to-regular Docker host using Docker Compose. This includes Traefik (as reverse proxy), Home Assistant, a Matrix setup, and some more things. Nothing business-critical; everything is for my own private fun. That host is not very efficient: it eats too much power for not enough performance. So, I built a new server.
This new server is running Windows Server (for reasons) with Hyper-V. I'm now looking for suitable alternatives for running the mentioned Docker containers. This can go anywhere from running a Linux VM with Docker and Compose up to running a single-node k8s cluster.
I'm not sure which way to go from here. So, I'm asking you: tell me your ideas. No matter if they are crazy or not, as long as they make sense I'll consider them.
Oh, and happy new year!
https://redd.it/100ow0n
@r_devops
envsubst with template file vs using CD Tools
Hi, at my work we are using GitLab CI to build our pipelines, where we build, test, upload images, etc., AND deploy.
The deploy job (k8s resources) uses deployment/service/ingress templates like this:
apiVersion: v1
kind: Service
metadata:
  name: $NAME
spec:
  ports:
  - protocol: TCP
    port: $PORT
    targetPort: $TARGETPORT
These are stored in a templates repository; we clone the repo in the pipeline and run envsubst to substitute the desired values for $NAME, $PORT, etc.
Then we apply the manifest to the cluster (GitLab pipeline):
deploy:
  stage: deploy-dev
  image: $IMAGE
  script:
    - git clone template-repo
    - cd template-repo
    - envsubst '$NAME $PORT $TARGETPORT' < service.yml | kubectl apply -f -
I'm aware that CD tools like Flux and Argo exist for the 'deploying part', but I can't understand why that is 'better' than having templates and filling in the desired values in the pipeline.
https://redd.it/100lwck
@r_devops
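For comparison, the same parameterization in Kustomize lives in a per-environment overlay rather than in the pipeline; this is a hedged sketch with hypothetical names, not the poster's repo layout:

```yaml
# overlays/dev/kustomization.yaml (hypothetical layout)
resources:
  - ../../base          # base holds the un-parameterized Service/Deployment
patches:
  - target:
      kind: Service
      name: my-service   # hypothetical resource name
    patch: |-
      - op: replace
        path: /spec/ports/0/port
        value: 8080
```

The usual argument for Flux/Argo on top of this is not the templating itself but reconciliation: the cluster converges on what is in git even after a failed pipeline or a manual kubectl edit.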