Tagging releases in AWS Fargate
We have created a release-config.properties file where we manually update the date so that we can tag our releases and keep track of them. This works for us, but I want to automate the process instead of updating the file in GitHub by hand before merging the PR. How can I do this, or is there another way to do it?
https://redd.it/108ycm4
@r_devops
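One way to remove the manual step is to let CI derive the tag on merge instead of reading it from the file. A minimal sketch; the tag format and the CI wiring are assumptions, not your existing convention:

```shell
# Derive a release tag from today's date plus the short commit hash so that
# nobody has to edit release-config.properties by hand before merging.
set -eu

DATE=$(date -u +%Y.%m.%d)
SHORT_SHA=$(git rev-parse --short HEAD 2>/dev/null || echo local)
TAG="release-${DATE}-${SHORT_SHA}"
echo "$TAG"

# In a CI job that runs on merge to main, push the tag back to GitHub:
#   git tag -a "$TAG" -m "Automated release $TAG"
#   git push origin "$TAG"
```

The same tag can then be passed to the Fargate deployment as the image tag, so the running service and the git tag always line up.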
Artifactory Pypi repo uploads in offline environment
I have a janky environment where I have an Artifactory instance hosting a PyPI repo on a network with no internet connection. I'm trying to upload some Python 3 packages and was able to get the wheel files using pip3 download on another machine. I can successfully install them locally with pip install --no-index /dir/<packagename>.whl (or .tar.gz), but I want to be able to install the packages from a requirements.txt and point pip at my corporate Artifactory PyPI repo.
I set up a .pypirc file and have verified that I can authenticate to my Artifactory PyPI repo. Where I'm stuck is understanding what I need to do to upload public packages to it. Do I have to create a setup.py file (per the JFrog doc) with all of the metadata for each package? There are dozens, but I'll do it if that's the fastest way. Appreciate any help!
https://redd.it/10976nh
@r_devops
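You shouldn't need a setup.py per package: the metadata is already embedded in each wheel/sdist, so twine can upload the files pip3 download fetched as-is. A sketch, with the hostname and repo name as placeholders:

```shell
# Upload pre-built wheels/sdists to an Artifactory PyPI local repo with twine.
# No per-package setup.py is needed -- the metadata already lives inside each
# distribution file. Hostname and repo name are assumptions.
ARTIFACTORY=https://artifactory.example.com/artifactory
REPO=my-pypi-local
UPLOAD_URL="${ARTIFACTORY}/api/pypi/${REPO}"
INDEX_URL="${UPLOAD_URL}/simple"
echo "$INDEX_URL"

# Upload everything pip3 download fetched (run where the files live; twine
# picks up credentials from .pypirc or TWINE_USERNAME/TWINE_PASSWORD):
#   twine upload --repository-url "$UPLOAD_URL" /dir/*.whl /dir/*.tar.gz
#
# Afterwards, install from requirements.txt against the repo's simple index:
#   pip3 install -r requirements.txt --index-url "$INDEX_URL"
```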
JFrog
ARTIFACTORY: How to deploy a PyPI package to the Artifactory's local repository?
Yuvarajan Johnpaul 2023-01-22 11:09 This article describes the steps to configure your Python client to publish packages to the JFrog Artifactory's PyPI repository. What's needed for a Python package to be published? Step-1: First, you need to add Artifactory…
Monitoring infra cost: which tool do you use?
Hey everyone,
To monitor the costs of your infrastructure, what tools do you use? Those provided by cloud providers (e.g. AWS Cost Explorer), third-party services, or something you built yourself?
FYI, I'm asking because we are building an open-source tool for cloud cost monitoring and we are trying to understand what people use today.
https://redd.it/10999w4
@r_devops
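For context on the "cloud provider tools" option: on AWS, Cost Explorer is also queryable from the CLI, which makes it easy to feed into your own dashboards. Dates below are examples; the call needs the ce:GetCostAndUsage permission and incurs a small per-request charge:

```shell
# Query one month of spend, grouped by service, via the Cost Explorer API.
START=2023-01-01
END=2023-02-01
echo "querying ${START}..${END}"

# aws ce get-cost-and-usage \
#   --time-period Start="$START",End="$END" \
#   --granularity MONTHLY \
#   --metrics UnblendedCost \
#   --group-by Type=DIMENSION,Key=SERVICE
```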
Worthness of K8s when running application on single node cluster
I am tasked with rewriting an application developed in decade-plus-old technology. This application lags severely on some days, to the extent that it is rendered unusable. I rewrote some modules of the application (in React and Spring Boot) and demoed them running in minikube.
I was asked:
Q1. Why do I need k8s? Can't I make the application multi-threaded to utilize the full resources of the single node instead of parallelizing it through k8s? Won't k8s on a single node slow the application down instead of making it more responsive?
Q2. What benefits will k8s bring on a single node?
These are the benefits of k8s on a single node that I came up with:
Answer to Q1.
- An IBM research paper shows Docker performance is very close to native, and k8s runs the application in containers, so there should be no significant performance overhead.
- When the application runs in multiple containers on the single node, a crash of one container will not affect the others, which might not be the case with a single multi-threaded process running directly on bare metal.
- K8s automatically restarts crashed containers.
- Some code may not be multi-threaded out of the box. For example, in Java we have to explicitly implement multi-threading with the Thread class and the java.util.concurrent package. With multiple containers running the same application, we get parallelism for free. (Frameworks like Spring Boot may do multi-threading out of the box, but I am talking about parallelizing all application code.)
Answer to Q2.
- Easy environment configuration:
- There will be fewer bugs due to differences between the developer's local environment and the production environment if we use k8s.
- (Given that there is a plan to add more servers in the future) adding a new container that exactly matches the environment will be much easier than adding another machine that exactly matches the configuration.
- Also, if we ever have more than one node in the cluster, changing containers by editing the corresponding k8s YAML and redeploying will be easier than changing the servers manually (uninstalling/installing).
I have the following doubts:
D1. Are my answers above correct?
D2. Is there anything I missed?
D3. I feel the answer to Q2 is sufficient. Does the convenience of environment configuration management mean we should always go for containerization?
https://redd.it/109etr8
@r_devops
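The crash-isolation and auto-restart points are easy to demo even on one node: a Deployment keeps N replicas alive, so killing one pod leaves the others serving while it is recreated. A sketch with illustrative names and image:

```shell
# Write a minimal Deployment: 3 replicas of the same container on one node.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
      - name: web
        image: nginx:1.23
EOF
echo "wrote deployment.yaml"

# kubectl apply -f deployment.yaml
# kubectl get pods -l app=demo          # three pods
# kubectl delete pod <one-of-them>      # the other two keep serving;
# kubectl get pods -l app=demo -w       # watch the replacement come back
```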
anyone take the devops online course from UChicago?
I'm trying to start my sibling down the DevOps career path.
Does anyone have experience with this 8-week online course offered by UChicago?
DevOps | UChicago
If not, what do you recommend?
Thanks!
https://redd.it/109fa13
@r_devops
University of Chicago Professional Education
DevOps
Explore the software life cycle and drive faster, more efficient outcomes.
Air travel across US thrown into chaos after computer outage
https://apnews.com/article/flight-delays-us-faa-updates-5805d15f520de8eadf52abb7b170487f
Anyone with knowledge of this NOTAM system care to share?
https://redd.it/109d3p8
@r_devops
What are your must-have scripts/playbooks for on-prem?
I’m currently working on a Terraform module to automate VMware Windows/Linux VM deployments. Possibly also reference an Ansible Playbook to join our domain and other time-consuming tasks.
What do you guys use to improve your lives tremendously when not using cloud?
https://redd.it/109jnee
@r_devops
Chef Workstation on Ubuntu 22.10
Can you install Chef Workstation on Ubuntu 22.10?
I can't seem to find anything about it online. The only packages I can find are for Ubuntu 18.04 and possibly 20.04, but nothing for newer versions.
https://redd.it/109l34y
@r_devops
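There's no 22.10-specific build, but the Ubuntu 20.04 .deb generally installs fine on newer releases. The version number and download URL pattern below are assumptions; check the Chef downloads page for the current package:

```shell
# Install the Ubuntu 20.04 Chef Workstation package on a newer Ubuntu.
# VERSION is a placeholder -- look it up on the downloads page first.
VERSION=21.10.640
PKG="chef-workstation_${VERSION}-1_amd64.deb"
echo "$PKG"

# wget "https://packages.chef.io/files/stable/chef-workstation/${VERSION}/ubuntu/20.04/${PKG}"
# sudo dpkg -i "./${PKG}" || sudo apt-get -y -f install   # resolve any missing deps
```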
Propagating image changes to a k8s cluster
I have a CI loop in a repository that automatically builds and publishes a container on merge to the main branch. Usually, images are tagged per git hash; however, when they are proven stable, they are additionally tagged with latest.
The deployment for Kubernetes is pointing to the latest tag. How would I automate updating the Kubernetes deployment when a new image is tagged with latest? Or am I simply going about this the wrong way?
https://redd.it/109c7p2
@r_devops
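Both directions can be scripted from the same CI job that applies the latest tag. A hedged sketch with illustrative names:

```shell
# Option (a): keep the deployment on "latest" and force a rolling restart
# whenever CI retags (requires imagePullPolicy: Always on the container):
#   kubectl rollout restart deployment/my-app
#
# Option (b), usually preferred: have CI pin the deployment to the immutable
# per-commit tag, so what is running is always explicit and rollbacks are
# just re-applying an older tag.
IMAGE=registry.example.com/my-app
GIT_SHA=abc1234
echo "${IMAGE}:${GIT_SHA}"

# kubectl set image deployment/my-app my-app="${IMAGE}:${GIT_SHA}"
# kubectl rollout status deployment/my-app
```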
A review I wrote about Sprkl Observability
https://itnext.io/sprkl-more-than-a-vscode-extension-5ee4411fe204
https://redd.it/109bqhb
@r_devops
Medium
Sprkl — more than a VScode extension
Sprkl, the observability tool, has improved a lot in the last months by adding GitHub Actions and Kubernetes integration
Expensive Metrics: Why Your Monitoring Data and Bill Get Out Of Hand
Why do our metric data volume and our bill get out of control? How is it related to cardinality? And how can DevOps and SRE proactively manage it?
I list some cost factors to consider in this blog post:
https://horovits.medium.com/expensive-metrics-why-your-monitoring-data-and-bill-get-out-of-hand-e5724619e3f1
https://redd.it/109dhqb
@r_devops
Medium
Expensive Metrics: Why Your Monitoring Data and Bill Get Out Of Hand
Why does our metric data volume and our bill get out of control? How is it related to cardinality? And how can we proactively manage it?
Ever Reach the Point Where Despite Using Containers You Still Get “Works on my Machine”
I’m on hour 3 of debugging a CI pipeline. The test passes 100% of the time when I call my Molecule test directly, but if I call it through pytest, which we use to parallelize those tests, it fails every time. I didn’t write the pipeline, so I'm mostly just reading code and untangling the spaghetti of how it’s all wired up.
Just thought I’d seek commiseration and funny stories of still hitting the “works on my machine” wall despite using containers.
https://redd.it/109swkt
@r_devops
How can I check if my ISP is filtering some hosts?
I seriously suspect that my ISP is limiting traffic to Reddit, I'd like to prove if this is true.
I'm a backend software engineer, I have some knowledge of networking and infrastructure, and I can handle a terminal. Could someone guide me? I don't know how feasible this is.
Thanks!
https://redd.it/109d7g3
@r_devops
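A practical approach is to compare behaviour on your line against a neutral vantage point (a VPN, a phone hotspot, or a cheap cloud VM). If the same request works there but stalls or resets at home, the ISP path is the likely culprit. A sketch:

```shell
# Probe one suspect host; compare every result with the same commands run
# over a VPN/hotspot.
HOST=www.reddit.com
echo "testing $HOST"

# Does the ISP's resolver answer differently from a public one?
#   dig +short "$HOST" @1.1.1.1
#   dig +short "$HOST"
#
# Where does the connection break: DNS, TCP, TLS, or HTTP?
#   curl -s -o /dev/null \
#     -w 'dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} total=%{time_total}\n' \
#     "https://$HOST/"
#
# traceroute "$HOST" can also show where packets start dying.
```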
I wrote this guide on how to move from TDD to ODD
https://tracetest.io/blog/the-difference-between-tdd-and-odd
E2E testing on the back end is tricky because mocking data on the back end is tricky.
A way to avoid this is trace-based testing where you can run tests and assert against traces. You're really running tests against actual data—no more mocking data and wasting your time.
https://redd.it/108iclz
@r_devops
tracetest.io
The difference between test-driven development and observability-driven development
We’re entering a new era of observability-driven development (ODD), which emphasizes using instrumentation in back-end code as assertions in tests. With Tracetest, you can generate E2E tests from OpenTelemetry-based traces, enforce quality—and encourage velocity—in…
Network security on Azure and GCP
So, in our web app, which is hosted in multiple data centers (DCs), we can post content in a form field. Posting content like "<script>test</script>" works fine in the on-prem DCs. However, in some Azure- and GCP-hosted DCs, the post fails with ERR_CONNECTION_RESET. We checked the firewalls where the post fails and no packets were dropped. Could there be another Azure or GCP network configuration that blocks content containing JavaScript script tags? Where should I begin troubleshooting, given that our web app's codebase is the same in all the DCs?
https://redd.it/109vin6
@r_devops
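A connection reset only for script-tag payloads, with no drops at the network firewall, is the classic signature of a layer-7 WAF (e.g. Azure Application Gateway WAF or GCP Cloud Armor) sitting in front of those DCs. A reproducible curl comparison, with the URL as a placeholder:

```shell
# Send the failing payload and a benign control to each DC and compare.
URL=https://dc1.example.com/form
PAYLOAD='field=<script>test</script>'
echo "POST to $URL"

# curl -sv -o /dev/null -X POST --data-urlencode "$PAYLOAD" "$URL"
# Control request that should pass:
# curl -sv -o /dev/null -X POST --data-urlencode 'field=hello' "$URL"
#
# If only the first request resets, inspect the WAF policy / Cloud Armor
# logs in those environments for XSS-detection rules.
```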
Free OpenTelemetry-based observability for Kubernetes - works great with Okteto (video)
https://youtu.be/XY6Z5l68ZqY
https://redd.it/109wrg5
@r_devops
YouTube
Sprkl integration with Okteto
Today we will demonstrate our integration with Okteto and the benefits of integrating both!
Sprkl is a Personal Observability Platform that provides personalized feedback on your code changes. With Sprkl Personal Observability, you immediately see the impact…
DevOps for ML
MLOps, also known as DevOps for Machine Learning, is an invaluable practice for Data Science teams. This guide sheds light on the adoption and education - MLOps for data science
disclaimer: part of the team.
https://redd.it/109wcjn
@r_devops
kanger.dev
MLOps for Data Science teams
MLOps is essential for data science teams to collaborate, communicate, deploy, and maintain ML models in production reliably and efficiently.
Manage Multiple GitHub Repositories with Renovate and CircleCI
How to automatically update dependencies with PRs on GitHub after successful build and automated tests with Renovate and CircleCI: https://piotrminkowski.com/2023/01/12/manage-multiple-github-repositories-with-renovate-and-circleci/
https://redd.it/109y3n0
@r_devops
Piotr's TechBlog
Manage Multiple GitHub Repositories with Renovate and CircleCI
In this article, you will learn how to automatically update your GitHub repositories with Renovate and CircleCI.
Creating a VPN to control access to a dev environment?
Currently, I have 1 server running Kubernetes (k3s) where I deployed a preview version of a website for the dev team. The site is currently exposed to the internet.
I would like to set up something like a VPN to restrict access to the site from the public internet. The problem is I only have one server, and it sits in a network I don't have direct access to. It just has one public IP assigned, and I manage it through SSH.
What would be an ideal solution for this? I have only seen setups where a VPN gateway fronts a private network of multiple servers, but I have just one.
https://redd.it/108dykv
@r_devops
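With one host reachable only over SSH, the simplest "VPN" is an SSH local forward: bind the preview site to localhost on the server, then each developer tunnels to it. Host and ports below are illustrative:

```shell
# Build the forward command each developer would run.
SERVER=dev.example.com
LOCAL_PORT=8443
REMOTE_PORT=443
CMD="ssh -N -L ${LOCAL_PORT}:127.0.0.1:${REMOTE_PORT} ${SERVER}"
echo "$CMD"

# Each developer runs the command above, then browses https://localhost:8443/.
# For a real VPN on the same single box, WireGuard also works with one peer
# per developer: install wireguard, configure a wg0 interface on the server,
# and firewall the k3s ingress so only wg0 traffic reaches it.
```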
Lambda function and Web API
A Lambda handler has two arguments:
def handler_name(event, context):
    ...
    return some_value
I see the function has two arguments, event and context. How do I pass HTTP parameters to the Lambda function and return an HTTP response?
My objective is to create a Web API using a Lambda function and publish it in API Gateway.
https://redd.it/10a27m8
@r_devops
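With API Gateway's Lambda proxy integration, the HTTP request arrives inside the event dict and the handler's return value becomes the HTTP response. A sketch of wiring it up from the CLI; the function name, account ID, and region are placeholders:

```shell
# Create an HTTP API that proxies every request to the Lambda function.
FN_ARN="arn:aws:lambda:us-east-1:123456789012:function:my-web-api"
echo "target: $FN_ARN"

# aws apigatewayv2 create-api \
#   --name my-web-api --protocol-type HTTP --target "$FN_ARN"
#
# Inside the handler, HTTP parameters then arrive on the event dict, e.g.
#   event["queryStringParameters"], event["body"], event["headers"]
# and the return value must be shaped like:
#   {"statusCode": 200, "headers": {...}, "body": "<json string>"}
```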
Saving load test results for comparison
We've introduced an automated load test pipeline for regularly validating the performance of our website. It's written in .NET and uses Azure Pipeline to run. I'm now looking for a way to properly visualize the results and compare them with previous results.
Does anybody have a suggestion on how to do this properly? Or does anybody know some generic software where you can push data entries (like a JSON object with date, number of requests, average response time, etc.) and build diagrams from them?
https://redd.it/10a4mvk
@r_devops
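One lightweight pattern: append one JSON object per run to a results file kept as a pipeline artifact (or committed to a small repo), then compare the newest run against the previous one. Field names below are illustrative:

```shell
# Record two example runs, one JSON object per line.
RESULTS=loadtest-results.jsonl
printf '%s\n' \
  '{"date":"2023-01-10","requests":120000,"avg_ms":84}' \
  '{"date":"2023-01-11","requests":121500,"avg_ms":91}' > "$RESULTS"
wc -l < "$RESULTS"

# Percent change in average response time vs the previous run, with jq:
#   jq -s '(.[-1].avg_ms - .[-2].avg_ms) / .[-2].avg_ms * 100' "$RESULTS"
#
# For proper diagrams, the same objects push cleanly into InfluxDB/Grafana
# or Azure Monitor custom metrics.
```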