How do you guys securely document your machines?
I've been running into issues working with my machines where it feels like everything lives in the heads of a few people. Any time I need to SSH into an instance in AWS, I have to wait for someone to reply with the IP address, even though my credentials are already whitelisted on that instance, which gets annoying. On top of that, there have been a lot of issues with me thinking things are networked one way when they're actually networked another.
Have you all created any single source of truth, like a diagram of all your machines listing things like their IP addresses and the credentials needed to access them, stored securely so it doesn't just live in people's heads?
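For the AWS part specifically, consistent tagging plus the AWS CLI can replace the "ask someone for the IP" step entirely, and the tags double as living documentation. A minimal sketch, assuming your instances carry a Name tag (the value api-prod here is hypothetical):

```shell
# Look up the public IP of a running instance by its Name tag.
# "api-prod" is a hypothetical tag value -- substitute your own.
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=api-prod" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text
```

The same `--query` approach works for private IPs, subnet IDs, and so on, so the CLI becomes the source of truth instead of someone's memory.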
https://redd.it/ojo0h8
@r_devops
CI/CD stages
I have read several articles and wanted to make sure my understanding of CI/CD stages is correct.
Stage 1: Source, where the pipeline picks up code from the commit that triggered the build.
Stage 2: Build, where the code is built into a WAR file, Docker image, etc.
Stage 3: Test, where the compiled artifact is tested using some sort of unit tests.
Stage 4: Release, where, once the above stage is successful, the build artifact is deployed to the environment.
The same stages would then repeat in the higher environment using the same commit ID, or be completely independent and only trigger once the changes are merged into master.
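That matches how most pipelines are laid out. As a hedged sketch, the four stages map roughly to commands like these (the repo URL, image name, registry, and test entrypoint are all placeholders):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stage 1: Source -- check out the exact commit that triggered the build.
git clone "$REPO_URL" app && cd app
git checkout "$COMMIT_SHA"

# Stage 2: Build -- produce the artifact (here, a Docker image).
docker build -t "myapp:$COMMIT_SHA" .

# Stage 3: Test -- run unit tests against the built artifact.
docker run --rm "myapp:$COMMIT_SHA" ./run-tests.sh

# Stage 4: Release -- publish/deploy only if everything above succeeded
# (set -e aborts the script on the first failing stage).
docker push "registry.example.com/myapp:$COMMIT_SHA"
```

Promoting the same immutable artifact (same commit SHA tag) through higher environments is generally preferred over rebuilding per environment.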
https://redd.it/ojoxat
@r_devops
Automated Change Management for Oracle EBS
Has anyone successfully completed 'continuous deployment/delivery' of patches (customization, standard) to Oracle EBS 12.x without using Oracle AMS?
If yes, what were/are the challenges? Did you have to revamp the automation a lot for changes in the underlying Oracle tools and standards?
https://redd.it/ojqlsm
@r_devops
Getting started in DevOps / WD. What do I search for? What was your first job?
I have a fair understanding of the whole process from planning to completion (not perfect at all). What is a good entry position / what was your first job?
Should I search for "media" as a category? Or what narrows it down to websites?
https://redd.it/ojq1c8
@r_devops
I'm stuck career-wise and I could use some advice.
I'm a sysadmin of 4 years, and this is my first job out of college. I majored in CS and decided I wanted to be more on the operations side of things, and just use my coding knowledge for automation and to get a leg up.
Now I'm at a place where I want to move into Linux Admin/DevOps/SRE/Cloud (it seems that the entire field is moving there anyway), but I don't have the professional dev experience to do so. Like, sure I can just practice a language or something in my free time, but the killer is my lack of enterprise coding experience. I have nothing to put on a resume, and to be honest I feel like I'd be totally stumped in a dev interview.
I definitely don't want to take a gigantic pay cut and become a junior dev/intern, so I just feel completely stuck and helpless at the moment. Any advice is appreciated.
https://redd.it/ojs8jw
@r_devops
Datadog ECS network monitoring
I’ve recently deployed Datadog agents as an [ECS daemon service](https://docs.datadoghq.com/agent/docker/), and it works great, exposing almost all the metrics we need. There are two things I can’t figure out from the docs:
1. How to enable the [Network](https://docs.datadoghq.com/agent/docker/) integration. The docs say it's on by default, but I'm not seeing the metrics come through
2. Enable ENA monitoring. Our containers are using awsvpc networking, so they're being allocated ENAs, which are handling almost all of our traffic. Based on [this PR](https://github.com/DataDog/integrations-core/pull/8331) it seems like there is a collect_aws_ena_metrics config in the network conf.yaml, but that doesn't seem configurable via Docker, and the comment seems to say it only applies to hosts
We could potentially deploy both ECS and host-based agents, but there is some duplication and additional complexity there that would ideally be avoided.
Thanks!
https://redd.it/ojs9wp
@r_devops
PagerDuty Required?
I am currently in a DevOps role that requires me to be on PagerDuty overnight and on weekends. If I want to continue as a DevOps engineer, should I expect every role to require me to be on PagerDuty? Do all companies expect this, or do they understand that that is what SREs are for?
View Poll
https://redd.it/ojuq7v
@r_devops
I developed a tool that allows you to perform automated security audits and code reviews of cloud applications, showing vulnerabilities along an easy-to-follow architectural path through the code.
So, the title speaks for itself.
I've been in constant contact with DevOps engineers due to my field of work, and they've been finding my tool extremely helpful. In simple terms, CodeShield (the name of my tool) performs automated security audits and code reviews of cloud applications, showing vulnerabilities along an easy-to-follow architectural path through the code.
If you're experienced with AWS Cloud-Native Apps, I'm pretty sure you might be interested in the working features it offers you.
The early adopters have been finding it tremendously useful and I could not be more excited for the next phase - consolidation.
If I've caught your curiosity, feel free to visit Codeshield.io to run a free test scan on your code, and let me know how your experience was.
Would love to hear your feedback!
https://redd.it/ojjmr6
@r_devops
AWS RDS
Hey guys,
I am a DevOps engineer at a data company; today I deal mainly with the company's CI/CD pipelines, writing infra as code and getting the company's infra into a better state.
We use Postgres for new services, which are small and run on an RDS instance.
The company started out with a main database in MySQL, which is also an RDS instance. However, this instance is hard at work, since it records many events and a lot of data. After around 2-3 years of activity it reached 1 TB of storage, and it has now been upgraded to scale to 1.5 TB as a lifeline.
From my point of view it seems like this might be a problem. Pricing-wise, does anybody know whether RDS with bigger storage is priced differently from an EC2 instance running the DB with a big EBS volume attached? Would that be better for large storage deployments?
In addition, how would you approach such a problem? Clean up events from the db which are less relevant for today into a cheaper storage solution and ETL them somehow for querying purposes?
I have less experience working with dataset sizes like this, so I'm looking for some guidance.
Thx for replying 🤩☺️
https://redd.it/ojwnlk
@r_devops
Devops Linux course suggestion
Hi Guys,
I work with AWS and Python. I'm currently on a roadmap to become a DevOps engineer. I know Linux basics and wanted to know how much Linux knowledge is essential for a DevOps engineer.
I'm lacking on the networking side. Please suggest some YouTube playlists; paid courses for the same are also OK.
Thanks
https://redd.it/ojxgxo
@r_devops
Recommendation for a monitoring tool
At our company, we've been using New Relic from the start. We kept getting good deals and it was working pretty well, but starting from August 2021 the plans have changed and the cost is too high for our usage. Please suggest APM tools. I'm aware of Datadog, but not sure if it's better/cheaper than New Relic.
View Poll
https://redd.it/ojw7ij
@r_devops
Advice on how to list happy paths and negative paths for test cases in Azure DevOps?
Hi Everyone!
In our current setup, we have concerns about how test cases are being added in Azure DevOps. Right now, the test cases lean towards the "happy path", which potentially misses other test cases.
One general example of a "happy path" in our case is that it follows a step-by-step procedure that satisfies the acceptance criteria of a product backlog item without leading to an error.
Do you have any suggestions or thoughts on how test cases should be segregated and listed in Azure DevOps to cover not only the "happy path" but also the "negative path"?
Thanks!
https://redd.it/ojz99m
@r_devops
SonarQube Community Edition
Hi guys. Does anyone here use SonarQube Community Edition? Is the Developer or Enterprise edition worth it, in your opinion?
https://redd.it/ojztpv
@r_devops
Configuration as code
We are building a microservices project and need to upgrade it and create automated release notes. What are the best practices?
I am considering a general database for dev and test environment to hold all secrets and configurations and do a diff between them at every release to ease the process.
I would love to have some ideas.
https://redd.it/ok03s5
@r_devops
Still using Docker Hub? You can now publish images to the GitHub Packages container registry
Hi folks 👋
I guess that like a lot of you, I've been pushing my Docker images to Docker Hub, which has been and still is a good registry.
Though, if you've been following open source ecosystem developments recently, GitHub Actions, and with it the GitHub Packages registry, is being more widely adopted.
I wrote up a blog article on how to manage your Node.js Docker images in GitHub Packages using GitHub Actions, which includes building and publishing them to the GitHub packages registry: https://snyk.io/blog/managing-node-js-docker-images-in-github-packages-using-github-actions/
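For anyone who wants the short version without the full article, the push itself boils down to three commands (OWNER/myapp is a placeholder, and the token needs the write:packages scope):

```shell
# Authenticate to the GitHub Packages container registry (ghcr.io).
echo "$GITHUB_TOKEN" | docker login ghcr.io -u OWNER --password-stdin

# Tag the local image for ghcr.io and push it.
docker tag myapp:latest ghcr.io/OWNER/myapp:latest
docker push ghcr.io/OWNER/myapp:latest
```

Inside a GitHub Actions workflow, the built-in GITHUB_TOKEN can usually be used for the login step.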
https://redd.it/ok14tp
@r_devops
Cloud IaaS with DevOps pipeline
How do you do your devops pipelines with IaaS?
Speaking based on Azure. I do have ARM templates which declaratively describe desired infrastructure and its configuration. Then I'm deploying applications to infrastructure (whether it is PaaS service like App Service or Azure Kubernetes Service doesn't really matter).
At the beginning, in the simpler cases, my approach was to create a single DevOps pipeline which first applies the ARM templates and then deploys the application. The drawback is that even a very simple application change results in rerunning the ARM templates, and with slightly more complex infra that can take some time even when the infra (ARM) didn't change at all.
With k8s and microservices, it makes even less sense to apply ARM templates with each microservice deployment.
So right now I think it's probably best to have 2 separate pipelines:
1. Pipeline for infrastructure - applies ARM templates; triggered only when the infra has really changed
2. Pipeline for application code - doesn't touch infra at all; just deploys the application/specific microservice
In some cases deploying a new version of the application might also require an infra change (a new component like a queue, Redis, whatever), but I think those are rare cases, and then the pipelines just need to be run in the correct order.
Any thoughts based on your experience?
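One way to keep a single pipeline but skip the ARM stage when the infra didn't change is to gate it on the changed-file list. A minimal sketch, assuming templates live under an infra/ directory (that path is an assumption; most CI systems also offer declarative path filters that achieve the same thing):

```shell
#!/usr/bin/env bash
# Succeed (exit 0) if any changed file on stdin is under infra/.
changed_infra() {
  grep -q '^infra/'
}

# Gate the ARM stage on whether infra files changed in the last commit.
if git diff --name-only HEAD~1 2>/dev/null | changed_infra; then
  echo "infra changed: run the ARM template stage first"
else
  echo "app-only change: skip the ARM stage"
fi
```

The same guard can sit at the top of the infra stage so the app deployment stays fast on code-only changes.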
https://redd.it/ok1bss
@r_devops
Deploy docker-compose from Github Action to remote server
I want to be able to deploy the latest docker-compose setup from GitHub Actions to a remote QA server that is accessible through SSH. One option I can think of is to pull the file from git onto the remote server and run docker-compose up manually. Are there any standard options available?
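A common low-tech pattern is exactly that, but scripted from the Actions job itself: copy the compose file over and restart the stack through SSH. A hedged sketch (the host, user, and path are placeholders; the runner needs the SSH private key provisioned, e.g. from repository secrets):

```shell
# Copy the compose file to the QA host, then pull fresh images and
# recreate the stack over SSH.
scp docker-compose.yml deploy@qa.example.com:/srv/app/docker-compose.yml
ssh deploy@qa.example.com 'cd /srv/app && docker-compose pull && docker-compose up -d'
```

An alternative is pointing the local client at the remote daemon with DOCKER_HOST=ssh://deploy@qa.example.com, which avoids copying files at all.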
https://redd.it/ok0zxh
@r_devops
Install specific version of a package
I have a pretty simple Puppet manifest for packages that need to be installed. It has an array of package names, and then ensures they're installed:
$basicpackagelist = ['p7zip-full', 'unzip', 'python3', 'tzdata', 'make', 'build-essential']
exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}
Exec['apt-update'] -> Package <| |>
package { $basicpackagelist: ensure => 'installed' }
Thing is, some packages need to be installed on a specific version.
In that same manifest, is it possible to create some sort of dictionary that would specify the version that the package has to be?
Thanks ahead!
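Yes: in Puppet you can replace the flat array with a hash mapping package name to desired version ('installed' meaning "any version"), then iterate over it with each. A sketch under that assumption (the pinned version shown is made up; the apt-update ordering from above still applies):

```puppet
# Hypothetical version pins -- 'installed' means any version is fine.
$package_versions = {
  'p7zip-full'      => 'installed',
  'unzip'           => 'installed',
  'python3'         => '3.8.2-0ubuntu2',
  'build-essential' => 'installed',
}

$package_versions.each |String $name, String $version| {
  package { $name:
    ensure => $version,
  }
}
```

Since package's ensure accepts a version string directly, no extra module is needed for this.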
https://redd.it/ok0ufp
@r_devops
Would you rather give your code or your container images to a third party service?
Hello!
I need some help from the collective experience of the DevOps people!
I wrote a service called WunderPreview which gives you a running staging environment for all your branches/pull requests/commits. It works similarly to a CI system in that it's triggered by GitHub when a change in your code happens; WunderPreview then grabs your code, builds and deploys your Docker container, and gives you the URL of the running staging system.
We spoke to a lot of people, and some were saying: No, I don't want to give you access to my code; can't you just grab the Docker image built by our CI system and just deploy that?
I now want to know what works better for you and your company:
A) giving a third-party service access to your code so it can build and deploy your containers
or
B) giving access to your Docker images so the service deploys your existing container images
Which version do software companies you work for prefer?
Thanks for the help!
https://redd.it/ok59xc
@r_devops
How to deploy Hashicorp Vault on Kubernetes?
I started a blog series where I show you how to deploy Hashicorp Vault into Kubernetes using a Helm chart.
In this first part we will explore using the Vault Helm chart to deploy it on our local Kubernetes cluster.
https://marcofranssen.nl/install-hashicorp-vault-on-kubernetes-using-helm-part-1
In the second part I will cover deploying on AWS EKS using a highly available configuration, utilizing AWS KMS for auto-unsealing of Vault.
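For readers who just want the local/dev part, the core of the chart install is only a few commands (the repo and chart names come from HashiCorp's own Helm repository; dev mode runs in-memory and auto-unsealed, so it is for experiments only, never production):

```shell
# Add HashiCorp's Helm repo and install Vault in dev mode locally.
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
helm install vault hashicorp/vault --set "server.dev.enabled=true"
```

The blog posts then build on this with production-grade settings such as HA storage and KMS auto-unseal.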
https://redd.it/ok3g09
@r_devops
Could Kubernetes Pods Ever Become Deprecated?
Hi /r/DevOps,
Today I published an article that explores the Kubernetes deprecation policy and rules. In the article I explain how all kinds of Kubernetes objects (including core and stable APIs) could become deprecated, which I think might be interesting to some of the Kubernetes folks around here.
Here's a link to the article: https://towardsdatascience.com/could-kubernetes-pods-ever-become-deprecated-e8ee6b4b8066
Feedback is very much appreciated!
https://redd.it/ok3q2i
@r_devops