ECS Task Failed
I am running an AWS ECS Fargate service task via AWS CI/CD.
The task shows STOPPED status soon after starting up.
ECS Fargate > Service > Task status shows this message:
>STOPPED (Task failed ELB health checks in (target-group arn:aws:elasticloadbalancing:ap-southeast-1:xxxxxxxxx:targetgroup/test-tg/123456789))
How do I fix this?
Is it an issue with the health check settings in the ECS service?
Or is it an issue with the health check settings in the ALB?
Or is it an issue with the health check settings in the target group?
I am confused about what to look at, and where.
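For what it's worth, the knobs live in two places: the target group owns the health check itself (path, port, thresholds), and the ECS service owns only the grace period before those checks count — the ALB listener has no health check settings of its own. A hedged Terraform sketch of the relevant settings (resource names, ports, and the /health path are illustrative assumptions, not from the post; also verify the task security group allows inbound traffic from the ALB):

```
# Target group: this is where the actual health check is defined.
resource "aws_lb_target_group" "test_tg" {
  name        = "test-tg"    # matches the ARN in the error
  port        = 8080         # assumed container listening port
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip"         # required for Fargate (awsvpc mode)

  health_check {
    path                = "/health"  # must return 200 from the app
    interval            = 30
    healthy_threshold   = 2
    unhealthy_threshold = 5
  }
}

# ECS service: the only health-check knob here is the grace period.
resource "aws_ecs_service" "app" {
  name                              = "app"
  cluster                           = var.cluster_id
  task_definition                   = var.task_def_arn
  launch_type                       = "FARGATE"
  health_check_grace_period_seconds = 120  # raise if the app boots slowly

  load_balancer {
    target_group_arn = aws_lb_target_group.test_tg.arn
    container_name   = "app"
    container_port   = 8080
  }
}
```

If the task dies before the app is even listening, the grace period is usually the fix; if the app is up but still failing, check the health check path and the security group first.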
https://redd.it/gb09hk
@r_devops
Octopus Deploy status to GitHub Commit
https://library.octopus.com/step-templates/fb3137e5-f062-4dcd-9a56-b15321072a21/actiontemplate-github-report-deployment
This is a new library step added in the new release of Octopus Deploy. I am wondering how this works, and how to add the $commit.CommitID from:
https://octopus.com/docs/projects/variables/system-variables#release-package-build-information
The Goal:
Octopus Deploy would, hopefully, report a status on the GitHub commit: if it deployed successfully, the green check mark in GH; if not, the red X.
TeamCity has a very similar solution where it will send the build status of a commit or branch or pull request to that GitHub repo.
Does anyone use this plugin? How does it work?
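I haven't used that step template, but under the hood this kind of integration is just the GitHub commit status API: you POST a state for a SHA, and GitHub renders the check mark or X on that commit. A hedged curl sketch of the underlying call (OWNER/REPO, the token variable, and the context string are placeholders; the CommitID from the build-information variables would supply the SHA):

```
# Report a deployment result against a commit.
# state can be: success, failure, error, or pending.
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/statuses/$COMMIT_SHA" \
  -d '{
    "state": "success",
    "context": "octopus/deploy",
    "description": "Deployed by Octopus"
  }'
```

Presumably the step template wraps exactly this call and picks success/failure from the deployment outcome.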
https://redd.it/gb04k6
@r_devops
DevOps virtual events
My mind is a little numb after attending 12 DevOps-related virtual events in the last 4 weeks. Currently at Deserted Island DevOps, the most unique Twitch + Animal Crossing + Discord format I have experienced. It's a blast but a bit of overstimulation. What event have you enjoyed the most, or what element of one? Are we going to hit virtual event burnout?
https://redd.it/gaz69k
@r_devops
Current stack evolution advice
Hi guys, hope everyone is doing well with the lockdown!
I'm currently working on an app and have some questions about elements of my stack. Basically, I have a monorepo containing all my code separated into modules (back, front, common tools / types / helpers), from which I build some Docker images. These images end up in Helm configurations that control my cluster. I have several configs (one for monitoring, one for the VPN, one for storage, etc.). I know that by using Helm I will be able to properly deploy changes to the cluster in my CD pipeline after all the possible tests have run on the commit / merge.
My issue right now is simple: GitHub Actions. The CI tool provided by GitHub seems extremely inconsistent to me.
The number of outages GitHub has had in the last few months, combined with all the possible errors in the CI, makes it the number one pain point in my stack.
What I currently do is simple: my first job installs the dependencies for the whole monorepo. Dependencies are cached (so if lockfiles are not edited, the install step will only call lifecycle methods and will not download anything). This greatly speeds up the pipeline and lets me properly separate steps. My issue now is that this cache is very unstable. GitHub fails to download its own cache far too often, making my whole CI fail every time, as dependencies are not fetched.
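As an aside on the cache failures: whatever platform you land on, it helps to key the cache on the lockfile and treat a miss as "do a full install" rather than a hard error. On GitHub Actions that pattern looks roughly like this (the paths assume a yarn workspace monorepo, which is an assumption on my part):

```
# Restore node_modules keyed on the lockfile; restore-keys lets a partial
# match fall through to a fresh install instead of failing the job.
- uses: actions/cache@v2
  with:
    path: |
      node_modules
      */*/node_modules
    key: deps-${{ runner.os }}-${{ hashFiles('**/yarn.lock') }}
    restore-keys: |
      deps-${{ runner.os }}-

# Install runs regardless; on a full cache hit it is close to a no-op.
- run: yarn install --frozen-lockfile
```

The point is that a cache download failure then degrades to a slower build, not a red pipeline.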
I am here today because I want to refactor my CI/CD pipeline. I am looking for the best platform that works in combination with GitHub (I cannot switch from GitHub, as our backlog requires us to use it, and because everything except Actions works properly). All I need is a platform that is known for its reliability and speed, is able to communicate with GitHub (status updates), and has a smart dependency management solution in the pipeline (caching that actually works, or anything else).
I am also looking for a good CD platform that offers more integration than just the ability to run jobs where I would download my Helm / k8s CLI and manually connect to my cluster (connect to k8s from the platform, interpret my Helm / k8s configs, etc.).
The only solution that seems viable to me is GitLab, but when I try to set up a pipeline from an external repository, my GitHub repo gets imported, and it does seem like GitLab is trying to make me use it as my VCS, which is unfortunately not possible.
What do you guys suggest as the best CI, CD, or CI/CD platform that would suit my needs?
Thank you for your time, and have a nice day :)
https://redd.it/ga76nh
@r_devops
This Week in DevOps - 2 new cloud regions and more
This week in DevOps – Another AWS region was opened in Milan, private AKS clusters are now generally available on Azure, and DigitalOcean announced a VPC offering. Google Cloud also announced a new region in Las Vegas, while HashiCorp Consul Service on Azure has moved from private to public beta.
Has anyone tried the new Digital Ocean VPC yet?
You can read more here: [https://thisweekindevops.com/2020/05/01/weekly-roundup-may-1st-2020/](https://thisweekindevops.com/2020/05/01/weekly-roundup-may-1st-2020/)
https://redd.it/gbewhy
@r_devops
kustomize w/ skaffold: how to deploy several versions
Currently, I'm using [kustomize](https://kustomize.io/) + [skaffold](https://skaffold.dev/) in order to generate artifacts, build and deploy them.
My kustomize structure is really straightforward:
```
kustomize
├── base
│   ├── kustomization.yaml
│   ├── kustomizeconfig
│   │   ├── ...
│   ├── dev
│   │   ├── deployment.yaml
└── overlays
    ├── dev
    │   ├── ...
    └── prod
        ├── ...
```
After that, using skaffold, I'm able to build and deploy them:
```
apiVersion: skaffold/v2beta1
kind: Config
metadata:
  name: spring-boot-slab
build:
  artifacts:
    - image: covid-backend
profiles:
  - name: docker
    build:
      artifacts:
        - image: covid-backend
          docker:
            dockerfile: Dockerfile-multistage
  - name: dev
    deploy:
      kustomize:
        paths: ["kustomize/overlays/dev"]
  - name: prod
    deploy:
      kustomize:
        paths: ["kustomize/overlays/prod"]
```
So, running

```
$ skaffold build --profile=docker --cache-artifacts=false -q | skaffold deploy --profile=dev --build-artifacts -
```

generates:

```
deployment.apps/dev-covid-backend created
```

And

```
$ skaffold build --profile=docker --cache-artifacts=false -q | skaffold deploy --profile=prod --build-artifacts -
```

generates:

```
deployment.apps/prod-covid-backend created
```
The problem I'm facing is that I'm only able to create two deployments (one per overlay): `dev-deployment` and `prod-deployment`.
Currently, each overlay generates a `DeploymentConfig` named `dev-deployment` or `prod-deployment`. Each generated deployment is linked to an image version, and when it's deployed, all pods are replaced by the new ones.
But what if I needed to roll out several versions at the same time, say `0.0.1`, `0.0.2`, `0.0.3`, into `dev`? What would I do... create an overlay per version? I don't think that's a solution.
I hope I've explained it well.
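One way to get several versions side by side with plain kustomize (not claiming it's the best answer): give each version a tiny overlay that only sets a name suffix and the image tag, so the generated deployments don't collide. A sketch of such an overlay — the path and suffix are assumptions of mine, not from the post:

```
# kustomize/overlays/dev-0.0.2/kustomization.yaml
bases:
  - ../dev            # reuse everything from the existing dev overlay
nameSuffix: -0-0-2    # dev-covid-backend -> dev-covid-backend-0-0-2
images:
  - name: covid-backend
    newTag: 0.0.2
```

You would still need a Service/routing story per version, so past two or three versions this gets unwieldy; that's usually the point where people reach for per-version Helm releases or a canary/rollout tool instead of raw overlays.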
https://redd.it/ga5tjg
@r_devops
Blog post: Building with Terraform: Azure Windows VMs
Hey guys, I just wrote a shiny new Azure blog post you may enjoy on the ATA blog. I'm starting to really use Terraform a lot and decided to start writing about it. So far, it's soooo much better than ARM templates!
Summary: Learn how to get started with Terraform by creating an Azure VM in this step-by-step tutorial.
https://adamtheautomator.com/terraform-azure/
https://redd.it/gbjnmc
@r_devops
Monthly 'Getting into DevOps' thread - 2020/05
**What is DevOps?**
* [AWS has a great article](https://aws.amazon.com/devops/what-is-devops/) that outlines DevOps as a work environment where development and operations teams are no longer "siloed", but instead work together across the entire application lifecycle -- from development and test to deployment to operations -- and automate processes that historically have been manual and slow.
**Books to Read**
* [The Phoenix Project](https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/1942788290) - one of the original books to delve into DevOps culture, explained through the story of a fictional company on the brink of failure.
* [The DevOps Handbook](https://www.amazon.com/dp/1942788002) - a practical "sequel" to The Phoenix Project.
* [Google's Site Reliability Engineering](https://landing.google.com/sre/books/) - Google engineers explain how they build, deploy, monitor, and maintain their systems.
* [The Site Reliability Workbook](https://landing.google.com/sre/workbook/toc/) - the practical companion to Google's Site Reliability Engineering book.
* [The Unicorn Project](https://www.amazon.com/Unicorn-Project-Developers-Disruption-Thriving-ebook/dp/B07QT9QR41) - the "sequel" to The Phoenix Project.
* [DevOps for Dummies](https://www.amazon.com/DevOps-Dummies-Computer-Tech-ebook/dp/B07VXMLK3J/) - don't let the name fool you.
**What Should I Learn?**
* [Emily Wood's essay](https://crate.io/a/infrastructure-as-code-part-one/) - why infrastructure as code is so important in today's world.
* [2019 DevOps Roadmap](https://github.com/kamranahmedse/developer-roadmap#devops-roadmap) - one developer's ideas for which skills are needed in the DevOps world. This roadmap is controversial, as it may be too use-case specific, but serves as a good starting point for what tools are currently in use by companies.
* [This comment by /u/mdaffin](https://www.reddit.com/r/devops/comments/abcyl2/sorry_having_a_midlife_tech_crisis/eczhsu1/) - just remember, DevOps is a mindset to solving problems. It's less about the specific tools you know or the certificates you have, as it is the way you approach problem solving.
* [This comment by /u/jpswade](https://gist.github.com/jpswade/4135841363e72ece8086146bd7bb5d91) - what is DevOps and associated terminology.
* [Roadmap.sh](https://roadmap.sh/devops) - Step by step guide for DevOps or any other Operations Role
Remember: DevOps as a term and as a practice is still in flux, and is more about culture change than it is specific tooling. As such, specific skills and tool-sets are not universal, and recommendations for them should be taken only as suggestions.
**Previous Threads**
https://www.reddit.com/r/devops/comments/ft2fqb/monthly_getting_into_devops_thread_202004/
https://www.reddit.com/r/devops/comments/fc6ezw/monthly_getting_into_devops_thread_202003/
https://www.reddit.com/r/devops/comments/exfyhk/monthly_getting_into_devops_thread_2020012/
https://www.reddit.com/r/devops/comments/ei8x06/monthly_getting_into_devops_thread_202001/
https://www.reddit.com/r/devops/comments/e4pt90/monthly_getting_into_devops_thread_201912/
https://www.reddit.com/r/devops/comments/dq6nrc/monthly_getting_into_devops_thread_201911/
https://www.reddit.com/r/devops/comments/dbusbr/monthly_getting_into_devops_thread_201910/
https://www.reddit.com/r/devops/comments/cydrpv/monthly_getting_into_devops_thread_201909/
https://www.reddit.com/r/devops/comments/ckqdpv/monthly_getting_into_devops_thread_201908/
https://www.reddit.com/r/devops/comments/c7ti5p/monthly_getting_into_devops_thread_201907/
https://www.reddit.com/r/devops/comments/bvqyrw/monthly_getting_into_devops_thread_201906/
https://www.reddit.com/r/devops/comments/blu4oh/monthly_getting_into_devops_thread_201905/
https://www.reddit.com/r/devops/comments/axcebk/monthly_getting_into_devops_thread/
**Please keep this on topic (as a reference for those new to devops).**
https://redd.it/gbkqz9
@r_devops
Blog Post: Yes! You do need MicroServices!
There was an article trending on Medium bashing microservices recently and instead advocating for the monolith. As a DevOps Architect, I've only ever had to cut corners when it came to working with monolith applications. [So I wrote a response to it here.](https://medium.com/devops-dudes/yes-you-do-need-microservices-c38be2c7cd4?source=friends_link&sk=c27f5b0fbb115e290a829c9dfa763f78)
What does the DevOps at large community think about microservices?
https://redd.it/gbpicy
@r_devops
GitFlow for multiple environments
I have a pretty typical Gitflow Continuous Deployment setup going.
develop -> dev.company.com
release/* -> release.company.com
master -> us.company.com
My question is: what can I do for multiple production environments? I have a US and an EU deployment, and will end up with multiple US environments shortly.
I’m not keen on having dedicated branches for each of my US and EU environments.
Looking to see what other folk have come up with for this.
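One common pattern instead of branch-per-environment: keep master as the single production branch and fan out to regions in the pipeline, so environments become a deploy-time matrix rather than a Git concept. A rough GitHub Actions sketch of the idea (the region names and `./deploy.sh` are stand-ins for whatever your real deploy step is):

```
# Deploy the same master build to every production region.
on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        region: [us-east, us-west, eu]
    steps:
      - uses: actions/checkout@v2
      # Stand-in for the real deploy step, parameterised by region
      - run: ./deploy.sh --environment ${{ matrix.region }}
```

The trade-off is that you lose per-region pinning in Git; if the EU must lag the US, you'd add a manual approval gate per region rather than a branch.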
https://redd.it/gborcs
@r_devops
What are containers used for?
Hi,
I'm currently a trainee at an IT company that creates software. However, I am not working with the developers but with the sysadmins.
I think I have a basic understanding of what containers are, but I can't get my head around when to use them.
We in the sysadmin team host all our applications on VMs in a vCenter. All of the VMs have a single job to do and nothing else runs on them.
But as I said, I can't imagine a real-life scenario for when to use containers. I am not a developer; I only write some scripts here and there.
* What applications are you deploying in a container?
* I've read that containers should not be used permanently. So you should not use a container to run nginx in it for the company's website?
* I often read that VMs are outdated, since they are too slow to deploy and too heavy for some/most applications. But this somehow clashes with the point of temporary containers, doesn't it?
* Let's say you DO use containers on a permanent basis. How do you keep track of them? In vCenter I have a list of VMs and it's all good (+ monitoring).
* When you use multiple containers for production applications and the VM somehow fails - isn't that a huge risk?
* Are containers purely targeted at developers who are testing something quickly and then destroy the container afterwards?
Sorry for the questions, but as mentioned, we basically don't use containers at all. Only a few, but thousands of VMs in a vCenter.
Thank you all - have a nice weekend :)
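On the "containers should not be used permanently" point: that's a misconception — long-running services are one of their main uses. A throwaway sketch of running nginx as a permanent, self-restarting container on any Docker host (names and ports are just examples):

```
# Run nginx detached; restart it automatically if it or the host dies
docker run -d --name web --restart unless-stopped -p 80:80 nginx:stable

# "Keeping track of them" at single-host scale:
docker ps                  # list running containers
docker stats --no-stream   # one-shot resource usage snapshot
```

At fleet scale, that tracking job is exactly what orchestrators (Kubernetes, Swarm) do — which also answers the "what if the VM fails" question: the orchestrator reschedules the containers onto surviving hosts.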
https://redd.it/gbooj5
@r_devops
Full DevOps training course!
Wanted to share this great DevOps training, which I spent the last two days watching on YouTube. It covers a lot of technologies, including Docker, Ansible, Vagrant, Maven, Jenkins, Git, Selenium, etc.
[https://www.youtube.com/watch?v=5RpER8wWn8M&list=PL2nCJd3szjvVGzYUaAAlPEbzQo8QZ6Mtz](https://www.youtube.com/watch?v=5RpER8wWn8M&list=PL2nCJd3szjvVGzYUaAAlPEbzQo8QZ6Mtz)
Happy devoping!
https://redd.it/gbto0j
@r_devops
getting asked Big O Time Complexity questions for a DevOps/build&release position?
What do you guys think about companies (ranging from mid-level to large, for example Salesforce) asking data structures / time complexity questions for a position that's more like DevOps/SRE/build & release? I've recently been interviewing after getting laid off due to Covid-19, and I am seeing this a lot.
I have nothing against it, but c'mon, the job description says `configure CI/CD`, `release automation`, `add monitoring/logging`, so my focus has been on that when preparing.
https://redd.it/gbn63h
@r_devops
OpenStack -- is it relevant? (honest noob question)
I found a copy of "OpenStack Cloud Computing Cookbook" (2nd edition, 2013) from Packt Publishing. Given the wide variety of skills needed to become a DevOps person, is it worth spending my time on this book?
For reference, my library also includes:
* Ansible for DevOps (Jeff Geerling)
* Designing Data-Intensive Applications (O'Reilly)
* Building Secure & Reliable Systems (O'Reilly, a freebie)
* [https://www.vagrantup.com/docs/index.html](https://www.vagrantup.com/docs/index.html)
* [https://github.com/trimstray/the-book-of-secret-knowledge](https://github.com/trimstray/the-book-of-secret-knowledge)
Thanks in advance.
https://redd.it/gbld0y
@r_devops
VMware Fusion and Kitchen-CI on Mac
Hi
Is anyone using Kitchen-CI to converge cookbooks on a Windows VM with VMware Fusion (VMF) on macOS Catalina?
I am attempting to migrate from a VBox setup to VMF because VBox crashes the Mac on every reboot or shutdown. I read somewhere that for CI to work in Fusion I needed the [vagrant+vmware plugin](https://www.vagrantup.com/vmware/index.html), which I have bought and installed.
So, I already have a W2012R2 VM (not using Vagrant for this), and it's configured on our company domain with a static IP address.
I've also set up a custom NAT network (vmnet2) with NAT and WinRM port forwarding:

    Host port: 55987
    Type: TCP
    VM IP address: as configured in the VM
    Virtual machine port: 5985

In kitchen.yml:

    driver:
      name: vagrant
      host: 127.0.0.1
      reset_command: echo "Starting Test Kitchen."
However when I converge I see this error:
-----> Starting Test Kitchen (v2.4.0)
-----> Converging <APP-W2012>...
Preparing files for transfer
Preparing dna.json
Resolving cookbook dependencies with Berkshelf 7.0.9...
Removing non-cookbook files before transfer
Preparing data_bags
Preparing environments
Preparing nodes
Preparing roles
Preparing validation.pem
Preparing client.rb
Preparing client.rb
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>> Failed to complete #converge action: [password is a required option] on APP-W2012
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
The kitchen platform section is as follows:

    platforms:
      - name: W2012
        driver:
          host: 127.0.0.1
          port: 55987
          guest: windows
        transport:
          name: winrm
          elevated: true
          elevated_username: System
          elevated_password: null
        driver_config:
          gui: true
          box: TCP_W2012
          guest: windows
          username: Administrator   # <<<< as per VM login
          password: ********        # <<<<
          communicator: winrm
Am I missing something or is this not feasible at all?
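One hedged guess, based only on the error text: `[password is a required option]` is raised by the winrm transport, and in the config above the credentials sit under `driver_config`, where the transport never sees them. A minimal sketch of moving them into the transport block (same Administrator account assumed; this is an assumption, not a confirmed fix):

```yaml
transport:
  name: winrm
  elevated: true
  username: Administrator
  password: <VM password here>
```

`kitchen diagnose --all` should show which settings the transport actually resolved.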
https://redd.it/gbovpv
@r_devops
Creating a custom Terraform provider
I needed to research how to create a custom provider for my job, so I created a small experiment with a server that provides an API over HTTP and a custom provider that consumes it.
It might be helpful for someone trying to create a custom Terraform provider so here is the code :)
[https://github.com/julianespinel/terraform-custom-provider](https://github.com/julianespinel/terraform-custom-provider)
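For context, the server half of such an experiment can be tiny. Here is a hypothetical sketch of the kind of HTTP API a custom provider could consume (illustrative only, not the code in the linked repo):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ItemHandler(BaseHTTPRequestHandler):
    """Serves GET /items/<id> as JSON -- the kind of API a custom provider reads."""

    ITEMS = {"1": {"id": "1", "name": "example"}}

    def do_GET(self):
        # Last path segment is treated as the item id
        item = self.ITEMS.get(self.path.rstrip("/").split("/")[-1])
        if item is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(item).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

The provider's read/create/delete functions then map onto HTTP verbs against endpoints like this one.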
https://redd.it/gbi79c
@r_devops
How to use Linkerd with Terraform?
Hello,
I am trying to install Linkerd into my cluster using Terraform, but I keep getting the following error after restarting my deployment:
Message: time="2020-05-01T18:47:46Z" level=info msg="running version stable-2.7.1"
time="2020-05-01T18:47:46Z" level=info msg="Using with pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"
time="2020-05-01T18:47:46Z" level=info msg="Using with pre-existing CSR: /var/run/linkerd/identity/end-entity/key.p8"
[ 0.13589148s] ERROR linkerd2_app::env: Could not read LINKERD2_PROXY_IDENTITY_TOKEN_FILE: No such file or directory (os error 2)
[ 0.13618327s] ERROR linkerd2_app::env: LINKERD2_PROXY_IDENTITY_TOKEN_FILE="/var/run/secrets/kubernetes.io/serviceaccount/token" is not valid: InvalidTokenSource
Invalid configuration: invalid environment variable
Linkerd itself seems to be installed successfully and the `linkerd check` test passes every test.
This is my linkerd install in Terraform:
    data "helm_repository" "linkerd" {
      name = "linkerd"
      url  = "https://helm.linkerd.io/stable"
    }

    resource "helm_release" "linkerd" {
      name       = "linkerd"
      repository = data.helm_repository.linkerd.metadata[0].name
      chart      = "linkerd/linkerd2"

      set {
        name  = "global.identityTrustAnchorsPEM"
        value = tls_self_signed_cert.trustanchor_cert.cert_pem
      }

      set {
        name  = "identity.issuer.crtExpiry"
        value = tls_locally_signed_cert.issuer_cert.validity_end_time
      }

      set {
        name  = "identity.issuer.tls.crtPEM"
        value = tls_locally_signed_cert.issuer_cert.cert_pem
      }

      set {
        name  = "identity.issuer.tls.keyPEM"
        value = tls_private_key.issuer_key.private_key_pem
      }
    }
It seems to have something to do with service accounts, but I'm not sure how to go about fixing it. Thanks in advance for any assistance.
EDIT: Looking further into this, it's because the secrets volume is not mounted, although I'm not sure why it wouldn't be mounted. Comparing the output between the default emojivoto app and my deployment, the following mount is missing:
/var/run/secrets/kubernetes.io/serviceaccount from emoji-token-h65v7 (ro)
I see that my deployments have service account tokens though so I'm not sure why they are not mounted alongside the pod.
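One hedged thing to check (an assumption drawn from the missing mount, not a confirmed diagnosis): whether `automountServiceAccountToken` is set to `false` on either the Deployment's pod spec or on the ServiceAccount itself, since either suppresses exactly that `/var/run/secrets/kubernetes.io/serviceaccount` volume. The relevant fragment of a Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: default        # whichever account the pod uses
      automountServiceAccountToken: true # must not be false here or on the ServiceAccount
```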
https://redd.it/gbo3pc
@r_devops
I am in a shop that doesn’t think about devops but my job involves automation and the tools are moving more towards making devops a priority for our infrastructure. How do I help shift the culture towards a devops mindset?
Officially, I am an “infrastructure automation engineer”. Unofficially, I am basically a DevOps engineer. I work for a large organization that is SD based, and everything revolves around tickets and change requests. Agile isn't even whispered around the hallways. I can talk to other teams, but all the main decisions fall into the hands of the directors. It's somewhat maddening when I try to plan out a delivery process for a specific API connection but no one can tell me why that API might be useless in a few months.
We're a VMware shop, and VMware is moving their stuff more towards being cloud agnostic. That's great... but none of the people above me seem to understand how we could streamline our processes, or why I want better comms with the dev teams. Or why I care about integrating the CI/CD pipeline with our automated infrastructure workflows.
Has anyone been in a similar boat? How did you deal with it?
https://redd.it/gbnqjz
@r_devops
How can I get services to service communication working using Nomad / Consul?
I'm a noob to orchestration and working on learning HashiCorp Nomad since it's evidently a lot simpler than Kubernetes.
I got a cluster up and running, but after reading through the docs and guides I still cannot figure out how to have one service access another.
I see that Consul Connect is used for that, but a lot of that is security related, setting up ACLs, etc. which I don't need at all. I just want to have one service be able to reach another.
Is there something I'm missing?
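If the mTLS/ACL side of Connect isn't needed, plain Consul service discovery may be enough: register each task as a Consul service, then resolve peers via Consul DNS (`<name>.service.consul`) or a `template` stanza. A hedged sketch with hypothetical job and service names (syntax per 2020-era Nomad):

```hcl
job "web" {
  datacenters = ["dc1"]

  group "web" {
    task "web" {
      driver = "docker"
      config {
        image = "nginx:alpine"
      }

      resources {
        network {
          port "http" {}
        }
      }

      # Registers this task in Consul as "web"
      service {
        name = "web"
        port = "http"
      }

      # Renders the live address of the "api" service into an env var
      template {
        data        = "API_ADDR={{ range service \"api\" }}http://{{ .Address }}:{{ .Port }}{{ end }}"
        destination = "local/api.env"
        env         = true
      }
    }
  }
}
```

This assumes Nomad is wired to a Consul agent, which also gives you the DNS interface for free.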
https://redd.it/gbhbs5
@r_devops
infrastructure-as-code: yaml/hcl vs general purpose programming framework
Hi Devops!
As the title suggests, what are your preferences regarding this? Pros and cons? Would be interesting to hear your thoughts.
I honestly haven't made up my mind what the best approach is at the moment. I've been using Terraform and CloudFormation for quite some time (I strongly favour Terraform).
As great as Terraform is, there are always times when I wish I had general-purpose programming constructs to work with, like if/else statements, loops and whatnot. Terraform has added some features in this regard, but it does not feel 100% natural; it often feels like I'm fighting the DSL.
Recently Pulumi and the AWS CDK have popped up, where instead of a DSL (YAML/HCL) you write in JavaScript or your favourite programming language to provision your infra. From my understanding you still get state and resource dependency graphs (the things that make an IaC tool worthwhile).
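For what it's worth, HCL has grown some of those constructs; `for_each` covers many loop cases, even if conditionals still feel like workarounds. A small sketch (resource names hypothetical):

```hcl
variable "buckets" {
  type    = set(string)
  default = ["logs", "assets", "backups"]
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  bucket   = "example-${each.key}"
}

# Conditionals are still expression tricks rather than statements:
# count = var.enabled ? 1 : 0
```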
https://redd.it/gbhv65
@r_devops
Why I got rid of our dev, test, staging and prod environment
Hi Reddit, I wanted to share a process/concept I introduced where I work for how we manage our environments.
I'm sure many of you are aware of the usual dev, test, staging and prod environments, where application changes move through these stages to finally get released to the end user. A problem my team and I had was an environment bottleneck: for example, devs would finish a feature but couldn't move it to the next stage because QA were still testing the previous feature in the next environment. Developers would also develop locally, but if they wanted to test on the closer-to-production dev environment they risked wiping out another dev's current changes, so there were constant Slack messages along the lines of "Can I deploy X to Y?" and you hoped someone would reply before you overwrote something you shouldn't have.
We are already a team that embraces infrastructure as code, and our environments were brought up in an automated, consistent manner. The problem was that there was a one-to-many relationship between our environment stages and team members.
So since we can bring up an environment with code, why limit ourselves to four? I called the concept color environments (but really you can use anything that has an essentially infinite pool of options to choose from). Now when we work on a feature, we deploy to a random color that isn't already in use, and our stack gets a domain to access it based on that, e.g. "cyan.example.com".
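The "pick a random unused color" step can be sketched in a few lines (a hypothetical helper, not the team's actual tooling):

```python
import random

# Hypothetical pool of color-environment names
COLORS = ["cyan", "magenta", "teal", "amber", "indigo", "olive", "coral", "slate"]

def pick_color(in_use):
    """Return a random color name that no live environment is using."""
    available = [c for c in COLORS if c not in set(in_use)]
    if not available:
        raise RuntimeError("no free color environments; tear one down first")
    return random.choice(available)
```

The chosen name then becomes the subdomain, e.g. `f"{color}.example.com"`.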
We've been doing this for half a year now and it has drastically changed our development and deployment process for the better.
* Developers can spin up their feature without waiting for an env to be available
* QA can test against a dev's color or re-create a new color on their branch
* Our product owner can be given a color env with a feature to review it for as long as they need
* We can do user research and AB testing between colors
* Environment drift is not an issue as colors don't stay up very long and we always create an env from scratch
* Our deployment to prod is just bringing up a new color and doing a blue/green DNS flip
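As an illustration of the DNS flip in the last bullet, a Route 53 change batch that repoints a record at a new color could look like this (zone ID and names are hypothetical):

```json
{
  "Comment": "Flip prod to the cyan environment",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "cyan.example.com" }]
      }
    }
  ]
}
```

applied with `aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch file://flip.json`. (A CNAME can't live at the zone apex, hence `www`; an alias record is the apex-safe variant.)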
There are a few hurdles we had to overcome; here are the main ones:
Spinning up an unbounded number of environments can be costly. We're on AWS, so we took advantage of services like Lambda and other serverless offerings to keep costs right down. Our environments are also ephemeral by default: a few days after being brought up they destroy themselves unless configured otherwise (such as prod envs or features that are taking longer to develop).
We gained extra flexibility with our environments, but that also came with extra complexity and time spent waiting for an environment to be available. The application stack we did this on was fairly small, and we found the sweet spot for getting a new env up from scratch was ~15 minutes. Enough time to grab a coffee and not too long, and updates to an env are much quicker once it's up. For that reason I don't recommend this for large application stacks; maybe it could work for a part of a stack, such as a micro-service that is part of a bigger monolith.
Databases and blue/green flips can be a bit tricky. Luckily, since blue/green deploys are not a new thing, there were a few resources out there to help us with this.
Anyway, that's a quick rundown of the concept; hope it's something interesting. Has anyone else done something similar? Also, if you have any questions about the concept/process, let me know :)
https://redd.it/gbhtk0
@r_devops