Is it true that the decision to choose a VDS/VPS hosting for a company is more influenced by engineers than managers?
I assume that behind any management decision on choosing a VDS/VPS hosting company, there is a consultation (past or present) with engineers.
https://redd.it/p2jsc2
@r_devops
How would you answer this Problem Statement
Roughly a year ago, when I had gotten my cloud cert and was getting into DevOps, I had an interview with a company for a junior DevOps engineer position. For the interview I had to explain and answer the following Problem Statement:
· The company is creating its new applications with an event-driven microservices pattern.
· The company has already selected AWS.
· The company has already selected Jenkins.
· The microservices' uptime should be 24/7.
· The microservices need to be highly resilient; an hour of downtime will cost the company a million dollars in revenue.
Create a design for continuous delivery for these microservices from the branching strategy, through deployment, and the overall stability and scalability in a production environment.
I don't remember the answer I gave, but I am curious: how would someone with a lot of experience in the industry answer this question?
https://redd.it/p2hti5
@r_devops
From AWS CloudFormation to Terraform: Migrating Apache Kafka
Every once in a while we find ourselves in a spot where it's no longer up to us: our infrastructure demands a change.
When it comes to Kafka, the high scale and the fact that it's the system bottleneck require us to be dynamic, responsive, and in control, especially when running in production.
But how can we deploy frequent changes (security, hardware, monitoring, etc.) and still stay stable, version-controlled, and audited, while meeting a growing demand for user independence?
Check out my new blog post to hear how Riskified created its new Kafka infrastructure with Terraform and how we performed our cluster migration with zero downtime and zero data loss.
We invite you to read:
https://medium.com/riskified-technology/from-aws-cloudformation-to-terraform-migrating-apache-kafka-32bdabdbaa59
https://redd.it/p28lfz
@r_devops
production setup
How would you move your staging environment to production? What steps would you take, and how would you bring up the infrastructure around it?
https://redd.it/p2r4hh
@r_devops
How does your team handle interrupt work?
Currently, the on-call individual handles all interrupt work in addition to being on call for the services the team owns. Interrupt work encompasses all things unplanned (e.g. last-minute 'urgent' requests or non-planned sprint work).
Does your team/organization have processes in place to handle or track this kind of unplanned work? If so, what kind of benefits did you gain?
https://redd.it/p256ud
@r_devops
Adding custom alerts in kubeprometheus helm chart
Hello all, I have a task of creating custom alerts for an application. Where should I pass the alerts config within my values.yaml file? I am using the kube-prometheus stack.
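One common approach (a sketch, assuming the kube-prometheus-stack Helm chart; the rule name, metric, and thresholds below are placeholders) is the chart's additionalPrometheusRulesMap key in values.yaml, which renders extra PrometheusRule resources alongside the stack's defaults:

```yaml
# values.yaml for the kube-prometheus-stack chart (sketch; names and expr are illustrative)
additionalPrometheusRulesMap:
  my-app-rules:
    groups:
      - name: my-app.rules
        rules:
          - alert: MyAppHighErrorRate
            # Placeholder PromQL: fire when the app's 5xx rate exceeds 5% of requests
            expr: rate(http_requests_total{job="my-app", status=~"5.."}[5m]) > 0.05
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "my-app 5xx error rate above threshold for 10 minutes"
```

After a helm upgrade with these values, the operator picks up the rendered PrometheusRule automatically; no separate kubectl apply should be needed.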
https://redd.it/p2sryz
@r_devops
Who has the ability to connect 3 x 1 Gbit/s at home for less than $80/mo per Gbit/s?
Hi Guys!
For a project I'm working on, I need to figure out how many people can have 3 x 1 Gbit/s at home.
Some ISPs don't have dedicated infrastructure and rent it from someone else so different ISPs won't be able to give you more than one line.
Please choose the option...
https://redd.it/p2da54
@r_devops
A writing competition, with a cash prize
For the month of August, Hashnode (https://hashnode.com) has a writing competition and one of the primary topics is AWS.
If you have written articles in the past that you put a lot of effort into, you can republish them (use the canonical URL 😎) or write new ones!
The prize is $50 and there is a lot of room (not many people have joined so far, so this makes it easier for new writers).
https://townhall.hashnode.com/special-august-giveaway-for-the-top-150-writers-of-javascript-aws-and-ruby-on-rails
https://redd.it/p2udld
@r_devops
How to reduce risk of deployments by using Autopilot on Datadog
In the blog, we explain how SREs can accurately verify the risk of their software in a CI/CD pipeline by integrating Autopilot with Datadog monitoring solutions.
OpsMx Autopilot is a machine learning (ML) and natural language processing tool that analyzes the data for you automatically so you can quickly and accurately decide whether an update should be moved forward in the pipeline.
Autopilot helps you to stay a step ahead of the competition by automating the decision-making process and assessing risk before deployment. Autopilot is a verification module, which is a part of the larger OpsMx platform for continuous delivery built on top of Spinnaker.
It follows an API-based architecture, which is extremely easy to extend and integrate with any DevOps toolchain in your organization.
https://redd.it/p2vf7i
@r_devops
Encrypting server-side emails using serverless workflows
G'day DevOps,
We wanted to share something we worked on as a PoC for our serverless workflow engine. The idea was not ours, but something that the group who ran the PoC dreamt up!
The problem they tried to solve was the fact that emails sent from internal systems typically only have an SMTP (or email) configuration with the generic username, password and transport security settings. But their requirement was that all of the attachments from the system sent to external emails (vendor support, managed service support or outsourced support) be compressed and encrypted.
Direktiv (open source edition) was configured with an SMTP listener that converts the email to a CloudEvent and deconstructs it into JSON objects. From that point forward the workflow does whatever they want to do (zip, encrypt, SMS the password to a number).
We thought it was pretty cool and applicable to a lot of users - let us know what you think!
We've written a blog article about it below:
https://blog.direktiv.io/direktiv-encrypting-server-side-email-attachments-in-the-real-world-d18a7bccb36c
We also released version 0.3.4 with a lot of features added:
https://github.com/vorteil/direktiv/releases/tag/v0.3.4
As always - we welcome feedback and questions!
https://redd.it/p2vpuz
@r_devops
How does Autopilot augment Datadog to reduce risk in a CI/CD pipeline?
This blog is a continuation of the Autopilot story, where we discuss how one can reduce the risk of releases by augmenting an existing monitoring platform like Datadog. Autopilot provides real-time risk assessment of releases before code is deployed into production, and it can also deny releases that fail a minimum threshold.
Once Autopilot is configured, it will automatically fetch the logs and metrics from applications and pipelines. During the execution of a pipeline, it can compare risk scores of a new release against a baseline run to assert the quality of a release. Autopilot determines whether it can promote a new update fully to production or push it back to the developer for debugging. The log analysis and risk assessment are processed in a matter of seconds and provide automated decisions during the execution of a pipeline run.
The AI/ML-enabled intelligence layer in Autopilot uses supervised learning to improve its judgment abilities over time. SREs, as they evaluate the confidence score of any release, can change Autopilot’s assessment of the impact of errors and warnings. These inputs are like feedback to Autopilot, which helps it to develop a contextual understanding of specific applications and pipelines.
Read More How does Autopilot augment Data dog to reduce risk in a CI/CD pipeline?
https://redd.it/p2verc
@r_devops
Advice on CircleCI config
Here is my CircleCI config. I don't think I am using it "correctly" even though the tests run. Any thoughts on how I can improve it?
The app runs on Heroku, but I don't necessarily want to upgrade Heroku automatically, because of database schema changes.
---
version: 2.1
workflows:
  main:
    jobs:
      - build
jobs:
  build:
    machine:
      image: ubuntu-2004:202107-02
    steps:
      - checkout
      # Create network
      - run: docker network create test_network
      # Run postgres
      - run: docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=runner --name db --network test_network postgres
      # Build flask image
      - run: docker build -f flask/Dockerfile -t flask flask/
      # Run flask image
      - run: >
          docker run -d -e TEST_DATABASE_URL=postgresql://postgres:runner@db:5432/db_test
          -e DATABASE_URL=postgresql://postgres:postgres@db:5432/db_dev
          --name flask --network test_network flask python manage.py run
      # Run Tests
      - run: docker exec flask pytest "app/tests" --cov="app" -p no:warnings
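One likely source of flakiness in a config like this is that the postgres container may not yet be accepting connections when the tests start. A small improvement (a sketch, reusing the db container name from the config above) is an extra step that waits for readiness before running pytest:

```yaml
      # Wait until postgres accepts connections before exec'ing the tests;
      # pg_isready ships inside the official postgres image
      - run: |
          until docker exec db pg_isready -U postgres; do
            echo "waiting for postgres..."
            sleep 1
          done
```

Placing this step between starting the containers and the pytest step should remove the race without changing anything else.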
https://redd.it/p2y21u
@r_devops
New Book: CI/CD for Monorepos
We have a gift for you: a free, 50-page ebook on effective CI/CD for monorepos. The book is open source, and you can download it today.
https://redd.it/p2zq5c
@r_devops
Job title for someone who mainly works on CI/CD?
Interested to know what job titles people prefer for someone who primarily works on CI/CD in support of an Agile scrum, that isn't "DevOps Engineer" (e.g. DevOps is a culture, not a job title, etc).
The model we have right now is "DevOps Engineers" aligned to one or more Agile scrums. The DevOps Engineers are responsible for helping the scrum build, test and release software themselves using existing tools and APIs.
The DevOps Engineers don't touch the software code or support the apps in production (SREs do that), and they don't manage the cloud infrastructure (there is a separate "Platform Engineering" team for that).
Rather they help the app developers implement the right APIs in their apps to make sure things like logging, monitoring, unit testing, containerisation are all implemented and that configuration, secret storage and so on are all done properly.
"DevOps Engineer" seems to be okay, alongside SRE and Platform Engineer (for infrastructure), but in the spirit of the "DevOps as a culture, not a job title" I'm wondering if there is a better option for this type of CI/CD/Pipeline role?
https://redd.it/p2wy88
@r_devops
Domain knowledge for DevOps?
I am interviewing for a higher position (slightly inclined towards the business side) and the recruiter wants to know my domain knowledge. I was stumped because I have worked with banking clients, audit firms, healthcare and data analytics startups.
As a DevOps engineer, does the domain really matter, since it's basically the same flow (SCM, IaC, config, CI/CD, monitoring)? I know the product we are building, but I don't really know the nuances of these different sectors.
What domain knowledge should I look into if I have worked primarily for Banking and Audit clients?
PS: One pointer could be the difference in security audits across these sectors. For example, healthcare has HIPAA.
https://redd.it/p31fol
@r_devops
Lost at new job: is it normal, and how do I overcome it?
So this is my first DevOps job ever. It's at a startup, and they've given me projects that I need to complete. I told them in the interview that my expertise with all the tools (Terraform, Docker, etc.) is foundational, simple and basic.
They seemed to be fine with that; otherwise I wouldn't have gotten the job. But I'm actually lost as to what is going on and what I'm doing, and it's just the first week. The only thing I've got is what they want me to do, and that's it.
I have been reading documentation and white papers for the tools I need to learn. But I'm not too sure whether I should tell them I need some mentoring, or if that will be an annoyance. I'm fine doing the work on my own; I just need to know how to do it.
The last thing I want is for them to feel like they're having to babysit me.
https://redd.it/p3377h
@r_devops
Dbt founder Tristan Handy on the changing face of the data stack
>“I don’t think it’s that [self-serve analytics] are going to get more ‘complex’—it’s that they’re going to get more ‘sophisticated’ ... The advancement that we saw in computer interfaces in the latter half of the 20th century was an increase in technological sophistication, but a decrease in end-user complexity.”
https://mixpanel.com/blog/tristan-handy-changing-data-stack/
https://redd.it/p32z4g
@r_devops
AMA Alert! We’re from Devtron Labs, one of India’s first open source platform for Kubernetes
We’ll be going live at 10pm EST and we look forward to your questions on DevOps, Kubernetes, running a start-up and working in the tech industry!
Check us out here - https://devtron.ai
https://redd.it/p36c74
@r_devops
How is Bitbucket for CI/CD pipelines?
Is anyone using Bitbucket for CI/CD? We have source code in Bitbucket, which is why I am trying to see if it's worth exploring. How does it compare to GitLab? I think GitLab provides an end-to-end DevOps toolchain, right from planning to monitoring. I want to get reviews from real users...
https://redd.it/p37777
@r_devops
What tool do you use to manage ECS Deployments?
We're thinking about using Terraform to provision base infrastructure (maybe with "stub" ECS services).
It would be nice to have a simple file that engineers could manage themselves (and that can live with application code), which when applied to ECS would create/modify services. e.g. set container images, env vars, scaling settings.
A key requirement here is really being able to do this via a declarative file format, and not by running ad-hoc commands in a CLI.
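One option that fits this shape (a sketch, not an endorsement) is an AWS Copilot service manifest: a YAML file that lives next to the application code and declares the image, env vars, and scaling for an ECS service; applying it creates or updates the service. All values below are illustrative:

```yaml
# copilot/api/manifest.yml (sketch; service name, paths, and numbers are placeholders)
name: api
type: Load Balanced Web Service

http:
  path: '/'            # route traffic for this path to the service

image:
  build: ./Dockerfile  # or "location: <account>.dkr.ecr.../api:tag" for a prebuilt image

cpu: 256               # task-level CPU units
memory: 512            # task-level memory (MiB)

count:
  range: 2-10          # autoscaling bounds
  cpu_percentage: 70   # scale out when average CPU exceeds 70%

variables:
  LOG_LEVEL: info      # plain env vars; secrets go in a separate "secrets" section
```

Engineers edit this file alongside their code and a pipeline runs the deploy, so no ad-hoc CLI mutation is needed; similar declarative-file workflows exist via ECS task definition JSON under Terraform, or the ecs-cli compose format.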
Does anyone have any good suggestions?
Thanks!
https://redd.it/p38993
@r_devops