From AWS CloudFormation to Terraform: Migrating Apache Kafka
Every once in a while we find ourselves in a spot where it's no longer up to us — our infrastructure demands a change.
When it comes to Kafka, the high scale and the fact that it's the system bottleneck require us to be dynamic, responsive and in control, especially when running in production.
But how can we deploy frequent changes (security, hardware, monitoring, etc.) and still stay stable, version controlled and audited, all while meeting a growing demand for user independence?
Check out my new blog post to hear how Riskified created its new Kafka infrastructure with Terraform, and how we performed our cluster migration with zero downtime and zero data loss.
I invite you to read:
https://medium.com/riskified-technology/from-aws-cloudformation-to-terraform-migrating-apache-kafka-32bdabdbaa59
https://redd.it/p28lfz
@r_devops
production setup
How would you move your staging environment to production? What steps would you take, and how would you bring up the infrastructure around it?
https://redd.it/p2r4hh
@r_devops
How does your team handle interrupt work?
Currently, the on-call individual handles all interrupt work in addition to being on-call for the services the team owns. Interrupt work encompasses all things unplanned (e.g. last-minute 'urgent' requests or non-planned sprint work).
Does your team/organization have processes in place to handle or track this kind of unplanned work? If so, what kind of benefits did you gain?
https://redd.it/p256ud
@r_devops
Adding custom alerts in the kube-prometheus Helm chart
Hello all, I have a task of creating custom alerts for an application. Where should I pass the alerts config within my values.yaml file? I am using the kube-prometheus stack.
https://redd.it/p2sryz
@r_devops
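For anyone hitting the same question: the kube-prometheus-stack chart accepts custom alerting rules under `additionalPrometheusRulesMap` in values.yaml. A minimal sketch — the rule group name, job label and threshold below are made-up placeholders, not values from the post:

```yaml
# values.yaml (kube-prometheus-stack chart)
additionalPrometheusRulesMap:
  my-app-rules:
    groups:
      - name: my-app.alerts
        rules:
          - alert: MyAppHighErrorRate
            # Hypothetical expression; adjust the job label and threshold for your app
            expr: rate(http_requests_total{job="my-app", status=~"5.."}[5m]) > 0.05
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "my-app is returning a high rate of 5xx responses"
```

The chart renders this map into a PrometheusRule custom resource, which the Prometheus operator then picks up automatically.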
Who has the ability to connect 3 x 1 Gbit/s at home for less than $80/mo per Gbit/s?
Hi Guys!
For a project I'm working on, I need to figure out how many people can have 3 x 1 Gbit/s at home.
Some ISPs don't have dedicated infrastructure and rent it from someone else, so different ISPs won't necessarily be able to give you more than one line.
Please choose the option...
View Poll
https://redd.it/p2da54
@r_devops
A writing competition, with a cash prize
For the month of August, Hashnode (https://hashnode.com) has a writing competition and one of the primary topics is AWS.
If you have written articles in the past that you put a lot of effort into, you can reuse them by republishing (use the canonical URL 😎), or write new ones!
The prize is $50 and there is a lot of room (not many people have joined so far, so this makes it easier for new writers).
https://townhall.hashnode.com/special-august-giveaway-for-the-top-150-writers-of-javascript-aws-and-ruby-on-rails
https://redd.it/p2udld
@r_devops
How to reduce risk of deployments by using Autopilot on Datadog
In this blog, we explain how SREs can accurately verify the risk of their software in a CI/CD pipeline by integrating Autopilot with Datadog monitoring solutions.
OpsMx Autopilot is a machine learning (ML) and natural language processing tool that analyzes the data for you automatically so you can quickly and accurately decide whether an update should be moved forward in the pipeline.
Autopilot helps you to stay a step ahead of the competition by automating the decision-making process and assessing risk before deployment. Autopilot is a verification module, which is a part of the larger OpsMx platform for continuous delivery built on top of Spinnaker.
It follows an API-based architecture, which is extremely easy to extend and integrate with any DevOps toolchain in your organization.
https://redd.it/p2vf7i
@r_devops
Encrypting server-side emails using serverless workflows
G'day DevOps,
We wanted to share something we worked on as a PoC for our serverless workflow engine. The idea was not ours, but something that the group who ran the PoC dreamt up!
The problem they tried to solve was the fact that emails sent from internal systems typically only have an SMTP (or email) configuration with the generic username, password and transport security settings. But their requirement was that all of the attachments from the system sent to external emails (vendor support, managed service support or outsourced support) be compressed and encrypted.
Direktiv (open source edition) was configured with an SMTP listener; it converts the email to a CloudEvent and deconstructs it into JSON objects. From that point forward the workflow does whatever they want to do (zip, encrypt, SMS the password to a number).
We thought it was pretty cool and applicable to a lot of users - let us know what you think!
We've written a blog article about it below:
https://blog.direktiv.io/direktiv-encrypting-server-side-email-attachments-in-the-real-world-d18a7bccb36c
We also released version 0.3.4, with a lot of features added:
https://github.com/vorteil/direktiv/releases/tag/v0.3.4
As always - we welcome feedback and questions!
https://redd.it/p2vpuz
@r_devops
How does Autopilot augment Datadog to reduce risk in a CI/CD pipeline?
This blog is a continuation of the Autopilot story, in which we discuss how to reduce the risk of releases by augmenting an existing monitoring platform like Datadog. Autopilot provides real-time risk assessment of releases before code is deployed into production, and can also deny releases that fail a minimum threshold.
Once Autopilot is configured, it automatically fetches logs and metrics from applications and pipelines. During the execution of a pipeline, it can compare the risk score of a new release against a baseline run to assert the quality of the release. Autopilot determines whether it can promote a new update fully to production or push it back to the developer for debugging. The log analysis and risk assessment are processed in a matter of seconds and provide automated decisions during a pipeline run.
The AI/ML-enabled intelligence layer in Autopilot uses supervised learning to improve its judgment over time. As SREs evaluate the confidence score of a release, they can adjust Autopilot's assessment of the impact of errors and warnings. These inputs act as feedback, helping Autopilot develop a contextual understanding of specific applications and pipelines.
Read more: How does Autopilot augment Datadog to reduce risk in a CI/CD pipeline?
https://redd.it/p2verc
@r_devops
Advice on CircleCI config
Here is my CircleCI config. I don't think I am using it "correctly" even though the tests run. Any thoughts on how I can improve it?
The app runs on Heroku, but I don't necessarily want deploys to Heroku to be automatic, because of database schema changes.
---
version: 2.1
workflows:
  main:
    jobs:
      - build
jobs:
  build:
    machine:
      image: ubuntu-2004:202107-02
    steps:
      - checkout
      # Create network
      - run: docker network create test_network
      # Run postgres
      - run: docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=runner --name db --network test_network postgres
      # Build flask image
      - run: docker build -f flask/Dockerfile -t flask flask/
      # Run flask image
      - run: >
          docker run -d -e TEST_DATABASE_URL=postgresql://postgres:runner@db:5432/db_test
          -e DATABASE_URL=postgresql://postgres:postgres@db:5432/db_dev
          --name flask --network test_network flask python manage.py run
      # Run Tests
      - run: docker exec flask pytest "app/tests" --cov="app" -p no:warnings
https://redd.it/p2y21u
@r_devops
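One alternative worth considering (a sketch, not the one true layout): CircleCI's docker executor supports secondary service containers, which removes the hand-rolled docker network entirely — the database becomes reachable on localhost from the primary container. The image tags and requirements path below are assumptions, not taken from the post:

```yaml
jobs:
  build:
    docker:
      - image: cimg/python:3.10    # primary container; steps run here
      - image: cimg/postgres:14.2  # secondary container, reachable on localhost:5432
        environment:
          POSTGRES_PASSWORD: runner
    steps:
      - checkout
      - run: pip install -r flask/requirements.txt
      - run: pytest "app/tests" --cov="app" -p no:warnings
```

With this layout the app's TEST_DATABASE_URL would point at localhost:5432 instead of the db hostname, and the tests run directly in the primary container rather than via docker exec.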
New Book: CI/CD for Monorepos
I have a gift for you: a free, 50-page ebook on effective CI/CD for monorepos. The book is open source, and you can download it today.
https://redd.it/p2zq5c
@r_devops
Job title for someone who mainly works on CI/CD?
Interested to know what job titles people prefer for someone who primarily works on CI/CD in support of an Agile scrum, that isn't "DevOps Engineer" (e.g. DevOps is a culture, not a job title, etc).
The model we have right now is "DevOps Engineers" aligned to one or more Agile scrums. The DevOps Engineers are responsible for helping the scrum build, test and release software themselves using existing tools and APIs.
The DevOps Engineers don't touch the software code or support the apps in production (SREs do that), and they don't manage the cloud infrastructure (there is a separate "Platform Engineering" team for that).
Rather they help the app developers implement the right APIs in their apps to make sure things like logging, monitoring, unit testing, containerisation are all implemented and that configuration, secret storage and so on are all done properly.
"DevOps Engineer" seems to be okay, alongside SRE and Platform Engineer (for infrastructure), but in the spirit of the "DevOps as a culture, not a job title" I'm wondering if there is a better option for this type of CI/CD/Pipeline role?
https://redd.it/p2wy88
@r_devops
Domain knowledge for DevOps?
I am interviewing for a higher position (slightly inclined towards the business side) and the recruiter wants to know my domain knowledge. I was stumped because I have worked with banking clients, audit firms, healthcare and data analytics startups.
As a DevOps engineer, does the domain really matter, since it's basically the same flow (SCM, IaC, config, CI/CD, monitoring)? I know the product we are building, but I don't really know the nuances of these different sectors.
What domain knowledge should I look into if I have worked primarily for banking and audit clients?
PS: One pointer could be the difference in security audits across these sectors. For example, healthcare has HIPAA.
https://redd.it/p31fol
@r_devops
Lost at a new job: is it normal, and how do I overcome it?
So this is my first DevOps job ever. It's at a startup, and they've given me projects that I need to complete. I told them in the interview that my expertise with all the tools (Terraform, Docker, etc.) is foundational and basic.
They seemed fine with that, otherwise I wouldn't have gotten the job. But I'm actually lost as to what is going on and what I'm doing, and it's just the first week. The only thing I've got is what they want me to do, and that's it.
I have been reading documentation and white papers for the tools I need to learn. But I'm not too sure if I should tell them I need some mentoring, or if that would be an annoyance. I'm fine doing the work on my own; I just need to know how to do it.
Last thing I want is for them to feel like they’re having to babysit me.
https://redd.it/p3377h
@r_devops
Dbt founder Tristan Handy on the changing face of the data stack
>“I don’t think it’s that [self-serve analytics] are going to get more ‘complex’—it’s that they’re going to get more ‘sophisticated’ ... The advancement that we saw in computer interfaces in the latter half of the 20th century was an increase in technological sophistication, but a decrease in end-user complexity.”
https://mixpanel.com/blog/tristan-handy-changing-data-stack/
https://redd.it/p32z4g
@r_devops
AMA Alert! We’re from Devtron Labs, one of India’s first open source platforms for Kubernetes
We’ll be going live at 10pm EST and we look forward to your questions on DevOps, Kubernetes, running a start-up and working in the tech industry!
Check us out here - https://devtron.ai
https://redd.it/p36c74
@r_devops
How is Bitbucket for a CI/CD pipeline?
Is anyone using Bitbucket for CI/CD? Our source code lives in Bitbucket, which is why I'm trying to see if it's worth exploring. How does it compare to GitLab? I think GitLab provides an end-to-end DevOps toolchain, right from planning through monitoring. I want to get reviews from real users...
https://redd.it/p37777
@r_devops
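For reference, Bitbucket's built-in CI/CD (Bitbucket Pipelines) is driven by a bitbucket-pipelines.yml file at the repo root. A minimal sketch — the image and commands are placeholders, not from the post:

```yaml
# bitbucket-pipelines.yml
image: node:16  # default build container for all steps

pipelines:
  default:        # runs on every push to any branch
    - step:
        name: Build and test
        caches:
          - node
        script:
          - npm ci
          - npm test
```

Compared to GitLab, Pipelines covers the build/test/deploy stages well, but planning and monitoring come from separate Atlassian products (Jira etc.) rather than one integrated toolchain.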
What tool do you use to manage ECS Deployments?
We're thinking about using Terraform to provision base infrastructure (maybe with "stub" ECS services).
It would be nice to have a simple file that engineers could manage themselves (and that can live with application code), which when applied to ECS would create/modify services. e.g. set container images, env vars, scaling settings.
A key requirement here is really being able to do this via a declarative file format, and not by running ad-hoc commands in a CLI.
Does anyone have any good suggestions?
Thanks!
https://redd.it/p38993
@r_devops
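One pattern that may fit the "stub service" idea (a sketch under assumed resource names): let Terraform create the service once, but tell it to ignore the fields a separate declarative deploy file owns, so engineers' changes aren't reverted on the next terraform apply:

```hcl
resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2

  lifecycle {
    # Hand ownership of image and scaling to the engineers' own
    # declarative deploy file (e.g. a tool like ecspresso), which
    # lives alongside the application code.
    ignore_changes = [task_definition, desired_count]
  }
}
```

Terraform then owns the base infrastructure, while the per-service file applied by CI controls container images, env vars and scaling settings — declaratively, with no ad-hoc CLI commands.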
Sharing some woes with Ubuntu and cloud-init creating a secondary IP on a single NIC
I had written out a question to ask r/devops about Packer building a template on VMware with Ubuntu 20.04.2 but was finally able to find the right combo of holding my tongue while wiggling my ears or whatever, and wanted to share. Mainly so in a month when I hit a similar wall I have somewhere for Google to find it.
The situation I was facing:
I had a weird thing where during the packer building of an Ubuntu template in VMware, the vm gets two IPs on the same nic. That fact isn't really a problem, but the resulting template has the same issue. If I build a vm off that template, either using VMware's OS customizations, not using them, or using any kind of terraform/ansible build process, the VM gets 2 IPs.
Why this bothered me was that I had done every combination I could think of with the netplan YAML files to fix it on the final product. I also tried several different fixes on the Packer build side, such as adding some user-data code to identify the NIC (ens33/ens160), setting the MAC as the dhcp-identifier, etc.
VMware has a [KB article](https://kb.vmware.com/s/article/70601) that I thought was related somehow, but their workarounds are just editing the netplan yaml files, which didn't make any lasting change.
On a newly built template, building a VM, under /etc/netplan, I was seeing
00-installer-config.yaml
00-installer-config.yaml.BeforeVMwareCustomization
50-cloud-init.yaml
50-cloud-init.yaml.BeforeVMwareCustomization
99-netcfg-vmware.yaml
The contents of 50-cloud-init.yaml:
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    ens160:
      dhcp4: true
      match:
        macaddress: 00:50:56:82:c5:59
      set-name: ens160
I have attempted the fix, but putting a 99-disable-network-config.cfg in /etc/cloud/cloud.cfg.d doesn't help, and there is already a file under that directory containing:
/etc/cloud/cloud.cfg.d$ cat subiquity-disable-cloudinit-networking.cfg
network: {config: disabled}
that is put there by the autoinstaller. Both get ignored, which I can't explain.
In the end, the right combination was adding this code in the user-data:
network:
  network:
    version: 2
    ethernets:
      ens160:
        dhcp4: true
and adding
'sed -i "s/dhcp4: true/&\n dhcp-identifier: mac/" /target/etc/netplan/00-installer-config.yaml'
as a late-command.
Also, in the build json file, having this for the boot command:
"boot_command": [
"<enter><enter><f6><esc><wait>",
"autoinstall ds=nocloud;<enter><wait>",
"<wait><enter>"
],
The difference was that at some point I had added `ip=dhcp` in between autoinstall and ds=..., because I wasn't getting a DHCP lease for some reason.
I do have to set a different network block if I'm building in my local VMware install, because locally it assigns the interface ens33 vs ens160, but otherwise it's the same.
There's code to try and let cloud-init identify the nic, but it seems to fail so I just have to set it for either local or vsphere.
Hope this is helpful to someone out there... (also sorry if I mixed up a present/past tense somewhere, this started as a question and just edited to a solution)
https://redd.it/p36zeb
@r_devops
Who still uses Vagrant, and why?
Basically the title. Are people still using Vagrant as opposed to something like containerizing an app? I just noticed that Proton is still using Vagrant and spins up a VM instead of a container to do build activities. Is this still normal, instead of doing the same or similar work with Docker?
https://redd.it/p3b75w
@r_devops
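For context, the VM-based workflow being discussed is driven by a Vagrantfile; a minimal sketch of one (box name, provider and provisioning here are generic placeholders):

```ruby
# Vagrantfile: declares a full VM, where a Dockerfile would declare a container image
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end
  # Build dependencies go in via provisioning rather than image layers
  config.vm.provision "shell", inline: "apt-get update && apt-get install -y build-essential"
end
```

A full VM gives you a real kernel and init system, which still matters for builds that touch kernel modules, systemd, or non-Linux targets; for most app builds, a container is the lighter-weight choice.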