Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Integrating GitHub Actions with gopaddle for seamless CI/CD on Kubernetes

Continuous Integration (CI) and rolling updates to a Kubernetes service can be achieved seamlessly when using gopaddle Deck and Propeller together. However, in some scenarios you may choose a different tool or platform in place of Deck to build the Docker images, and use Propeller for deploying and managing the applications. For example, you may choose GitHub Actions and then integrate with Propeller for rolling updates. Read more to learn how to use GitHub Actions with gopaddle for seamless CI/CD on Kubernetes. #kubernetes #docker #github #devops #cloud #azure #microservices https://blog.gopaddle.io/2021/02/18/integrating-github-actions-with-gopaddle-for-a-seamless-ci-cd-on-kubernetes/

https://redd.it/lmsa7t
@r_devops
Is it possible to use placeholder variables in a cloud-init file which are then replaced by environment variables, for customizing an Ubuntu 20.04 Vagrant box?

I am trying to learn Vagrant with its experimental cloud-init feature. I can customize an Ubuntu 20.04 box by passing information like the hostname and users via a `user-data` file with the `#cloud-config` type. I am curious whether I can pass such information as `${HOST}` and `${CUSTOM_USER}` into such `cloud-init` files to provision a dynamically created Vagrant box.

So far I have tried doing it, but Vagrant does not perform the substitution: instead of the value from the environment variable, a user literally named `${CUSTOM_USER}` is created in the `/etc/passwd` file of the Vagrant image.

Help would be appreciated here, since `cloud-init` doesn't have a lot of tutorials beyond the standard examples.
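Since neither cloud-init nor Vagrant expands shell-style variables, one workaround is to render the `user-data` file from a template before running `vagrant up`. A minimal sketch with Python's standard library (the template contents and sample values here are hypothetical, not from the post):

```python
from string import Template

# Hypothetical cloud-config template; in practice this could live in a
# user-data.tmpl file next to the Vagrantfile.
TEMPLATE = """\
#cloud-config
hostname: ${HOST}
users:
  - name: ${CUSTOM_USER}
    shell: /bin/bash
"""

def render(variables):
    # substitute() raises KeyError when a variable is missing, which is
    # safer than silently baking a literal ${CUSTOM_USER} into the box.
    return Template(TEMPLATE).substitute(variables)

# Render with sample values; in real use, pass os.environ instead.
print(render({"HOST": "devbox", "CUSTOM_USER": "alice"}))
```

Running this before `vagrant up` (and writing the result to the `user-data` file Vagrant reads) keeps the `${...}` placeholders out of the final box; `envsubst` from GNU gettext achieves the same from a shell.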

https://redd.it/tikw2o
@r_devops
Deploy an ASP.NET 6 MVC Web App on Google Cloud Run using Cloud Build

Learn how to deploy an ASP.NET 6 MVC web app on Google Cloud Run using Cloud Build.

In this tutorial, we will see a methodical way to implement Continuous Deployment (CD) of an ASP.NET 6 MVC web app on Google Cloud Run with the help of a Google Cloud Build trigger.

By the end of this tutorial, you will have a full understanding of enabling Continuous Delivery of ASP.NET 6 applications to Cloud Run via Cloud Build.

This tutorial covers in-depth concepts of working with Cloud Build triggers and Cloud Run features such as Logs, Revisions, and SLOs.

The tutorial also helps you understand how to troubleshoot Continuous Deployments on Cloud Run.

https://youtu.be/5M9yzZOJXaQ

#cloud #google #aspnetcore #postgresql #cloudstorage #cloudarchitect #devops #cicd #cloudbuild #googlecloudplatform

https://redd.it/yd2zip
@r_devops
cloud-init and NoCloud data source

If I understand the cloud-init docs correctly, it is possible to store the user-data configuration as part of a VM under /var/lib/cloud/seed/nocloud:

tree /var/lib/cloud/seed/nocloud
├── user-data
└── meta-data

My user-data file looks like this:

#cloud-config
runcmd:
  - echo "hello baked in config"

Now my issue is that the runcmd never executes, even though cloud-init picks it up correctly:

util.py[DEBUG]: Read 408 bytes from /var/lib/cloud/seed/nocloud/user-data

Also, the runcmd module is configured properly:

cat /etc/cloud/cloud.cfg
...
cloud_config_modules:
  - runcmd

I also have an additional CIDATA volume attached to the VM to provide user-data, and the runcmd in it executes just fine.

Any idea why cloud-init is not running the runcmd in /var/lib/cloud/seed/nocloud/user-data?
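A couple of things may be worth ruling out locally. NoCloud expects both `user-data` and `meta-data` in the seed directory, and the user-data must start with the `#cloud-config` header to be handled as cloud-config. Also, since the attached CIDATA volume and the seed directory are both NoCloud sources, it is worth checking which one actually wins datasource selection: user-data comes from a single source and is not merged across sources, so the seed file can be read during discovery yet never applied. A quick sanity-check sketch (the demo seed contents, including the instance-id, are made up):

```python
import tempfile
from pathlib import Path

def check_nocloud_seed(seed: Path) -> list:
    """Return a list of problems found in a NoCloud seed directory."""
    problems = []
    for name in ("user-data", "meta-data"):
        if not (seed / name).exists():
            problems.append(f"missing {name}")
    user_data = seed / "user-data"
    if user_data.exists():
        lines = user_data.read_text().splitlines()
        if not lines or lines[0].strip() != "#cloud-config":
            problems.append("user-data does not start with '#cloud-config'")
    return problems

# Demo against a throwaway seed directory (contents mirror the post).
seed = Path(tempfile.mkdtemp())
(seed / "user-data").write_text(
    '#cloud-config\nruncmd:\n  - echo "hello baked in config"\n')
(seed / "meta-data").write_text("instance-id: baked-001\n")
print(check_nocloud_seed(seed))  # []
```

If the seed passes these checks, comparing `cloud-init query userdata` on the booted VM against the seed file should reveal which source was actually selected.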

https://redd.it/zjfx8k
@r_devops
    "sudo sync"
  ]
}

# Provisioning the VM Template for Cloud-Init Integration in Proxmox #2
provisioner "file" {
  source      = "files/99-pve.cfg"
  destination = "/tmp/99-pve.cfg"
}

# Provisioning the VM Template for Cloud-Init Integration in Proxmox #3
provisioner "shell" {
  inline = ["sudo cp /tmp/99-pve.cfg /etc/cloud/cloud.cfg.d/99-pve.cfg"]
}
}

user-data

#cloud-config
autoinstall:
  version: 1
  locale: en_US
  keyboard:
    layout: de
  ssh:
    install-server: true
    allow-pw: true
    disable_root: true
    ssh_quiet_keygen: true
    allow_public_ssh_keys: true
  identity:
    hostname: packer-ubuntu-20
    password: "$6$qiEN6LwtNwuOZoim$8nvdVicI/.oDb5W4ynnyhToYKegBUGDEWgomK6kymT6xalkuQaoqHhAY4xcurVQ50wDEBhF.OzHUKkm4NvoNe/"
    username: packer
    realname: packer
  packages:
    - qemu-guest-agent
    - sudo
  storage:
    layout:
      name: direct
    swap:
      size: 0

https://redd.it/1189igt
@r_devops
Moving away from base64 in YAML

We have this current code in a userdata.yml file and it works. It's read when we do `terraform plan/apply`.

#cloud-config
---
users:
  - default

write_files:
  - path: /usr/local/bin/load_envvars.sh
    owner: root:root
    permissions: "755"
    encoding: b64
    content: |
      IyEvYmluL2Jhc2gKCmVjaG8gIkxvYWRpbmcgdmFyaWFibGVzIgpzb3VyY2UgL2V0Y
      y92YXJpYWJsZXMvY2xvdWRfY29uZmlnLnNoCgo=


I'd like to change it to the code shown below, because the base64 encoding is becoming a pain for all of us. I'd like to test it locally first on my computer, but I am not sure what tool to use or what to install. I'm on a Mac. I was thinking of generating the load_envvars.sh file from the YAML using some local tools as a proof of concept.

Unfortunately, the project that needs it is only configured in production. That means if I make some git commits to the project using the code below, I might break it. Of course, that shell script is just an example. I don't know if my idea below is valid and whether it will work.

#cloud-config
---
write_files:
  - path: /usr/local/bin/load_envvars.sh
    owner: root:root
    permissions: "755"
    content: |
      #!/bin/bash

      echo "Loading variables"
      source /etc/variables/load_config.sh
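The round trip can be tested locally with nothing beyond Python's standard library: decode the existing blob to see exactly what the old entry installs, then compare it with the plain-text draft. A small sketch (the blob is the one from the file above):

```python
import base64

# The b64 blob from the current userdata file (line-wrapped in the YAML).
ENCODED = (
    "IyEvYmluL2Jhc2gKCmVjaG8gIkxvYWRpbmcgdmFyaWFibGVzIgpzb3VyY2UgL2V0Y"
    "y92YXJpYWJsZXMvY2xvdWRfY29uZmlnLnNoCgo="
)

# Decode to recover the shell script that cloud-init would write to disk.
decoded = base64.b64decode(ENCODED).decode()
print(decoded)
```

Decoding is worth doing before switching formats: here the old blob sources /etc/variables/cloud_config.sh, while the plain-text draft sources /etc/variables/load_config.sh, so the two versions are not yet equivalent. The resulting YAML can also be validated with cloud-init's schema subcommand (`cloud-init schema --config-file`) on any Linux box or VM.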

https://redd.it/1cop2xr
@r_devops
Network config with Packer QEMU on Ubuntu 22.04

Hi guys,

I want to build an Ubuntu 22.04 image with Packer, and I want to set up the network via the user-data file in the http folder. I use this user-data file, but after Packer builds the image, the Ubuntu VM has no IP address.
Where is my mistake?
```
#cloud-config
autoinstall:
  version: 1
  locale: en_US
  keyboard:
    layout: us
  ssh:
    install-server: true
    allow-pw: true
  packages:
    - qemu-guest-agent
  user-data:
    preserve_hostname: false
    hostname: packerubuntu
    package_upgrade: true
    timezone: Europe/Berlin
    chpasswd:
      expire: true
      list:
        - user1:packerubuntu
    users:
      - name: admin
        passwd: $6$xyz$74AlwKA3Z5n2L6ujMzm/zQXHCluA4SRc2mBfO2/O5uUc2yM2n2tnbBMi/IVRLJuKwfjrLZjAT7agVfiK7arSy/
        groups: [adm, cdrom, dip, plugdev, lxd, sudo]
        lock-passwd: false
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
      - name: user1
        plain_text_passwd: packerubuntu
        lock-passwd: false
        shell: /bin/bash
  network:
    network:
      ethernets:
        ens3:
          critical: true
          dhcp-identifier: mac
          dhcp4: true
          dhcp6: false
```
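One likely culprit (an assumption based on the Ubuntu autoinstall and netplan docs, not a confirmed diagnosis): the `network:` section takes netplan syntax and needs a `version: 2` key inside it; without that, the generated netplan config can be rejected and the VM comes up without an address. The nested form would then look like:

```yaml
# under the top-level autoinstall: key
network:
  network:
    version: 2
    ethernets:
      ens3:
        critical: true
        dhcp-identifier: mac
        dhcp4: true
        dhcp6: false
```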

https://redd.it/1dspjyo
@r_devops
Pull request testing on Kubernetes: vCluster for isolation and costs control

This week’s post is the third and final in my series about running tests on Kubernetes for each pull request. In the first post, I described the app and how to test locally using Testcontainers and in a GitHub workflow. The second post focused on setting up the target environment and running end-to-end tests on Kubernetes.

I concluded the latter by mentioning a significant quandary. Creating a dedicated cluster for each workflow significantly impacts the time it takes to run: on GKE, it took between 5 and 7 minutes to spin up a new cluster. If you instead create a GKE instance upfront, you face two issues:

- Since the instance is always up, it raises costs. While they are reasonable, they may become a deciding factor if you are already under budget pressure. In any case, we can leverage the built-in cloud autoscaler; note also that the costs come mainly from the workloads, while the control-plane costs are marginal.
- Worse, some changes affect the whole cluster, e.g., CRD version changes, since CRDs are cluster-wide resources. In this case, we need a dedicated cluster to avoid incompatible changes. From an engineering point of view, this requires identifying which PRs can run on a shared cluster and which need a dedicated one. Such complexity hinders delivery speed.

In this post, I’ll show how to benefit from the best of both worlds with vCluster: a single cluster with testing from each PR in complete isolation from others.

Read more...

https://redd.it/1iwhrz2
@r_devops
Connecting to Cloud SQL From Cloud Run without a VPC (GCP)

According to this post that was recently sent to me, it's not necessary to create a VPC, and doing so would create a network detour: traffic would go out of a GCP-managed VPC to your own VPC and back to their VPC. I'm wondering what everyone's thoughts are on this sort of network architecture, i.e. enabling peering to make this connection happen. As it stands, it seems I wouldn't be able to use IAM auth with this method and would need dedicated Postgres credentials for my Cloud Run jobs. One, is this a valid way of making this connection? And two, should I actually be using dedicated credentials (instead of IAM tokens) in production? Lastly, is there any reason to do all this instead of just using a Cloud SQL Connector? In my case, regarding the connector, there is no support yet for psycopg as a database adapter, but that is changing soon. In the meantime, I'd have to use asyncpg if I wanted to use a connector.

https://redd.it/1mbngxm
@r_devops