Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
is it possible to use placeholder variables in a cloud-init file which are then replaced by environment variables for customizing an Ubuntu 20.04 Vagrant Box?

I am trying to learn Vagrant with its experimental cloud-init feature. I can customize an Ubuntu 20.04 box by passing information like hostname and users in a `user-data` file of the `#cloud-config` type. Can I pass such information using placeholders like `${HOST}` and `${CUSTOM_USER}` in such `cloud-init` files to provision a dynamically configured Vagrant box?

So far my attempts have failed: cloud-init does not perform substitution, so instead of the value from the environment variable, the literal string `${CUSTOM_USER}` ends up in the `/etc/passwd` file of the Vagrant image.

Help would be appreciated here, since `cloud-init` doesn't have a lot of tutorials beyond the standard examples.
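A minimal sketch of the usual workaround, assuming a POSIX shell on the host: render the template yourself before `vagrant up`, because cloud-init does not expand shell-style variables on its own. The file names and variable names below are placeholders for your own files.

```shell
# Values to inject into the template.
export HOST=devbox CUSTOM_USER=alice

# A cloud-init template with shell-style placeholders (quoted heredoc
# so the shell does not expand them here).
cat > user-data.tpl <<'EOF'
#cloud-config
hostname: ${HOST}
users:
  - name: ${CUSTOM_USER}
EOF

# Substitute only the variables we expect; gettext's `envsubst` does
# the same job if it is installed.
sed -e "s|\${HOST}|$HOST|g" -e "s|\${CUSTOM_USER}|$CUSTOM_USER|g" \
    user-data.tpl > user-data
cat user-data
```

The Vagrantfile's cloud-init config then points at the rendered `user-data` file instead of the template, so every `vagrant up` picks up whatever the environment currently holds.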

https://redd.it/tikw2o
@r_devops
How to make this annotation right?

I'm trying to set up a config for my nginx-ingress and I think I'm doing annotation wrong, because when I added the config through ConfigMap data it worked. What am I doing wrong?

My annotations (all things I tried, not all enabled at once):

annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/proxy-body-size: 1024m
  ingress.kubernetes.io/proxy-body-size: 1024m
  nginx.org/client-max-body-size: 1024m
  nginx.org/proxy-body-size: 1024m
  nginx.ingress.kubernetes.io/client-max-body-size: 1024m
  nginx.ingress.kubernetes.io/proxy-body-size: 1024m

None of these worked.
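A hedged observation: with the kubernetes/ingress-nginx controller (which the ConfigMap above belongs to), the annotation it actually reads is `nginx.ingress.kubernetes.io/proxy-body-size`, and annotation values must be strings, so the size has to be quoted. A minimal sketch (`my-ingress` is a placeholder name):

```shell
# Write an Ingress manifest with the annotation the ingress-nginx
# controller reads; note the quoted string value.
cat > ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "1024m"
EOF
cat ingress.yaml
```

The `nginx.org/*` annotations belong to a different product (NGINX Inc.'s controller), so ingress-nginx ignores them. The ConfigMap worked because `proxy-body-size` there sets the controller-wide default; the annotation is the per-Ingress override.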

But putting this in the data of the ConfigMap did work:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.2
    helm.sh/chart: ingress-nginx-4.0.18
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","data":{"allow-snippet-annotations":"true","use-proxy-protocol":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.1.2","helm.sh/chart":"ingress-nginx-4.0.18"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
data:
  allow-snippet-annotations: 'true'
  proxy-body-size: 1024m
  use-proxy-protocol: 'true'


What did I fuck up?

https://redd.it/tir4wy
@r_devops
Testing network config changes with dev environment

Hey all. Recently, as I've been exploring networking, I've run into problems such as pushing a network config change to a remote host over SSH, only to be unable to SSH in afterwards.

My thought is that I should create a dev environment for all changes like this to make sure what I’m doing won’t break anything on the network before I do it.

Is this common practice? If so, what tools should I use? Ideally, I'd like to be able to do it on my homelab.
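Beyond a separate dev environment, one pattern worth knowing is a pre-scheduled rollback, so a bad change undoes itself before it can lock you out. A minimal sketch of the idea, demonstrated on a throwaway file rather than a real network config (`apply_with_rollback` is a hypothetical helper, not a standard tool):

```shell
# Run a command against a config file; restore the backup if it fails.
apply_with_rollback() {
  cfg="$1"; shift
  cp "$cfg" "$cfg.bak"          # keep a known-good copy
  if ! "$@"; then
    cp "$cfg.bak" "$cfg"        # roll back on failure
    return 1
  fi
}

# Demo on a throwaway file instead of /etc/netplan:
tmp=$(mktemp)
echo "good config" > "$tmp"
apply_with_rollback "$tmp" sh -c "echo broken > $tmp; false" || true
cat "$tmp"    # original contents are back
```

On Ubuntu hosts specifically, `sudo netplan try` implements the same safety net natively: it applies the new network config and automatically reverts after a timeout unless you confirm from a still-working session.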

https://redd.it/tisi52
@r_devops
Serverless lambda multi page site?

Multi page serverless site possible?

I’m using Lambda to serve my home page HTML when the API is hit.

But my home page's links and forms return Forbidden if I try to go to e.g.:
href=/about.html

Do all links need to instead fire a new function?

I can’t find much info on how to change pages on a serverless website.
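One likely cause, assuming API Gateway sits in front of the function: a Forbidden response usually means no route matches `/about.html`. The common fix is a single greedy proxy route (e.g. `ANY /{proxy+}`) sending every path to the same function, which then dispatches internally. A sketch of that dispatch idea (`pages/` is a hypothetical directory bundled with the handler):

```shell
# One handler serving several pages by switching on the request path,
# instead of one function per link.
route() {
  case "$1" in
    /|/index.html) echo "pages/index.html" ;;
    /about.html)   echo "pages/about.html" ;;
    *)             echo "pages/404.html" ;;
  esac
}
route /about.html
```

For a mostly static multi-page site, serving the HTML from S3 behind CloudFront and reserving Lambda for the form submissions is often simpler than routing every page through a function.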

https://redd.it/tiupqf
@r_devops
"offline" or air-gapped devops work

Hey all, I'm in a group where we don't have internet access in our production/staging envs and it's a pain. People are literally copying files back and forth in a somewhat manual manner to do updates, commit their code, update an image, etc.

Has anybody ever encountered an "air-gapped" situation like this (we can plug into computers and we have one terminal that we can RDP-copy to/from)? How did you deal with it? Just looking for ideas and things I can suggest that account for the fact that they want the network relatively air-gapped but are still safe. Doing a four-step copy to grab a package for my Python venv or commit my code is getting silly.
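For the "commit my code" half, one standard trick is `git bundle`: it packs a whole repository, history included, into a single file you can carry across the air gap, and the other side clones or fetches from it like a remote. A self-contained sketch:

```shell
# Build a tiny repo, bundle it, and restore it from the bundle alone.
rm -rf src restored repo.bundle
mkdir src && cd src
git init -q
git config user.email demo@example.com && git config user.name demo
echo hello > README && git add README && git commit -qm initial

# One file to sneakernet across the gap:
git bundle create ../repo.bundle --all
cd ..

# On the far side, the bundle acts as a clone/fetch source.
git clone -q repo.bundle restored
cat restored/README
```

The same shape covers Python dependencies: `pip download -r requirements.txt -d wheels/` on the connected side, then `pip install --no-index --find-links wheels/ -r requirements.txt` inside the gap, so one copy moves a whole dependency set instead of one package at a time.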

https://redd.it/tixtds
@r_devops
In your job, do you feel that traditional sysadmin tasks/work is all going to the DevOps team or being automated in the cloud with services (Azure, AWS)? Are the lines between DevOps and sysadmin getting blurry, for example with serverless, IaC, CI/CD pipelines, Kubernetes, etc.?

In your job, do you feel that traditional sysadmin tasks/work is all going to the DevOps team or being automated in the cloud with services (Azure, AWS)? Are the lines between DevOps and sysadmin getting blurry, for example with serverless, IaC, CI/CD pipelines, Kubernetes, etc.? And are we getting less and less on-premise infrastructure to manage, to the point where Windows vs. Linux doesn't really matter anymore for large enterprises?

https://redd.it/tj7sh8
@r_devops
What metrics do you use to make decisions?

I’m genuinely curious to know what metrics other DevOps engineers are using and what benefit they get from them, in particular ones that help you make decisions about prioritising your backlog.

Thanks!

https://redd.it/tjbx51
@r_devops
How to safely allow people to collaborate on TF code?

Hey,

I'm the lone DevOps engineer at my company and I manage IaC and create/remove all of the resources by myself. Recently, a person from another team asked me if they could use Terraform to provision the resources they need and I happily agreed. However, there are some concerns about security, and I've been wondering if you guys could shed some light on how you approach this at your companies.

Currently, I have a single s3 bucket with folders named dev/prod which hold the terraform state files for all the resources that belong to their respective environment.

While I could live with giving access to the dev resources' state, I feel like there is a better, more granular way. The other option would be to have this person create a pull request with the Terraform code, which would be applied by me or a CI/CD pipeline, as I don't currently have a way to test it.

Should I just create separate S3 buckets to hold the state for each team that wants to use Terraform to provision cloud resources? Or is there a better way?

Also, there is a concern about costs, as an inexperienced person could easily provision resources with pricey SKUs. How do you avoid that? I think I could make the state file read-only to allow the `terraform plan` command; however, the developer could still create these resources manually. Am I being paranoid here?

https://redd.it/tjaqcj
@r_devops
DevOps Bulletin Newsletter - Issue 43

Hey folks,
My weekly DevOps newsletter aka DevOps Bulletin -  Digest #43 is out. Check out a sneak peek of the topics covered on this weekly issue:

* **Infrastructure as Code security risks** and how to find them - This post will dive into IaC risks and focus on IaC management tools such as Terraform, cloud providers, and deployment platforms involving containers and Kubernetes. For each scenario, it will look into threats, tools, integrations, and best practices to reduce risk.
* Why you should **stop using branches for deploying to different GitOps environments** \- while ranch-per-environment mostly works, but there are some issues with it.
* **Hands-on with PostgreSQL authorization** \- how you can limit users to reading and mutating only their own data with row-level security (RLS) policies.
* Who’s attacking my server? - a hands-on tutorial on how to secure a server against **brute-forcing SSH access and visualize potential attackers IPs in a map**.
* **Contributing to complex projects** \- Mitchell Hashimoto (the guy behind Terraform & others) cover in this blog post how to approach with confidence a complex open-source project.
* **CRI-O vulnerability could allow container escape** \- A newly discovered vulnerability in the container runtime tool CRI-O could allow attackers who are able to create pods in a Kubernetes or OpenShift to break out to the underlying cluster node, effectively escalating their privileges.
* Podcast of the week goes to “**The Kubernetes Developer Experience**” by The Cloudcast - This episodes goes into how Kubernetes gain traction without a developer experience

Complete issue: [https://www.devopsbulletin.com/issues/azure-penetration-testing](https://www.devopsbulletin.com/issues/azure-penetration-testing)

Feedback is welcome :)

https://redd.it/tjcmwf
@r_devops
How painful was Log4j for you?

My team, org, and probably company STRUGGLED with log4j remediation. Giant micro service architecture meant hundreds of apps needed their repo's updated, rebuilt, and redeployed. Worse, since log4j, the company has cracked down and implemented intense scanning and remediation requirements across all image repositories. An image in a repository with any CVEs now gets escalated up to war rooms with VPs involved if its not resolved in a few days... Our CI/CD was definitely not prepared for this, and have been struggling to stay on top of our hundreds of running applications as new vulnerabilities are discovered... And we're a BIG company (like, one of the biggest).

Just wondering what other devs experienced... Do you have 100's-1000's of apps and log4j was a walk in the park for your org? Huge impact that brought everything else to a stand still? Are you taking advantage of SCA like whitesource, snyk, etc? What tools do you use that make it so easy or hard to manage high volumes of code/repo level changes like this? Does gitlab just f*ckin do it all for you? And, if a you do have 100s-1000s of apps and the next log4j scenario comes around, are you setup to automatically fix it this time? How?

Any insight anyone can provide would be super valuable! And if you want to DM me and have a deep conversation about it, that's even better -- I have a decent amount of DevOps knowledge and expertise I'm happy to pass along (I'd pay you if I could, but that's the best I got).

Thanks in advance! Really appreciate this community :)

https://redd.it/tjqnij
@r_devops
Interview with Rona Hirsch, DevOps Engineer at Komodor on ValidKube, Female DevOps engineers and women in tech

Link to the interview: https://www.youtube.com/watch?v=bNG5nRXMCFc

I particularly enjoyed the segments about women in tech. I find that Devops is a very male dominated field and it would be fantastic to have more women in the field.

https://redd.it/tjk4jp
@r_devops
Has anyone left Devops to go back to just being a Dev?

I’m starting to get extremely bored and stressed with Devops. I notice alot of developer positions ask for Devops experience and was thinking of switching back. Has anyone done this? Did you have to take a pay cut?

https://redd.it/tk2z7u
@r_devops
Build vs Buy your development platform? Which option do you choose & why?

Some companies prefer to build their own internal development platforms while others prefer to buy something that's already on the market and focus on building their product instead. I'm looking for pros & cons for each option from your own experience. What worked, what didn't, what didn't you expect to happen, etc.

https://redd.it/tkru1n
@r_devops
Learning Python Boto3 and Terraform at Home?

I'm looking to move into devops from a software development role and wanted learn Boto3 and Terraform. What is the best way to practice using these to simulate real world experience/use cases? I can do simple scripts using the documentations, but can't think of "bigger"/challenging scripts to write. Any ideas or suggestions would be great!

https://redd.it/tnbwfd
@r_devops
Running Postgres 10 in production. Should I just upgrade to 11, or use the maintenance window to jump to 12/13/14?

Might be a dumb question but I'm wondering if it's ever common to jump multiple major versions?

Obviously we would test it thoroughly in our lower environments. We're using multi-region clusters in RDS.

https://redd.it/tn1tr9
@r_devops
is it just me or there is no sonarqube equivalent in azure DevOps?

was asked to implement sonarqube, saw how its a bit clunky to install and maintain. tried to look for something more seamless and integrated into azure DevOps pipelines (so no need to config/deploy our own sonarqube/code analysis tool) but couldn't find something that competes it. am i missing the mark or there is no such solution by azure and/or the tasks in azure devops? thanks.

https://redd.it/tna3bx
@r_devops
Kubernetes, Helm and automated deployments that read the kubernetes status

For my understanding of Helm so far, it seems that it is "write only (in etcd) or deploy only".

That is, a chart done with helm deploys something in kubernetes, it can have multiple phases (with pre hooks and post hooks and so on), but it doesn't really reuse the status of what is being deployed.

Trivial example: Helm cannot read the name of a pod that gets deployed during a deployment (that pod name will have an hash, so it is cannot be precomputed before).

I know that in the best case a chart deploys what is needed without much need to use the state of what is deployed, but unfortunately time constraints and other limits often prevent the best case (for the same reason Agile is mostly never applied, or the devops aproach is also mostly never applied).

Hence the question: is there a way in Helm to read the state of what was deployed during a deployment and use it for further actions? (I know one could wrap helm in scripts and make two deployments or the like, but that is clumsy)

If Helm cannot do it, is there any other deployment manager that can do it?

https://redd.it/tnh4fq
@r_devops
How do you manage your terraform library?

Just genuinely curious what your iac practice and pipeline is like. I support many teams and find myself writing terraform for them since they either don't know terraform or won't write it given my desired standard. I am contemplating publishing a library of modules I use and having my developers use them but I am not sure how I can enforce this or allow them to combine them in different combinations. Any ideas?

https://redd.it/tn2um5
@r_devops
Data center migration due to geopolitics

Some customers of ours are either upset about what's going on in Eastern Europe or they're anxious for some reason. Anyway, they want their data moved out of our Moscow data center. This would be a massive undertaking, as there are hundreds of "tenants" to deal with, all with differing configurations. Are any of you also working at companies who also have to deal with a similar scenario? How are you handling it?

https://redd.it/tniy2n
@r_devops
What is a good way to manage 100-300+ microservices on multiple environments?

I am trying to find a good way to manage 100+ or even 300+ microservices on-prem k8s clusters that consist of deployments, services, configmaps, secrets, databases, load balancers and other configs and for now some of the good options that I can find are to:

\- Create a Helm Charts and combine multiple chars under one big Umbrella Chart - searching for good articles or experience with this

\- Manage different environments with Kustomize

\- Use Terraform for the deployments - but for some reason this seems to be very complex for management

Any advice, articles, guides or shared knowledge in managing a lot of services would be very appreciated.

Thanks!

https://redd.it/tnkwtd
@r_devops
Podman 4 still unusable on macOS

I'm macOS user which means I'm forced to use docker (we can argue but there's nothing better than docker for running multiple apps locally). I was really excited when podman 3 got released but excitement passed once I tried it. Long story short volumes were less than usable. With the release of podman 4 I got excited again. And again I hit the volumes wall. Is it me doing something wrong or is podman still not usable on macOS?

https://redd.it/tnncir
@r_devops