Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Configuration as code

We are building a microservices project and need to upgrade and create automated release notes. What are the best practices?

I am considering a general database for dev and test environment to hold all secrets and configurations and do a diff between them at every release to ease the process.

I would love to have some ideas.

https://redd.it/ok03s5
@r_devops
Still using Docker Hub? You can now publish images to GitHub Packages container registry

Hi folks 👋

I guess that like a lot of you, I've been pushing my Docker images to Docker Hub, which has been and still is a good registry.

Though, if you've been following the open source ecosystem recently, GitHub Actions, and with it the GitHub Packages registry, is being more widely adopted.

I wrote up a blog article on how to manage your Node.js Docker images in GitHub Packages using GitHub Actions, which includes building and publishing them to the GitHub packages registry: https://snyk.io/blog/managing-node-js-docker-images-in-github-packages-using-github-actions/

https://redd.it/ok14tp
@r_devops
Cloud IaaS with DevOps pipeline

How do you do your devops pipelines with IaaS?

Speaking based on Azure. I do have ARM templates which declaratively describe desired infrastructure and its configuration. Then I'm deploying applications to infrastructure (whether it is PaaS service like App Service or Azure Kubernetes Service doesn't really matter).

At the beginning - in the simpler cases - my approach was to create a single devops pipeline which first applies the ARM templates and then deploys the application. The drawback is that even a very simple application change results in rerunning the ARM templates, and with slightly more complex infra this can take some time even when the infra (ARM) didn't change at all.

With the k8s and microservices it makes even less sense to apply ARM template with each microservice deployment.

So right now I think it's probably the best to have 2 separate pipelines:

1. Pipeline for infrastructure - applies ARM templates, triggered only when infra was really changed
2. Pipeline for application code - doesn't touch infra at all. Just deploys application/specific microservice.

In some cases deploying a new version of the application might also need a change in infra (a new component like a queue, Redis, whatever), but I think those are rare cases, and then the pipelines will just need to be run in the correct order.
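The infra pipeline can be gated with path filters so it only fires when templates actually change; a minimal sketch (the `infra/` folder and file names are assumptions):

```yaml
# azure-pipelines-infra.yml (hypothetical folder/file names)
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - infra/*   # only run when ARM templates under infra/ change
steps:
  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      csmFile: 'infra/azuredeploy.json'
```

The application pipeline then simply excludes that folder, so app-only commits never touch the infra stage.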

Any thoughts based on your experience?

https://redd.it/ok1bss
@r_devops
Deploy docker-compose from Github Action to remote server

I want to be able to deploy the latest docker-compose from Github Actions to a remote QA server that is accessible through SSH. One option I can think of is to get the file from git into the remote server and do docker-compose up manually. Are there any standard options available?
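One standard option is a workflow step that SSHes into the QA box and re-runs compose there; a sketch using the `appleboy/ssh-action` action (the secret names and server paths are assumptions):

```yaml
# .github/workflows/deploy-qa.yml (hypothetical names, secrets, and paths)
name: Deploy QA
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Restart services on the QA server
        uses: appleboy/ssh-action@v0.1.4
        with:
          host: ${{ secrets.QA_HOST }}
          username: ${{ secrets.QA_USER }}
          key: ${{ secrets.QA_SSH_KEY }}
          script: |
            cd /srv/app
            git pull
            docker-compose pull
            docker-compose up -d
```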

https://redd.it/ok0zxh
@r_devops
Install specific version of a package

I have a pretty simple manifest for packages that need to be installed. It has an array of package names, and then ensures they're installed:

$basicpackagelist = ['p7zip-full', 'unzip', 'python3', 'tzdata', 'make', 'build-essential']

exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}

Exec['apt-update'] -> Package <| |>

package { $basicpackagelist: ensure => 'installed' }

Thing is, some packages need to be installed on a specific version.

In that same manifest, is it possible to create some sort of dictionary that would specify the version that the package has to be?
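A hash iterated with `each` could express this; a sketch (the version strings are hypothetical):

```puppet
# Packages that must be pinned to a specific version (hypothetical versions)
$pinnedpackagelist = {
  'python3' => '3.8.2-0ubuntu2',
  'make'    => '4.2.1-1.2',
}

$pinnedpackagelist.each |String $name, String $version| {
  package { $name:
    ensure => $version,
  }
}
```

Passing a version string to `ensure` tells the package provider to install exactly that version.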

Thanks ahead!

https://redd.it/ok0ufp
@r_devops
Would you rather give your code or your container images to a third party service?

Hello!

I need some help from the collective experience of the DevOps people!

I wrote a service called WunderPreview which gives you a running staging environment for all your branches/pull requests/commits. It works similarly to a CI system in that it's triggered by GitHub when a change in your code happens; WunderPreview then grabs your code, builds and deploys your Docker container, and gives you the URL to the running staging system.

We spoke to a lot of people and some were saying: No, I don't want to give you access to my code, can't you just grab our Docker image built by our CI system and just deploy that?

I now want to know which works better for you and your company:

A) giving a third-party service access to your code so it can build and deploy your containers

or

B) giving access to your Docker images so it can deploy your existing container images

Which version do software companies you work for prefer?

Thanks for the help!

https://redd.it/ok59xc
@r_devops
How to deploy Hashicorp Vault on Kubernetes?

I started a blog series where I show you how to deploy Hashicorp Vault into Kubernetes using a Helm chart.

In this first part we explore using the Vault Helm chart to deploy it on our local Kubernetes cluster.

https://marcofranssen.nl/install-hashicorp-vault-on-kubernetes-using-helm-part-1

In the second part I will cover deploying on AWS EKS in a highly available configuration, utilizing AWS KMS for auto-unsealing of Vault.

https://redd.it/ok3g09
@r_devops
Could Kubernetes Pods Ever Become Deprecated?

Hi /r/DevOps,

Today I published an article that explores Kubernetes' deprecation policy and rules. In the article I explain how all kinds of Kubernetes objects (including core and stable APIs) could become deprecated, which I think might be interesting to some of the Kubernetes folks around here.

Here's a link to the article: https://towardsdatascience.com/could-kubernetes-pods-ever-become-deprecated-e8ee6b4b8066

Feedback is very much appreciated!

https://redd.it/ok3q2i
@r_devops
Platform Engineering: How do you do it?

About me: I have around 4.5 years of experience in both backend and full stack engineering. I just joined a new company as a senior software engineer and the first engineering hire in a satellite office. My team is the "platform" team and is just me and my manager in the head office for now but staff/principal engineers will also be hired soon.

Platform Team: The company, which uses GCP, is trying to break up a Python monolith and extract functionality into microservices. Every team is building microservices differently. The job of the platform team is to standardize the way microservices are built, tested, deployed, monitored, etc.

What I've found so far: I'm not very familiar with kubernetes and have been spending time playing with it and trying to learn what I can about it. Here's what I'm thinking the platform team should standardize:

Programming Language: (Python only at first because that is what the monolith uses)
Framework: (Flask maybe)
Communication Protocol (REST vs gRPC)
CI/CD: (helm charts that deploy to a kubernetes cluster and a tool like CircleCI maybe)
Load Testing: (k6 maybe)
Logging, Monitoring, Alerting, Tracing: (New Relic for now since the monolith uses it. Later maybe cloud native stuff like Prometheus, Grafana, Jaeger etc.)
Message Bus: (Maybe Kafka but I heard it's hard to set up and operate)
Service Discovery: (Service Mesh maybe, Istio?)

I'm probably missing a lot of stuff.

I think the platform team should deliver a repository with a sample (empty) microservice and documentation. Teams can use that as scaffolding for their microservices.

I still need to learn more about how to use kubernetes. So far I've created a cluster on GKE and deployed Google's "Online Boutique" project to it.

Question: How do you do platform engineering at your companies? I'd like to learn from other people's experiences on this topic. Are there any resources (articles, talks, podcasts, books etc.) on this topic that you know of?

Is there a better subreddit for this?

https://redd.it/ojojef
@r_devops
Getting started with Vault for an existing non-containerized app

I've got a couple of questions about Vault!

We have a bunch of Windows server applications that currently handle secrets as follows (our apps are in C#):

We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret

We're looking at implementing Hashicorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt with storing the secret in Vault in the KV engine, and just grabbing it in our apps - that takes that certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method, happy for any suggestions there. No real "questions" as such on that point.

One thing though:

We also have some certificates from vendors/partners that need to be managed; we don't generate them ourselves.

What would be the best engine for these? The PKI engine stores certs but seems to assume that it's generating them. I could simply store the encoded certs in the KV engine, but then Vault won't know that they're certificates and won't have the associated metadata, like expiration, which is important for us to track easily.
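One workaround is to keep the expiry alongside the cert as an ordinary KV field so it stays queryable; a sketch (the paths and field names are hypothetical):

```shell
# Store the PEM plus its expiry in the same KV v2 entry (hypothetical path)
vault kv put secret/partner-certs/acme cert=@acme.pem expires="2022-06-30"

# Later, read just the expiry field for monitoring
vault kv get -field=expires secret/partner-certs/acme
```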

https://redd.it/okafu7
@r_devops
Configuration of software baked into AMI

Hello, I am wondering what the common process is for configuring software baked into AMIs at instance startup. I have the following scenario:


I am building an AMI that will run a particular software (found in the OS's package repos). I am using Packer to install the required system packages and create the AMI. I also need to apply some custom configuration files to the software as well. The configuration files contain environment specific settings, and will likely change over time, so I will have that in version control. As I don't want to rebuild the AMI on every change of the configuration file, and to allow reuse of the AMI across environments, I will not be including them in the AMI. This means I will have to apply the configuration files during the instance startup. What are some options for doing this? In particular, I am curious about the following:

How to retrieve the configuration files from my version control? I don't really want to configure git access on the instance to my repository.
The configuration files might need to have secrets (ex. database credentials). I don't want to check these into our git repository, so these will have to be added in at some point in the process. We are exploring secret management tools, and might go with something like Hashicorp Vault (open to ideas).

I came up with the following process, but I am looking for critique / best practices.

Config files stored in our git repository are automatically pushed to an s3 bucket through Github Actions or some other CD process. The config files have 'filler' information in place of the secrets.
Userdata script grabs configuration files from s3 bucket. I will retrieve my secrets from my secret management tool (Vault, etc), and swap them in for the 'filler' information in the config files.

I am using Terraform for setting up almost all of the infrastructure. So I can substitute environment name into the user data script as needed to pull in the correct file, secrets, etc.
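The userdata step could look something like this sketch (the bucket name, file paths, placeholder format, and Vault path are all assumptions):

```shell
#!/bin/bash
set -euo pipefail

# Environment name substituted in by Terraform's templatefile()
ENVIRONMENT="staging"

# Pull the environment-specific config from S3 (hypothetical bucket/paths)
aws s3 cp "s3://my-config-bucket/${ENVIRONMENT}/app.conf" /etc/myapp/app.conf

# Swap the 'filler' placeholder for the real secret fetched at boot
DB_PASSWORD="$(vault kv get -field=password "secret/myapp/${ENVIRONMENT}/db")"
sed -i "s|__DB_PASSWORD__|${DB_PASSWORD}|" /etc/myapp/app.conf
```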

I am definitely looking for ideas on secret management tools as well. Currently we mostly have stuff in SSM Parameter store.

Thanks

https://redd.it/ok9u1q
@r_devops
Download SQL scripts from Maven Repo

Dear All,

I am new to Maven and am trying to upgrade a Java app which has some SQL scripts:

https://mvnrepository.com/artifact/org.camunda.bpm.distro/camunda-sql-scripts/7.12.0

I can find the scripts in the JAR, but how could I download a zip file of these scripts so that I can execute them on the DB myself instead?
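Since a JAR is just a zip archive, one option is to fetch the artifact straight from Maven Central and unzip it; a sketch (the URL is derived from the groupId/artifactId above; the archive's internal layout is an assumption to verify):

```shell
curl -LO https://repo1.maven.org/maven2/org/camunda/bpm/distro/camunda-sql-scripts/7.12.0/camunda-sql-scripts-7.12.0.jar
unzip camunda-sql-scripts-7.12.0.jar -d camunda-sql-scripts
ls camunda-sql-scripts   # check where the .sql files land
```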

Please advise.

https://redd.it/okb3dq
@r_devops
Devops Prep in one year

The short: I'll be job hunting in a year and would like to transition to devops/SRE. Paths forward for a current systems admin?

Long: Solo systems admin, graduated with a BS in Comp Sci in 2012, went straight to an MSP (I don't know why...), became a solo 'jack of all trades' sysadmin at a 100-employee, two-location medical office in 2015 and have been there ever since. Exclusively Windows besides my ELK stack, PRTG, and an internal wiki. Minor scripting of some repetitive tasks (PowerShell, cmd); I've done a little, very little, Python for a personal project. We have no cloud infrastructure. I feel pretty solid on networking concepts.

The wife and I will be moving to another state next July when she matches to a residency. No idea where. Could be East coast, West, PNW, Utah, PA, we don't know. Relevant maybe? Makes it hard to check most popular technologies in an area.

I'm digging through all the posts, stickies, etc and putting together a pile of resources to start going over. I'm reading The Phoenix Project, I've also got the DevOps and Unicorn books downloaded. Picking out websites, youtube videos, etc. I've got the roadmap, best practices, everything from the weekly thread.


I have a lot of downtime at work that could be devoted to this. (sorry current employer...) I've just got things running smooth enough that I have the downtime. I've got servers that aren't production that I can do whatever on.


What would you do in my shoes? Just read and play with the tech? Jenkins and AWS? Gitlab and Kub? Certs? Classes?


I'm not expecting to walk into a full blown senior or even midlevel devops position. I expect a "junior" job title; that's what I'm shooting for. Probably a pay cut too (I make ~$70k now).

TLDR: 'Jack of all trades' sysadmin, 6 years as an admin, comp sci degree, wants to move in to devops, has a year to prep. What would your priorities be?

https://redd.it/ok9f5n
@r_devops
Allowing KVMs to reach the internet (Question)

Hi all, I'm having a slight dilemma with a current work situation.

I've got two interfaces on my CentOS 8 machine (internal network facing- eth1), and (internet facing - eth2), and I'm working on setting up a bunch of VMs to use for development purposes.

I considered creating a bridge (br0) and adding eth1 and eth2, but I lose the ability to SSH when I do so. I was researching on other ways, and I came across the use of NAT and macvtap.

I currently have NAT "working" - the VMs can ping the host and each other, but fail with a "destination port unreachable" when pinging the internet.

Macvtap supposedly is a lightweight way of bridging interfaces, but again I was not able to ping the internet.

I've been writing a .xml template and using `virsh net-define <file>.xml` if the command matters.

Has anyone had any experience with allowing VMs to reach the internet with NAT or macvtap and could give me a bit of assistance?
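For comparison, a minimal NAT network definition for `virsh net-define` might look like this sketch (names and addresses are hypothetical; the `dev` attribute pins outbound traffic to the internet-facing NIC):

```xml
<!-- devnet.xml: hypothetical NAT network bound to the internet-facing eth2 -->
<network>
  <name>devnet</name>
  <forward mode='nat' dev='eth2'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>
```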

https://redd.it/okaucj
@r_devops
What to put on Tinder bio?

Completely serious question, but I'm trying to figure out which title is the most effective.

Devops engineer? Most people probably don't know what this means

Cloud engineer? A bit better but similar issues I think

Software engineer? I feel like this might be the one but it's a bit ambiguous

Any advice is appreciated

https://redd.it/okfbxk
@r_devops
How to control access for new users to run certain Ansible Playbooks to setup their work environment only?


Right now we're turning as many low-level tasks as possible, from creating users to locking user accounts, into Ansible playbooks, aka IaC. These playbooks are stored in a git repo with a BitBucket front end.

We also have two Ansible Playbooks that will automate the creation of a user's .gitconfig file and install software from a RHEL repo, for a new user setup.

My question: ideally we would like a new user to sit down at their computer, with Ansible engine installed, and have them run only whatever playbooks are needed to get them set up to work.

How can we go about this so that a user can only run certain playbooks and only has privileges to run those playbooks? And once the new user setup is done, that is it.

https://redd.it/okgv9s
@r_devops
moving from a sysadmin/MSP role to DEVOPS

Hi everyone,

I'm posting to see if I can get any insights on how to transition from my current role, into more of a DevOps role at a software company.

Currently, I'm working at a software distributor, within its managed services team. We mainly look after cloud-based environments as a 'software as a service model', where customers utilise the software/platform, and we deploy then eventually manage the servers along with the software.

Because our company doesn't do any development, I feel like I am missing out on the CI/CD-related experience if I were to start applying for DevOps/SRE jobs. For context, my relevant experience after 2 years at my current role includes:

- automating ETL processes, data backups, software patches using Python

- created Azure runbooks to schedule Azure environments

- used Terraform and Kubernetes to deploy environments

- used CloudWatch to monitor AWS resources and created Python scripts to parse IIS logs

- managed AD users, networking and security configuration, software licenses, and SSL certificates


I also have all three AWS associate certificates along with the CKA


Essentially, my main worry applying for DevOps jobs is that I have never worked at a company that had developers pumping out development for software. Therefore, I haven't really been involved with the CI/CD process that's a core foundation of DevOps. I have experience developing my own applications and have deployed them to IIS, although I don't imagine that is anywhere near the same as deploying heavily-used applications on production environments.


Also, although I have a Comp Sci degree, my role over the past three years hasn't been development-heavy. I hear that you need to be a decent SWE as well.


Does anyone have any insights on what I can do to transition from my current role to DevOps?

https://redd.it/okgamp
@r_devops
Junior Cloud Engineer Interview

I have an interview on Friday for a Junior Cloud Engineer position, and I'm currently a sysadmin. I was wondering if anybody could provide an idea of what would be asked in this interview that I probably wouldn't be asked in a sysadmin or desktop tech interview. At this point, I know what I know, so I won't learn substantially more between now and then. The main thing is being nervous about completely embarrassing myself. I'd like to think I interview well, but cloud engineer is in a different class of roles than I have interviewed for, and I don't want to be blindsided by the unexpected.

I know there are many jobs out there and it's not the end of the world if I don't get it, but I live in the "lesser" city of a two-city metroplex (in terms of IT job availability), and for once there is a GOOD job at a great company that is a vertical career move for me, so my nerves are through the roof. My commute would go down by an hour+ if I got this job, so I'm really gunning for it.

The JD qualifications pretty much only ask for some basic-to-intermediate Windows experience, basic virtualization experience, and basic networking, which I do have. I have done some basic projects in AWS and Azure, but the unknown factor of what I could potentially be asked has me worked up.

https://redd.it/okijjf
@r_devops
Jenkins X

What are the capabilities of Jenkins X and what is its support for legacy Jenkins shared libraries?

https://redd.it/ojgv3n
@r_devops
AWS NAT Solution for inbound and outbound traffic?

Hi guys! I hope everyone is doing well. I've run into a problem I can't seem to figure out and am looking online for suggestions, help, etc. So any help is well appreciated

**What we need:**

* We have a customer connect to us through a VPN. In our case currently a site to site VPN setup on AWS.
* We need the customer to send traffic/data to one of our resources, but customer has to send this to an IP outside of the VPC CIDR.
* We need a device that NATs this IP into our VPC and routes traffic to a specific resource. We also need the outbound traffic to go through the NAT back to the customer.

**What we've checked:**

* We've looked at the Transit Gateway, NAT Gateway, Client VPN... But we can't find a valid way of doing this.
* The Transit Gateway doesn't seem to do NAT, and we can't figure out a way of using the Transit Gateway together with the NAT Gateway to accomplish what we need.
* It also doesn't seem to be possible to configure the NAT Gateway to NAT specific IPs to specific resources for both in and outbound traffic.
* We've seen the option of using a NAT Instance (which AWS seems to have deprecated in favor of the NAT Gateway...), and think that maybe this is the least complicated method?

Simple diagram to depict what we're trying to achieve:

[https://forums.aws.amazon.com/servlet/JiveServlet/download/8-343034-989711-34061/aws-nat.jpg](https://forums.aws.amazon.com/servlet/JiveServlet/download/8-343034-989711-34061/aws-nat.jpg)

https://redd.it/oiuhoo
@r_devops
Any team leaders on this sub?

I am curious to know what team leaders (whose teams participate in DevOps) think of a project I'm working on. Please, please, please find holes and critique as if you were aiming to start a flame war.

**Here's a concept summary:**

* It's a continuous feedback sharing and learning tool
* DevOps is the first space I want to address because of its sheer complexity
* You map the Ops activities your team does** then write/link notes to them
* Your engineers spend about 5-minutes per day reviewing notes you and their peers share

** Mapping is done by selecting from a DevOps capability map

Now, you might be thinking, "Why don't we just do this on Slack?". Slack channels better serve ephemeral content, so why not a clean, dedicated space for sharpening your abilities?

**Expected benefits include:**

* Supplements your 1-on-1 coaching and engineers' ongoing certification studies
* Boosts efficacy of work by linking feedback and learning directly to relevant areas
* Helps neurodivergent tech workers grasp feedback and learning better due to visual context

So... let me know what you think :)

https://redd.it/oklwda
@r_devops