Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
All the DevOps engineers I have met are against using Microsoft products (Windows, .NET Core, C#, etc.). I am curious if the DevOps engineers here on Reddit are also against Microsoft software products, and what is your rationale?

I have only met a couple of DevOps engineers in my career, and they were all against Microsoft (Windows, .NET Core even though it's cross-platform, C#, etc.) and would always prefer non-Microsoft solutions. I am curious whether that is the same with the DevOps engineers here on Reddit, and what is your rationale?

https://redd.it/l8hbq0
@r_devops
Creating Kubernetes Resources via UI and creating testing/staging environments

I'm currently planning a DevOps project for university and I would really appreciate any tips and recommendations.

Our institute is involved in a system/application in which you can model an architecture, meaning you can create components and interfaces that represent microservices and the connections between them.

For my project, there are basically two goals/features:

1. Make it possible to create the Kubernetes resources (Pods and Services) for the architecture modelled with the given system. The system lacks inputs for values that the Kubernetes resources need, such as Docker images and ports, so I would need to create inputs for these values.

2. Once we have created the Kubernetes cluster, we want to be able to create different testing/staging environments.

Here are my initial ideas on how I would implement this:

1. Feature: For each component we have, we create input fields where we can fill in the information needed to create the Kubernetes Pods/Services, such as Docker images and ports. Then we create the Kubernetes resources with this information via the Kubernetes API.

Here is my first question: do I really have to create these UIs from scratch, when they basically just serve as a slightly simplified substitute for YAML files, or are there any ready-made solutions for this?
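A minimal sketch of what feature 1 could look like without a bespoke UI: collect the missing values (the component name, image, and port below are made-up placeholders) and template them into a manifest that kubectl or the Kubernetes API can consume. Tools like Helm or Kustomize do essentially this templating for you.

```shell
# Template a Pod manifest from the values a UI (or prompt) would collect.
# name/image/port are hypothetical inputs, not part of the original system.
name="billing-service"
image="nginx:1.21"
port=8080

cat > "/tmp/${name}.yaml" <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ${name}
  labels:
    app: ${name}
spec:
  containers:
  - name: ${name}
    image: ${image}
    ports:
    - containerPort: ${port}
EOF

# Applying it needs a live cluster, so it is left commented out here:
# kubectl apply -f "/tmp/${name}.yaml"
cat "/tmp/${name}.yaml"
```

The same substitution works for a Service manifest; the UI then only has to gather the handful of values the model doesn't already contain.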

2. Feature: To create testing environments, I just create a namespace, for example `beta`, then take the Kubernetes resources already created for the component, allow changing some values (for example, using an updated Docker image), and apply these resources in the new namespace.

Is there anything wrong with creating testing environments by creating new namespaces? Are there better solutions? I have heard about Terraform, which apparently can automate creating staging environments; could this be useful for my project?
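The namespace-per-environment idea can be sketched as: take the existing manifest, rewrite the namespace and any values you want to change (the file names and image tags below are invented), and apply the result. This is roughly what Kustomize overlays or Helm values files formalize.

```shell
set -e
mkdir -p /tmp/envs
# A stand-in for the already-created resource in the default namespace.
cat > /tmp/envs/app.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: default
spec:
  containers:
  - name: app
    image: myapp:1.0
EOF

# Derive the beta environment: new namespace, updated image.
sed -e 's/namespace: default/namespace: beta/' \
    -e 's|image: myapp:1.0|image: myapp:1.1-beta|' \
    /tmp/envs/app.yaml > /tmp/envs/app-beta.yaml

# With a cluster available you would then run:
# kubectl create namespace beta
# kubectl apply -f /tmp/envs/app-beta.yaml
cat /tmp/envs/app-beta.yaml
```

Namespaces give you name and quota isolation within one cluster; if the environments must not share nodes or cluster-wide resources, separate clusters (which Terraform can provision) are the heavier alternative.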

How would you implement these ideas? Any recommendation and related works are appreciated!

https://redd.it/l7vhdx
@r_devops
Why is the AWS DevOps Pro Cert so valued?

The only real DevOps service the exam tests that is used in a lot of places is CloudFormation. Apart from that, I don't see a single other service in the cert that is relevant to DevOps in the real world. I have worked in many organisations (both big and small, all in a DevOps role), and these are the services I have always used:

- Terraform

- Ansible

- Docker

- Kubernetes

- Prometheus

- Python scripting

- CloudFormation

- Jenkins

So, as you can see, apart from CloudFormation, none of the services tested in the DevOps Pro exam relate to the tools DevOps engineers actually use every day. If I were a recruiter, I would personally hire someone who knows the tools mentioned above over someone who has an AWS DevOps Pro cert.

https://redd.it/l7v1i5
@r_devops
Is a DevOps culture possible if 'You build it, you run it' is not an option?

Hey there. I am working for a company right now that is trying to adopt DevOps and agile methods to fix the issues we have from doing waterfall software development in the past. We are somewhere in the middle, probably like a lot of other companies as well.

A question just arose in my mind: is a DevOps culture possible if 'You build it, you run it' is not an option? We develop software that is then distributed to our customers, who manage and run it, e.g. in their own data centers. We only help, for example, during the first installation or an update. Therefore, I guess our development teams are simply not able to "run" what they "built". Since quite a few of our customers run our software in an air-gapped environment, it will also never be the case that all systems are "run" by us. Additionally, our teams are not big enough to "run" all of our customers' installations. It's a bit like expecting Microsoft itself to "run" every installation of Microsoft Office on every home PC around the world. However, I guess with Office 365 Microsoft is getting into a position to actually run what they build ;). As I said, for us that is not an option.

With all this, can we even enable a proper DevOps culture? From my understanding, getting Ops very close to Dev is a key element, hence the name "DevOps" ;). But this does not look doable to me right now.

https://redd.it/l7nnmd
@r_devops
How would I pull from a remote repo and then push to a DIFFERENT remote repo and not the original repo I pulled from?

Thanks :)
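One common answer, sketched here with two local bare repositories standing in for the two remote hosts (with real remotes you would use their URLs instead of these paths): keep `origin` pointing at the repo you pull from, and add a second named remote to push to.

```shell
set -e
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q --bare upstream.git    # stand-in for the repo we pull from
git init -q --bare other.git       # stand-in for the different repo we push to

git clone -q upstream.git work
cd work
echo hello > README
git add README
git -c user.email=demo@example.com -c user.name=demo commit -qm "initial"

# origin already points at upstream; add the second remote and push there.
git remote add other /tmp/git-demo/other.git
git push -q other HEAD
git remote -v
```

`git pull` keeps tracking origin as before; `git push other <branch>` sends commits to the second remote (or `git push --mirror other` to copy every ref).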

https://redd.it/l8ol7q
@r_devops
Git Repo(s) Structure

We're just starting down the IaC path. We're looking to use GitHub for our repo(s). And that just adds more questions.

Anyone have any advice on structuring an IaC repository? Multiple repos? One big repo? Are there any best practices or examples? I'm fairly new to Git, but if we put it all in one big repo, our team will need to pull down the full repo locally, right?

I'm hoping to see some examples to better understand how we could/should set up our repo structure. Anyone have pointers?
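On the "pull down the full repo" worry: a clone does fetch the whole history, but sparse-checkout (Git 2.25+) lets each person materialise only the directories they work on. A sketch with a hypothetical monorepo containing `terraform/` and `ansible/` directories:

```shell
set -e
mkdir -p /tmp/iac-demo && cd /tmp/iac-demo
git init -q -b main origin-repo    # -b requires Git 2.28+
cd origin-repo
mkdir -p terraform ansible
echo 'resource {}' > terraform/main.tf
echo '- hosts: all' > ansible/site.yml
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm "initial"
cd ..

# Clone without checking out, then opt in to just the terraform/ directory.
git clone -q --no-checkout origin-repo partial
cd partial
git sparse-checkout set terraform
git checkout -q main
ls    # only terraform/ is present in the working tree
```

Whether to split repos at all is a separate call; sparse-checkout just means "one big repo" doesn't force everyone to work with every directory locally.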

https://redd.it/l8ujpc
@r_devops
What type of things should I ask/learn from my colleagues

I got my first cloud SRE/DevOps job two weeks ago at a medium-sized company. I am a 23-year-old university graduate who went through a four-month coding boot camp for software development, which is how I got this job.

My colleagues are a lot older; one has 20 years of prior experience as a developer, and the others have a similar amount of IT experience from different backgrounds, such as my boss, who works on cloud security.

I am wondering what the best things are to learn from and ask of them in order to become a good engineer, as currently 90% of my time is training on A Cloud Guru.

I am training on and will work with the following stack: AWS Solutions Architect, RHEL 7, Jenkins, Boto3 and Python, Ansible, Terraform, Bash scripting, Splunk, and Datadog.

https://redd.it/l8sc9n
@r_devops
Help Needed

Hello all! I'm new to configuration management. I've put together a plan for a large construction project. I need to start developing Excel spreadsheets to track CIs (configuration items). Can anyone share something they've used previously?

https://redd.it/l8r88x
@r_devops
what would be the best practice to pass env variables to docker in a pipeline

Hi everyone,

I have a multi-module Spring Boot project that I'm running as a Docker container in different environments (development, staging, and production).

*Staging and production are hosted on a simple Debian machine on DigitalOcean.

I have an Azure Key Vault that stores the database credentials and this is what my application.yml looks like:

azure:
  keyvault:
    uri: ${AZURE_KEYVAULT_URI}
    client-id: ${AZURE_KEYVAULT_CLIENT_ID}
    client-key: ${AZURE_KEYVAULT_CLIENT_KEY}
    tenant-id: ${AZURE_KEYVAULT_TENANT_ID}

Locally it's pretty simple, since I have all the .env files in the root folder of my application:

docker:
    docker run ... --env-file .env

docker-compose:
    version: '3.7'
    services:
      service:
        env_file:
          - .env

I want to pass the corresponding .env file while building/running my container in the staging/production environment, but since I don't store the .env files in my remote repository for security purposes, this approach doesn't work. So my first idea was to use CI/CD variables and inject the variables this way:

- ssh ${DEPLOY_USER}@${SSH_REMOTE_HOST} "docker run --rm -d -p 8080:8080 -e AZURE_KEYVAULT_URI=${AZURE_KEYVAULT_URI} -e AZURE_KEYVAULT_CLIENT_ID=${AZURE_KEYVAULT_CLIENT_ID} ...

*This works, but it's not correct in my opinion (what if I have 100 variables? lol)

My second idea was to use CI/CD variables but this time include all variables in one file :

- ssh ${DEPLOY_USER}@${SSH_REMOTE_HOST} "docker run --rm -d -p 8080:8080 --env-file {ENV_VARS} ...

*This didn't work out.

Anyway, is this the best approach? Would Ansible help me here (e.g. deploying the env variables to the DigitalOcean instances)?
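A likely reason the second attempt failed: `--env-file` is read by the Docker CLI on the remote host, so the file has to exist there (and `{ENV_VARS}` is also missing its `$`). One common pattern is to render the CI/CD variables into a temporary .env file and copy it over before the `docker run`. A sketch, where the `:-dummy` fallbacks and the remote path are made up so it runs outside a real pipeline:

```shell
set -e
# Render pipeline variables into an env file.
cat > /tmp/app.env <<EOF
AZURE_KEYVAULT_URI=${AZURE_KEYVAULT_URI:-https://example.vault.azure.net}
AZURE_KEYVAULT_CLIENT_ID=${AZURE_KEYVAULT_CLIENT_ID:-dummy-client-id}
AZURE_KEYVAULT_CLIENT_KEY=${AZURE_KEYVAULT_CLIENT_KEY:-dummy-client-key}
AZURE_KEYVAULT_TENANT_ID=${AZURE_KEYVAULT_TENANT_ID:-dummy-tenant-id}
EOF

# In the pipeline you would then ship it and reference it on the remote side:
# scp /tmp/app.env ${DEPLOY_USER}@${SSH_REMOTE_HOST}:/opt/app/.env
# ssh ${DEPLOY_USER}@${SSH_REMOTE_HOST} \
#   "docker run --rm -d -p 8080:8080 --env-file /opt/app/.env myimage"
cat /tmp/app.env
```

Ansible could do the same thing (template the .env from vault-encrypted variables), but rendering and copying the file from CI is probably the smallest change to your current setup.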

https://redd.it/l8pch3
@r_devops
Bare metal kube

So, have you ever heard of any company using the MetalLB load balancer in production?
In my company we are trying to run a Kubernetes cluster on bare metal, but the leaders don't want to use MetalLB since it is in beta stage (which I fully understand).
Are there any alternatives? Or maybe some company provides its own Kubernetes implementation, with a load balancer, that you can run in your company's headquarters? I don't know if VMware supports this; can you shed some light on it? Currently we map services to NodePorts, which is not a solution if we want to move the rest of the services to the cluster (right now only a few services are exposed to clients, but the target is closer to 50).
Implementing a custom load balancer seems more expensive than GKE.

https://redd.it/l8i2gk
@r_devops
What're my options for expansion?

So currently I have a single VPS running about 6 websites.

I have Traefik running from docker-compose, and all of the websites set up from docker-compose. If a website needs a database, that database runs in a container in the same compose file as the website it's required for.

I'm wanting to expand, for load balancing, and to make it so if the server fails none of the websites go down.

I tried to switch to Docker Swarm and ran into tons of issues, not all of them Swarm-specific. When I finally got Traefik up and running and solved the issue of it not acquiring SSL certificates (16 hours of work), I guessed the rest was going to be a little easier.

I started up a WordPress/MySQL container stack and... WordPress wouldn't connect to MySQL. I stumbled upon an article which states that it has something to do with the internal IP addresses and host names retaining old values when pulling a stack down and pushing it back up.

I basically gave up at this point and spun the server back up with all the compose files running on the single node. I reset the new VPS instances I had acquired and now don't know what to do.

In total I have three spare VPSes plus the one that currently hosts all the websites.

I figured swarm would be easy but it caused a lot of problems.

I don't really know what Nomad or Kubernetes are. I'm willing to learn them if they fit my needs.

Anyone got any advice?

tl;dr: Docker Swarm appears to suck hard, sometimes, and is very frustrating to fix. What are some good alternatives, and how easy is it to transition from docker-compose files?

https://redd.it/l90rxr
@r_devops
Private container registry

Hi all
I would like to use https://www.projectquay.io/ as a private container registry, but unfortunately I could not find any resources on how to install it.

Is Project Quay really open source? When I visit the site https://quay.io/, the on-premises version provided is a trial version.

On GitHub (https://github.com/quay/quay), there isn't any guide on how to install it.

It would be nice if someone could help.

Thanks

https://redd.it/l8vzns
@r_devops
Best practices surrounding password storage (hashicorp vault)

Hi,


I've been looking into Vault lately, and I am trying to figure out the best practices / most secure setup. We are not using Kubernetes/AWS/GCP; currently we are just deploying docker-compose files, and I was wondering what the best way to secure our passwords might be. By passwords I mean application-to-database credentials, or credentials to another app.


Is there a good way to secure the authentication mechanism, so that the whole token is not in configuration files, or findable through config files? I get that using a Vault token removes the need for passwords in configuration, and that is pretty neat, but how do you prevent an attacker from using that same token to still get the passwords?


Or should you have the mindset that when an attacker gains access to a machine, all passwords are "lost" anyway? Or are there some defined best practices?

https://redd.it/l8hhll
@r_devops
What are the best containers to use with Kubernetes

I read recently that Docker was deprecated for Kubernetes. What container runtime do you use instead? Also, if you deploy your app and set up your CI/CD pipeline without containers, is it easy to put everything in containers later? Or do you realistically need to add containers at the very beginning?

https://redd.it/l8sc2p
@r_devops
Monorepo Build Systems (Bazel vs Pants vs Please)

This probably gets asked somewhat often, but these tools tend to change so I want to get an opinion from people who use monorepo build systems in their codebase.

I am looking for a build system to use with Javascript, Python and gRPC. I wanted to go with Bazel since a lot of big companies seem to be using it (or a version of it) to some extent (Google, Dropbox). The main problem I have with Bazel is the Python support just seems awful. From weird namespacing bugs to a somewhat fragmented Python ecosystem, Bazel seems to not have Python integrated well.

I wanted to switch to either Pants or Please, because they seem to support both Python and gRPC. Although they lack JS support. There is also Buck, but Buck has no gRPC support which is a deal breaker.

Would appreciate any opinions/recommendations.

https://redd.it/l8gmne
@r_devops
Transferring data between two minio servers

At my workplace, we have deployed MinIO (I wasn't in charge when they did that, and the only thing I know is that it's not on Amazon and it's self-hosted). Now I have to transfer the data to a newer service which runs MinIO as well.

The first one is a Linux server I have full access to. The second one is "storage as a service". For now, I have no idea how to transfer the damn data between the two servers.

https://redd.it/l8gfgs
@r_devops
How to calculate cycle time (process time)?

Hello everyone. My team (one other guy) and I are working on a project; we have a GitHub repo and decided to follow DevOps principles to learn. We also created a Kanban board for the project directly on GitHub. Now we'd like to use the GitHub API to calculate the process time for each given task (issue), but the GitHub issues API seems to lack information about which column a given issue is in. The idea was to get the issues in the "In progress" column, save the date each one was first added to that column, and subtract that from the date the issue was closed to get the process time. Does anyone know how to use the GitHub API to get the cycle/process time?
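Wherever the two timestamps end up coming from (the plain issues endpoint indeed has no column data, so the column-entry date has to come from project/timeline events or your own tracking), the arithmetic itself is simple. A sketch assuming GNU date, with two invented ISO 8601 timestamps standing in for "entered In progress" and the issue's `closed_at`:

```shell
# Cycle time = closed_at minus the moment the issue entered "In progress".
# Both timestamps are made up; in practice they come from the API response.
started="2021-01-25T09:00:00Z"
closed="2021-01-28T17:30:00Z"

secs=$(( $(date -u -d "$closed" +%s) - $(date -u -d "$started" +%s) ))
echo "cycle time: $(( secs / 3600 )) hours"   # prints "cycle time: 80 hours"
```

It may be worth checking whether the issue events / timeline API exposes column-move events for your board type before building your own date tracking.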

https://redd.it/l95iw9
@r_devops
How did you get into devops?



View Poll

https://redd.it/l9blqv
@r_devops
Decentralize Infrastructure As Code

As a developer I love infrastructure as code, especially when it is co-located with my service code: for example, having Jenkins pipelines or k8s manifests in the same codebase as the source code for the service. One problem I face, though, is that I work with a couple of centralized repositories that manage things like secrets or Terraform definitions for my stateful backend pieces like storage (e.g. one large repository for all Terraform).

There are some good reasons these things are centralized: auditing, human gating for cost and security reviews, as well as an ability to bootstrap it all in case a new environment needs to be spun up.

The downside as a developer is that it adds friction. I need to go make changes in a separate repository, possibly wait for team-external review, etc.

I'm curious if anyone has run into similar issues and found a compromise. I was hoping to let service teams "mount" (maybe via git submodules from the centralized repository) the config related to their service, and get the benefit of the service dev team being more independent.

https://redd.it/l9j5v0
@r_devops
I've got some Kubernetes diagrams to share

Hi folks!

As a Kubernetes freshman, I've been looking into ways to customize it. And, to my surprise, I came to the conclusion that the Kubernetes API plays a very important role there. Custom resources seem like a very good design decision because they can be manipulated in the same manner as any built-in resources, such as Pods, Namespaces, or Services. However, the documentation is a bit bloated and the API structure is far from trivial. So I ended up drawing this diagram of the Kubernetes API structure. Then I turned to the Operator pattern, because custom resources without code apparently have very little use, and found myself reading tons of vague articles full of marketing speak. When I finally figured out that operators are simply Pods with custom controller logic watching and manipulating custom resources, I ended up making this animation of one operator's logic (there is also a full-sized GIF, but be careful, it's 16 MB). Since operators are actually control loops, I thought it might be easier to grasp the idea by looking at a dynamic visualization rather than a static diagram. And finally, I also wrote an introductory but concrete article about the Operator pattern, with some useful (in my opinion) links at the end.

Sharing my findings because I hope it may save time for people on a similar journey!

https://redd.it/l9gp5b
@r_devops