Reddit DevOps
268 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Can anyone remember an interactive browser "game" explaining the Capital One hack?

I have a strong memory of a vendor creating a microsite game that walked you through ~15 steps to replicate the Capital One hack (90% confident it was that incident) with a console in the browser window, but I can't seem to find it again. Does anyone else remember it and have the link?

https://redd.it/ksz4op
@r_devops
GCP Memory Store Alternative

Could someone tell me whether using GCP's Memorystore for Redis is better, or whether a Redis Docker container with a mounted volume would be almost equivalent?

The GCP service is really expensive; I can't justify 35 USD per month for just 1 GB of storage.

I can't seem to find anything online to help me decide which approach to settle on.
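For what it's worth, the self-managed side of the comparison can be sketched as a Redis container with persistence enabled and a mounted volume. This is a sketch, not a drop-in replacement (Memorystore also covers patching, monitoring, and failover), and the names and limits below are illustrative:

```yaml
# docker-compose.yml -- minimal persistent Redis (illustrative)
services:
  redis:
    image: redis:6
    # AOF persistence plus a memory cap comparable to the 1 GB tier
    command: redis-server --appendonly yes --maxmemory 1gb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data   # AOF/RDB files survive container restarts
    restart: unless-stopped
volumes:
  redis-data:
```

The trade-off is operational rather than functional: with the container you own upgrades, backups, and high availability yourself, which is largely what the Memorystore price pays for.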

Any help or guidance would be highly appreciated :)

https://redd.it/kt1mxc
@r_devops
Is there a good APM or cloud monitoring solution for large private clouds?

In other words, is there something like Datadog for monitoring cloud and application performance that does not require sending metrics to their cloud?

https://redd.it/kt2hyp
@r_devops
AWS EKS Architecture Discussion

I’ve been tasked with designing our Kubernetes Cluster offering for AWS. The requirement is to use managed EKS clusters. I've worked primarily on GCP and Azure, so while I'm quite familiar with those clouds, AWS is new for me.

I’ve read the AWS EKS documentation front to back as well as many AWS blog posts.

I'm recommending deploying EKS with custom networking enabled so that pods do not receive IP addresses on the same subnet as the primary node interface. The benefits of this, as I understand it, are conserving IP space (pods can use non-RFC1918 addresses) and being able to set separate security policies for pods. The AWS VPC CNI plug-in accomplishes this by utilizing a secondary ENI on each node, deployed in a separate subnet.

We use kubenet in Azure, and I have to document why this is a bad practice in AWS. This is where things start to get fuzzy for me. With kubenet, since Kubernetes implements a bridge network, pods cannot communicate across nodes without a route per node. In Azure this is not such a big deal, because the limit on User Defined Routes in a routing table is 400, so you can theoretically have up to a 400-node cluster. In AWS the main VPC route table limit is 50 routes, so your theoretical cluster limit is 50 nodes.

Whew! OK, with all that said, I was wondering if we could get around the 50-node limit by using custom route tables or a Transit Gateway alongside the kubenet network plugin?


Also, I was wondering if someone could explain to me why CNI with custom networking enabled does not require a route per worker node, if the node is still NATing the pods via the secondary interface.
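For reference, custom networking on the VPC CNI is switched on via an environment variable on the `aws-node` daemonset plus one ENIConfig resource per availability zone; the subnet and security group IDs below are placeholders:

```yaml
# Enable custom networking first:
#   kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
# Then define one ENIConfig per AZ (illustrative IDs):
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-1a                     # matched against the node's AZ
spec:
  subnet: subnet-0123456789abcdef0     # pod subnet, e.g. from the 100.64.0.0/16 CGNAT range
  securityGroups:
    - sg-0123456789abcdef0             # pod-specific security group
```

Secondary ENIs for pods are then allocated from the ENIConfig's subnet, which is a real VPC subnet, so pod traffic is routed natively by the VPC rather than via per-node routes in the route table.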


Thanks!

https://redd.it/kt3g88
@r_devops
Alternatives to Terraform for AWS EKS deployments.

First off I am going to say that I'm probably going to use terraform.


I've been tasked with deploying AWS infrastructure to support EKS cluster deployments, VPCs, subnets, etc. I've used Terraform in other CSPs, and sometimes Terraform falls flat keeping up with the CSPs' APIs. My co-workers have used PowerShell and bash to call the APIs directly; I'm not interested in doing that. So what's the next best alternative? The AWS SDK with Python? eksctl? aws-cli? CloudFormation (please no)?
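For comparison, eksctl covers the cluster and VPC layer declaratively; a minimal cluster config looks roughly like this (name, region, and sizes are illustrative):

```yaml
# cluster.yaml -- deploy with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
vpc:
  cidr: 10.0.0.0/16          # eksctl can create the VPC and subnets for you
managedNodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
```

The caveat is scope: eksctl is purpose-built for the cluster itself (under the hood it generates CloudFormation), but it is less general than Terraform for the surrounding infrastructure.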


Any suggestions would be appreciated.

https://redd.it/kt3cje
@r_devops
Python exercise tips for SRE interview?

I have the next few rounds of an SRE interview coming up. The position will rely a fair amount on the ability to create tools. My background is largely in Linux administration, but I do have ~2 years of Python under my belt and ~5 with bash. I am self-taught, so I don't have any real official foundational knowledge/concepts. During the first interview, I had to solve an easy/medium-difficulty leetcode problem. When I pulled up Python, I completely blanked; I even forgot how to write a function! So I panicked and switched to bash. Thankfully I solved it in an appropriate amount of time; they liked my solution and thought I did well enough to move on to the next interview. In any case, I imagine there will be more tasks like this one. I've been doing problems on leetcode (and struggling), but I am curious: are there any other really good resources or labs/projects I could work on?
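Not a resource, but as a refresher on the syntax that blanked: most easy screens reduce to a function plus a dict. Two-sum is a common warm-up and used here purely as illustration:

```python
def two_sum(nums, target):
    """Return indices of the two numbers that add up to target, or None."""
    seen = {}  # value -> index of values already visited
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return None

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```

Drilling a handful of these patterns (hash map, two pointers, sliding window) covers a large share of easy/medium interview problems.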

https://redd.it/kta899
@r_devops
DevOps Server Admin Letter Satire

I remember a humorous letter written from the perspective of a server administrator who did not like automation or configuration management or something to that effect. Does anyone know what I am talking about? Know where I can find this? Thanks in advance!

https://redd.it/kta7n4
@r_devops
I am looking for some beginner/intermediate GitLab CI/CD pipeline guides

Hello,

I am currently hoping to transfer to a new DevOps team that is starting from scratch (something like an internship, which will be evaluated based on results).

Based on what I understand, the basics are to build a pipeline that does the build, runs tests, and pushes to production. But I don't know how, and I can't find any good sources on doing this.

Note: I still haven't checked the pinned books in the subreddit, but I think they are a very good start on my path to DevOps.

Any help will be appreciated, thanks :)

https://redd.it/kt62nj
@r_devops
login to github from terminal

Does anybody know the git commands to log in from the terminal? Kind of lost here because nothing is saving.
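Assuming an HTTPS remote, a likely cause is that no credential helper is configured, so nothing gets saved. Note GitHub no longer accepts account passwords over HTTPS; use a personal access token. A minimal sketch (adjust for your OS keychain):

```shell
# Save credentials after the first successful login
# (written to ~/.git-credentials in plain text)
git config --global credential.helper store
git config --global credential.helper   # prints: store

# The next authenticated command (e.g. `git push`) prompts once for your
# username and a personal access token, then reuses the saved credentials.
```

On macOS or Windows, the `osxkeychain` or `manager-core` helpers store the token more securely than `store`.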

https://redd.it/kt9ieo
@r_devops
Question about pull request CI strategy

I'm working at a company where all CI procedures are determined by the DevOps team, mostly without asking the developers. We have tests covering almost every part of our applications.

About a week ago one of my pull request branches couldn't pass CI, and I realized that the production branch is merged into my PR (feature) branch before the CI procedures run.

There were no CI issues with the master branch. However, when it gets merged into my PR branch, some CI checks fail. After digging into the errors, I found a non-standard configuration in the production branch that conflicts with my PR, even though what I implemented was right (following the RFCs and documentation).

Now it is my branch that is failing. I asked DevOps not to merge the production branch when running the CI procedures, because I want my PR branch to be tested in isolation, without changes from the production branch.

However, DevOps declined, saying: "This is how it works; otherwise we can't guarantee production branch stability if we don't merge it into the PR branch before the CI procedures."

While I see that the DevOps argument is valid, I'm still not convinced that CI should run with the production branch merged into the PR branch.

So what does your CI procedure look like in terms of PR branch testing, and do you think what our DevOps team is doing is legit?

https://redd.it/ktf7r0
@r_devops
Never bend the rules in an effort to prop up a fundamentally flawed system. Instead, follow the rules to the letter in order to expose the systemic problems. It's the only way to bring about change.

Just thought I'd toss that one out there, as I am sure some of you are enduring a serious mess right now.

https://redd.it/ktck69
@r_devops
Azure pipeline Variables between tasks

Hey,
I'm really struggling with variables between tasks and hope to find some help.

I have one PowerShell task which parses a YAML file and extracts a variable. I'm setting this variable like this:
Write-Host("##vso[task.setvariable variable=APPLICATION_NAME;isOutput=true]$tempApplicationName")

where $tempApplicationName holds the parsed YAML value I'm looking for. I have debugged it to make sure it has a value.

In the next task, I have a Kubernetes task, which takes inputs:

- task: Kubernetes@0
  condition: succeeded()
  displayName: "Waiting for rollout"
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: ${{parameters.kubernetesServiceEndpoint}}
    namespace: ${{parameters.namespace}}
    command: rollout
    arguments: status deployment/$(APPLICATION_NAME) -n ${{parameters.namespace}}

Resource I've been looking at:
https://medium.com/microsoftazure/how-to-pass-variables-in-azure-pipelines-yaml-tasks-5c81c5d31763

${{ variables['APPLICATION_NAME'] }} // which is compile time, so I know it won't work
$(variables['APPLICATION_NAME']) // runtime, but never worked either
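One likely issue: a variable set with `isOutput=true` is namespaced by the name of the step that set it, so the reference needs that prefix. A sketch (the step name `setVars` is illustrative, everything else is from the post):

```yaml
steps:
  - powershell: |
      # $tempApplicationName parsed from the YAML file, as in the post
      Write-Host "##vso[task.setvariable variable=APPLICATION_NAME;isOutput=true]$tempApplicationName"
    name: setVars   # required: the reference below is $(setVars.APPLICATION_NAME)

  - task: Kubernetes@0
    condition: succeeded()
    displayName: "Waiting for rollout"
    inputs:
      connectionType: Kubernetes Service Connection
      kubernetesServiceEndpoint: ${{parameters.kubernetesServiceEndpoint}}
      namespace: ${{parameters.namespace}}
      command: rollout
      arguments: status deployment/$(setVars.APPLICATION_NAME) -n ${{parameters.namespace}}
```

Without `isOutput=true`, plain `$(APPLICATION_NAME)` works in later steps of the same job; with it, the `stepName.variableName` form is required.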

Any ideas?

https://redd.it/kt246j
@r_devops
Running BOINC (voluntarily contributing compute resources to science) when my servers are under 50% CPU load in spare hours: has anyone done this? Is it good or bad (e.g., does it harm server stability, or is it not environmentally friendly)?


https://redd.it/kt0kom
@r_devops
Simple management of multiple AWS accounts with AssumeRole

As we were struggling with simple management of multiple AWS accounts for our team (dev, beta, production), we decided to use AssumeRole for its simplicity:

https://medium.com/russmediaequitypartners/simplifying-account-management-on-aws-cloud-using-assumerole-3d719bafd34f

https://redd.it/ksziyf
@r_devops
How many people here have "true" Continuous Deployment?

By "true" CD, I mean a GitOps flow where commits to master immediately kick off a deployment to the production environment.

If so, how did you get there? I think the main problem is getting your technical leadership to feel comfortable enough with your tests to allow it.

https://redd.it/ktk0cd
@r_devops
Load balancer in multi-master kubernetes

Hi, I am looking for best practices for load balancer configurations in a multi master cluster. Suppose there is also a web server which needs a load balancer endpoint. I have a few questions:
1. I assume that there will be different VIPs for the control plane and the web application endpoint, correct?
2. How do we ensure that api-server calls are not swamped by calls to the web server if the same load balancer is used? Or do people use two separate load balancers for this configuration?

I am looking for best practices in this configuration.

https://redd.it/ksym84
@r_devops
How to Continuously Deliver Kubernetes Applications With Flux CD




I took help from the link below for Flux CD:

https://medium.com/better-programming/how-to-continuously-deliver-kubernetes-applications-with-flux-cd-502e4fb8ccfe

## But I'm getting an error:

Error: can not connect to git repository with URL ssh://git@github.com/xx/nginx-kubernetes.git

Full error message:

git clone --mirror: fatal: Could not read from remote repository., full output: Cloning into bare repository '/tmp/flux-gitclone470494538'... ssh: Could not resolve hostname github.com: Try again fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. Run 'fluxctl sync --help' for usage.

https://redd.it/ksxemt
@r_devops
Recommend a solution to manage application versions and deployments

Background: we are using GitHub with AWS CodeDeploy today to deploy microservices from one central AWS account to multiple AWS accounts (dev, qa, prod etc). In short, these are triggered from the GitHub Actions UI, which invoke AWS Linux runners that build, create artifacts and use AWS CodeDeploy to create and deploy a release (cross-account deployments).

Now, I need a higher level abstraction where devs/product owners can see a list of applications (i.e. microservices), stabilities (dev/qa/prod), customers (X/Y/Z) and group them in any way. For example, it should be possible to see the current version of an app deployed in stability = qa and customer = Y, or should be possible to see all microservices running for customer Z across all stabilities.

It doesn't look like any of the AWS in-built solutions fit the bill (including CodePipeline). When I try to look elsewhere, I saw the feature set of Spinnaker ([www.spinnaker.io](https://www.spinnaker.io)) and while it has the concept of applications, it tries to do the deployments itself (as opposed to leveraging CodeDeploy).

Core capabilities I am looking for:

* Ability to trigger deployments for services (single or multiple together)
* Ability to have manual approvals (Spinnaker supports this)
* Ability to see what versions (i.e. which git SHA) are deployed where
* (Good to have) Scheduled deployments
* (Good to have) View logs of builds (good to have because GitHub Actions would store this)

Please let me know if you're aware of such a tool (other than spinnaker). This is not just a CD solution, but a higher level view of the entire software deployed.

My backup plan is to implement this from scratch.
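On the backup plan: the core of such a tool is a queryable registry keyed on (app, stability, customer), updated by the deploy job. A minimal sketch, with all names and data purely illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    app: str        # microservice name
    stability: str  # dev / qa / prod
    customer: str
    git_sha: str

# Hypothetical in-memory registry; a real one would sit behind an API
# and be written to by the GitHub Actions deploy job.
registry = [
    Deployment("billing", "qa", "Y", "a1b2c3d"),
    Deployment("search", "prod", "Z", "9f8e7d6"),
    Deployment("billing", "prod", "Z", "c0ffee1"),
]

def versions(stability=None, customer=None):
    """Filter the registry by any combination of dimensions."""
    return [
        d for d in registry
        if (stability is None or d.stability == stability)
        and (customer is None or d.customer == customer)
    ]

print(versions(stability="qa", customer="Y"))  # current version in qa for customer Y
print(versions(customer="Z"))                  # everything running for customer Z
```

The grouping views (by app, by stability, by customer) then fall out of the same filter, which is why a thin UI over a registry like this can sit on top of the existing CodeDeploy flow instead of replacing it.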

https://redd.it/ksx7ho
@r_devops
Linux Troubleshooting - Why Is the Server So Slow?

Linux Troubleshooting – Why Is the Server So Slow? (Running Out of CPU, RAM, and Disk I/O)

https://redd.it/kswycf
@r_devops
D-E-V-OoPS! vs A-G-ILE///

What is Agile?

First, we will take a look at what Agile is all about so we can better compare it to DevOps later. Agile is a methodology that is designed to work on an iterative structure so that it can adapt to changes and respond to constant feedback from the end-user. The point here is to provide constant results that meet the needs of the client.

What is DevOps?

Then we can move on to DevOps and see how it compares to the Agile framework we looked at before. DevOps is a method that derives from Agile. It is slightly different, though, because it takes into account the unique needs that come with increased software velocity.

etc. (simply go through this link: https://starweaver.com/portfolio/devops-vs-agile-everything-you-need-to-know/)

https://redd.it/ksxob8
@r_devops
Pre-baked docker images within a gitlab-runner job ?

Hey guys ,


So in a pipeline for a microservice, we have integration tests which run and do their job fine.
My issue is that we're needlessly losing time every single run on simple things like pulling in the dependencies.


So within the tests it will need Postgres / Redis / Kafka etc., depending on the service.
Is there any smart way to "pre-pull" these Docker images so that they already exist?


Like, for example, the job would look like:


integration-tests:
  stage: test
  services:
    - docker:19.03.0-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    - npm run test:integration
  tags:
    - test-runner


But running this would attempt to pull the Docker images every time.


On the runner it runs on (`test-runner`), I was unsuccessfully attempting to add the images into the Dockerfile when building it:



RUN apk add --update docker openrc
RUN rc-update add docker boot
RUN docker pull busybox


would result in


Step 35/38 : RUN docker pull busybox
---> Running in d18a87927aa0
Using default tag: latest
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?


But even then I'm not sure if that's the right approach; should I instead be trying to bake the images into `docker:19.03.0-dind`?
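For what it's worth, the `RUN docker pull` fails because no Docker daemon is running while the runner image is being built; the images would have to live in the dind service (or a cache), not the runner image. One common pattern, assuming the job image has the Docker CLI (image names below are illustrative): pull in `before_script`, ideally backed by a pull-through registry mirror so blobs are fetched once per runner host rather than once per job:

```yaml
integration-tests:
  stage: test
  services:
    - docker:19.03.0-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    # explicit, cacheable pulls instead of implicit ones during the tests
    - docker pull postgres:13
    - docker pull redis:6
  script:
    - npm run test:integration
  tags:
    - test-runner
```

Alternatively, the dind service can be started with `--registry-mirror` pointing at a local registry cache, which speeds up every pull without listing images per job.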


Sorry if this seems stupid, but any help would be appreciated.

https://redd.it/kts8wu
@r_devops