Reddit DevOps
269 subscribers
14 photos
31.1K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
I need help

I made an HTTP service with three endpoints using Python and Django, and created a Dockerfile with all the steps, but I don't know how to run the build and tests inside it. What commands should I write? This is my first time with Jenkins.
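For a setup like this, a minimal declarative Jenkinsfile sketch could look like the following, assuming Docker is available on the agent; the image name `myapp` and the standard Django `manage.py test` entry point are assumptions about the project:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the image from the Dockerfile in the repo root.
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Test') {
            steps {
                // Run the Django test suite inside the freshly built image.
                sh 'docker run --rm myapp:${BUILD_NUMBER} python manage.py test'
            }
        }
    }
}
```

Running the tests inside the image (rather than on the agent) ensures they see the same dependencies that ship to production.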

https://redd.it/kc72fh
@r_devops
Leveraging the newest AI patented research for more efficient IT Ops

On Wednesday 12/16 at noon there is an exciting overview of a startup innovating in the AI IT space. The company is based on NC State professor Dr. Xiaohui (Helen) Gu's award-winning patented technology. Read on for details.

Systems are becoming more complex. Customer expectations for speed and reliability have never been higher. How do you get ahead without sacrificing ROI?

The secret is adding intelligence to traditional IT operations. Join us December 16 at 12:00 PM ET and we'll show you what is possible.

You’ll learn:

- The business benefits of accurate anomaly detection for machine data

- Why anomaly detection and accurate incident prediction are valuable together

- How to automate root cause analysis to prevent downtime

- How to use AI to prevent future incidents

**https://lnkd.in/dT4sZWC**

https://redd.it/kcm8z6
@r_devops
Question from interview

> A server and a storage. Suggest a mechanism so that at any point in time it is possible to recover to the most recent image or the one before it.

Question from an interview I'm not sure about. What is the best solution? Create daily copies of the storage?

Thanks ahead!

EDIT: some more:

A sender and a receiver. The packet turnaround time is 200 ms.
1. What would be the rate, given a packet size of 1500 KB and that only 3 in-flight packets are allowed?
2. Same, with a turnaround time of 100 ms.
3. What would be the optimal N if an ack is sent every N packets, and if one of the packets gets corrupted all N packets are resent?
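For questions 1 and 2, the standard sliding-window reasoning is rate = (window × packet size) / RTT; a quick check (assuming KB means kilobytes and a fixed window of 3 packets):

```python
def window_throughput(packet_kb, window, rtt_s):
    """Sliding-window throughput: the sender can push `window` packets
    per round trip, so rate = window * packet_size / RTT."""
    return window * packet_kb / rtt_s  # KB per second

# 3 in-flight packets of 1500 KB with a 200 ms turnaround:
print(window_throughput(1500, 3, 0.2))  # 22500.0 KB/s
# Halving the turnaround to 100 ms doubles the rate:
print(window_throughput(1500, 3, 0.1))  # 45000.0 KB/s
```

Question 3 is a trade-off between ack overhead (favoring large N) and retransmission cost on corruption (favoring small N), so the optimum depends on the packet error rate.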

https://redd.it/kcjevs
@r_devops
Programming Group

Pretty soon, I'm going to start a weekly event focused on what tech I'm learning. Right now that's web technologies such as HTML, CSS, and JS. I'm currently in a class for it and will soon be learning JS. If you would like to join in, I'm thinking about starting at 6pm EST every Wednesday (except the last Wednesday of each month) via Discord.

https://redd.it/kcri48
@r_devops
Should/could I get into DevOps professionally?

BLUF: I'm kind of in a unique position and am hoping to get some career advice.

I am a junior computer science student. I've been fortunate enough to hold a job as a help desk service tech for 4 years. During that time, I've dabbled in a lot of IT infrastructure tasks thanks to my coworkers: stuff like Active Directory, CentOS, and web server (Apache) maintenance. What does it take to get into DevOps? The appeal to me is that you work on multiple systems. It seems like "advanced IT," something challenging. There is a laundry list of technologies one must learn in order to get hired, which just doesn't seem realistic to expect from a fresh graduate. I'm also afraid of getting stuck in one discipline. Say I decide to learn JS frameworks to get a job as a backend web dev. Am I going to be making roughly the same salary for the rest of my life? What should I learn now that will set me up for the high-paying job down the road?

https://redd.it/kcrebq
@r_devops
How to deal with the new Docker hub rate limit when using Code Pipeline, Cloud Build, EKS or GKE?

On November 20, rate limits on anonymous and free authenticated use of Docker Hub went into effect. Anonymous and free Docker Hub users are limited to 100 and 200 container image pull requests per six hours, respectively. This article explains how to deal with this limit when using CodePipeline, Cloud Build, EKS, or GKE.

https://redd.it/kcxm4m
@r_devops
CrowdSec, an open-source & collaborative fail2ban, built by SecOps for DevOps

Hi there,

CrowdSec is, and will always remain, an open-source (MIT license) and free security solution able to identify aggressive behavior & provide an adapted response to all kinds of attacks. The game changer is that it also enables users to protect each other. Each time an IP is blocked, all community members are informed so they can also block it.

The tool is written in Go and just hit 1.0.0; it is now built around a local REST API, allowing you to deploy it in various enterprise configurations. We built CrowdSec for the people, to make security accessible to everyone.

You can review the project here: https://github.com/crowdsecurity/crowdsec

Looking forward to your feedback!

https://redd.it/kcz2av
@r_devops
how to create a hook script to update local git branches

This looks like an ongoing pain, especially for people new to the team. I'm no git expert either.

Before I create a pull request, I update my local branches by pulling master and then rebasing my feature branch on top of the new changes in master.

Not everyone remembers to do this, which can cause problems sometimes.

Is there a way to automate this with a hook script that runs the relevant pull/merge/rebase commands before a certain git command (such as git add or git commit) is run?
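As a sketch of the underlying automation (branch names here are placeholders), a small script with a dry-run mode could be called from a client-side hook such as `.git/hooks/pre-push`:

```python
import subprocess

def refresh_branch(feature, base="master", dry_run=False):
    """Fetch the latest base branch and rebase the feature branch on it.
    With dry_run=True, return the git commands instead of running them."""
    commands = [
        ["git", "fetch", "origin", base],
        ["git", "rebase", f"origin/{base}", feature],
    ]
    if dry_run:
        return commands
    for cmd in commands:
        # Stop at the first failing command (e.g. a rebase conflict).
        subprocess.run(cmd, check=True)
    return commands
```

One caveat: files under `.git/hooks` are not versioned with the repo (part of why people forget this step), so hooks have to be installed per clone or shared via `core.hooksPath`.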

https://redd.it/kd08nu
@r_devops
The Monitoring Dilemma

Hi all,

I was tasked to set up a monitoring solution for my company's self developed videoconferencing application.

The stack is mostly composed of:

- Azure PgSQL and Redis SaaS

- Azure WebApp

- Several containerized applications (both OSS like Janus and our own code) hosted on both Azure and OVH.


I'm in the final phases of software selection and I'm strongly leaning towards Prometheus + Grafana.

Today product management told me to also consider paid/managed solutions that may be "easier to set up and for which we will have support".

I'm a technical guy and I think that, most things considered, Prometheus will be flexible enough for our needs and won't require any coding. I honestly don't see myself calling up salespeople and getting demos.

However I feel like I may be missing out on proprietary solutions and I fear I risk making a biased decision.

Any opinions? Would you feel comfortable sharing a good word about any proprietary monitoring solution?

Thanks in advance,

https://redd.it/kd716v
@r_devops
Fractal Architectures: A Software Craftsman's take on Infrastructure as Code

For several years now we have experienced the pain of automating cloud infrastructure, as we expect most of you in this community have too.

We came to a conclusion: we don't like it. In our opinion, there are two main reasons for this pain:

1. IaC solutions are not ready to be used at enterprise scale.
2. We are using these tools in the wrong way.

During the last couple of years we have worked on a framework that we have adopted at scale with our customers during this wonderful 2020.

We would like to share it with you, hear your take on it, compare notes and keep learning!


If you are interested, have a look at it here: https://yanchware.com/content/fractal-arch-iac and let us know what you think!

https://redd.it/kd758t
@r_devops
Work

Do entry-level DevOps engineering jobs require degrees, and can legally blind people do the work?

https://redd.it/kd6mqa
@r_devops
Alerting from screenshot

Hi techies, as part of my new project I have been hunting for a monitoring tool that can send a screenshot of the dashboard when a configured failure occurs, instead of sending a message to email or a Slack channel. I tried Dynatrace as well as Google Stackdriver but couldn't find any such option.

Can anyone please advise?
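Not a Dynatrace/Stackdriver answer, but one workaround worth noting: Grafana with the image-renderer plugin exposes a `/render` endpoint that returns a panel as a PNG, which an alert script can fetch and attach to a Slack message. A sketch of building such a URL (the host, dashboard UID, and slug below are made up):

```python
from urllib.parse import urlencode

def panel_render_url(base, dashboard_uid, slug, panel_id, width=1000, height=500):
    """Build a Grafana panel-render URL (requires the grafana-image-renderer
    plugin on the server). Fetching it returns the panel as a PNG."""
    query = urlencode({"panelId": panel_id, "width": width, "height": height})
    return f"{base}/render/d-solo/{dashboard_uid}/{slug}?{query}"

print(panel_render_url("https://grafana.example.com", "abc123", "my-dash", 2))
```

The PNG can then be downloaded with an authenticated GET (API token) and posted via Slack's file upload API.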

#monitoring

https://redd.it/kd5fu4
@r_devops
Internal API dev team asking DevOps to solution rate-limiting?

[Discussion]

Hey all, I have an internal dev team asking the DevOps group to come up with a rate-limiting solution for an internal-only API service. Occasionally the service gets hit hard by various jobs that run across the company.

My initial reaction was "sure, I'll put a WAF on there". However, the more I think about it, the more I feel like rate limiting should be built into the application, specifically because this is a non-public endpoint and we probably don't want to just flat-out block requests that hit a rate limit.
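If the limiter does end up app-side, the usual building block is a token bucket. A minimal, framework-agnostic sketch (not tied to any particular stack; over-budget callers could get a 429 with Retry-After, or be queued, rather than dropped):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

For internal callers, per-client buckets (keyed by a caller identity header) let you throttle the noisy jobs without affecting everyone else.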

Thoughts?

https://redd.it/kd30e0
@r_devops
Nifty tool name escapes me ...

There is a nifty little utility that runs on Linux (possibly Windows?) that dumps every bit of software configuration and operating system info into JSON output.

It is used extensively behind the scenes by Chef.

I want to say its name is 4 characters long, possibly Hawaiian in nature.

I haven't used it for about 7 years, so I'm looking for help recalling the name.

https://redd.it/kdeigk
@r_devops
Cost center versus profit center

You may have heard of the cost center vs. profit center issue, where people in profit centers get paid and treated better than people in cost centers. People in profit centers make money for the company, whereas people in cost centers cost the company money. I can't help but think that a typical DevOps team working in a silo is a cost center, whereas a member of a feature team with a DevOps focus is in a profit center. In times of company downsizing, individuals in cost centers are usually the first to go.

What are the things developers in a cost center do that I should avoid if I want a better career trajectory? Things I can think of are: maintaining systems, sec ops, building infrastructure, and automated tests. Don't get me wrong, these are good skills to have, but it is very easy for the business to dismiss them as they cost the company money instead of earning it.

https://redd.it/kdebzt
@r_devops
Detect and Block Exploit Attempts for Kubernetes Vulnerability: CVE-2020-8554 Man in the Middle (MiTM) Attack Using Kubernetes Service Resources

Read the blog here.


TL;DR

Kubernetes CVE-2020-8554 enables an attacker to intercept traffic from other pods (or nodes) in the cluster if the attacker can create or edit Services and pods. The vulnerability was originally discovered almost a year ago, revealing a design flaw that affects all Kubernetes versions.
Exploiting this weakness requires, at a minimum, RBAC permissions to create, update, or patch Service resources. Specifically:
An attacker able to create a ClusterIP Service and set the spec.externalIPs field can intercept traffic to that IP.
An attacker able to patch the status.loadBalancer.ingress.ip field of a LoadBalancer Service can intercept traffic to that IP.

What you can do about it:

At this point, CVE-2020-8554 does not have a software update that mitigates the issue. Users are advised to implement fine-grained access restrictions, using RBAC policies and admission controllers such as an OPA Gatekeeper constraint, among others.
Scan Kubernetes audit logs for evidence of attempts to exploit this CVE; creating a new Service or modifying an existing one leaves traces in the audit log.
Monitor Kubernetes resources and entities for attempts at Service creation or modification that would allow attackers to intercept traffic.
Use admission-controller policy logic to deny and alert on external-facing or unauthorized Ingress controllers and Services.
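For the admission-controller route, a Gatekeeper constraint restricting external IPs might look like the sketch below, assuming the `K8sExternalIPs` constraint template from the Gatekeeper policy library is already installed in the cluster:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sExternalIPs
metadata:
  name: deny-external-ips
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Service"]
  parameters:
    allowedIPs: []   # empty allow-list: no Service may set spec.externalIPs
```

If some Services legitimately need external IPs, list them under `allowedIPs` instead of leaving it empty.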

https://redd.it/kd0978
@r_devops
Hi team, greetings. Can I get some best practices for storing the HashiCorp Vault address and token?

Currently I am storing and reading those via environment variables.

But is there any other best practice, from a production point of view?
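As a baseline, the env-variable approach at least benefits from fail-fast validation; a minimal sketch (the variable names are the ones the Vault CLI itself reads):

```python
import os

def vault_settings():
    """Read Vault connection details from the environment and fail fast
    if they are missing, rather than hard-coding them anywhere."""
    addr = os.environ.get("VAULT_ADDR")
    token = os.environ.get("VAULT_TOKEN")
    missing = [k for k, v in (("VAULT_ADDR", addr), ("VAULT_TOKEN", token)) if not v]
    if missing:
        raise RuntimeError(f"missing required settings: {', '.join(missing)}")
    return addr, token
```

That said, many teams avoid long-lived tokens in production altogether, preferring short-lived tokens issued via an auth method such as AppRole or a cloud IAM method, often with Vault Agent handling login and renewal.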

Thanks in advance.

https://redd.it/kcv2r2
@r_devops
DevOps without ops

At our company we lack ops, in the sense that our ~6-person team also operates and fixes stuff whenever needed. Now people have decided it would be best to implement DevOps, which in their mind means developers also doing operations, with rotating schedules and on-call duties. The desired state is spending about 80% of the time on development and 20% on ops.

Everywhere I look, DevOps means better communication between the two teams, but I can't find anything about not having a dedicated team; the closest thing I can find is SRE. However, I'm told this is not uncommon in practice, so I'm asking everyone here: do you have experience with this, or can you point me to a blog post or something?

https://redd.it/kcu98y
@r_devops
Perfect gitflow CI in scrum - opinions?

This could be a cross-post between r/Git, r/Scrum and r/DevOps, but I believe the crux of this topic is closest to r/DevOps. I'm not a young dev anymore, but I'm quite new to DevOps, so I'm curious about your opinions, suggestions, your own experiences, etc. What I've worked out has been working pretty well so far, but when it comes to CI/CD I'm always looking for improvement.


Some background

My team (fewer than 10 devs) works in Scrum: 2-week sprints ending with a live release (with a code freeze 2 days prior). The usual story flow: READY > DEV > CODE REVIEW > QA > RELEASE > LIVE QA. We run 3 test instances that always correspond to 2 branches (the newest sprint (development) branch is always deployed to staging/TA; the newest release candidate is on the PRE instance). Docker/Helm charts/Kubernetes handle the instances and their services; GitLab hosts our repositories.


Our variation of Gitflow

We're using a custom variation of Gitflow. It's pretty much the same as Gitflow, except that we use chronologically numbered sprint branches instead of develop. Release branches are numbered in the same fashion, so it's really easy to tell in which calendar sprint a given scope of features was added. This also fixes the problem of prematurely "smuggling" features that would normally have already been pushed to develop but were not yet approved by QA.

One could say this could be solved by QAing feature branches before they are merged into a common development branch, BUT we need to test multiple features combined together (integration testing is even more important for us than testing a feature on its own). That's why we use sprint branches: their main purpose is to automatically deploy to the STG/TA instances on any commit, so that QA can test all features right away.

In case a given feature is not approved and there is not enough time left to fix it before the end of the sprint (rare, but it happens), instead of messing with cherry-picking we can merge only the approved feature branches into the release branch (and skip the sprint branch). In case the whole scope of the sprint branch is fine (90% of the time), we just merge the sprint branch to release. The release package is tested again on PRE and, if it's fine, it's released, followed by a merge to master. Critical hotfixes are pushed on hotfix/x.y branches (where x is the version of the release to be patched and y an incremental number starting from 1).


Automation

This obviously implies plenty of micromanagement around branches, but I've managed to automate it with the GitLab API and scripts.

- sprint/x branches are automatically deployed to STG/TA on any commit.

- similarly, release/x and hotfix/x branches are deployed to PRE (once built, they can be manually deployed to LIVE from the same pipeline)

- merging sprint/xx into release/xx creates sprint/(xx+1) and protects sprint/xx (preventing pushes to the old branch by accident)

- merging release/xx into master creates release/(xx+1) and protects release/xx; it also creates 2 new merge requests: sprint/(xx+1) into release/(xx+1), and release/(xx+1) into master

- there are usually 2-3 days between merging sprint to release and merging release to master (the last 2-3 days of the sprint). Because of that, automatically creating an MR from the new sprint to the new release would normally not be possible in that window (the new release branch needed as the target does not yet exist), which is why we must always keep branches 1 sprint ahead of the current sprint
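The branch-bumping part of the automation above boils down to a tiny naming helper; a sketch (the real script would wrap this with GitLab API calls to create and protect the branches):

```python
import re

def next_branch(name):
    """Given a numbered branch like 'sprint/42' or 'release/42',
    return the corresponding branch for the following sprint."""
    match = re.fullmatch(r"(sprint|release)/(\d+)", name)
    if not match:
        raise ValueError(f"unrecognized branch name: {name}")
    prefix, number = match.groups()
    return f"{prefix}/{int(number) + 1}"
```

Keeping this logic in one place avoids the sprint/release numbers drifting apart when the merge hooks fire.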


Pros

- most pros stem from the fact that inspecting branches is much more convenient than inspecting tags, i.e.:

- comparing the diff between 2 release branches is possible even from the GitLab UI (easier to inspect the scope of a release)

- chronology; easier to roll back

- Docker images are named after branches, so it takes very little time to check in k9s or the app actuators what's currently deployed (otherwise I'd have to check by commit hash or pipeline number)


Cons

- the necessity to keep an eye on using the correct branches, which change every two weeks due to Scrum, e.g. the target branch in an MR when a sprint ends (such mistakes can be prevented by protecting old release branches)

https://redd.it/kdmihv
@r_devops