Reddit DevOps
268 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Cloud Native DevOps Bootcamp... worth it?

I came across this 10-week Cloud DevOps bootcamp on cloudskills.io

https://cloudskills.io/courses/cloud-native

To subscribe, it’s $27 a month with the ability to cancel anytime. Looking at the concepts and lessons.... is this worth it for someone new to DevOps and wanting to get into cloud?

The last week they also give you tips on your LinkedIn and resume to get hired/promoted, which I thought was cool.

What do you think?

https://redd.it/lp4dlf
@r_devops
Survey on the state of self-managing teams

One could argue that one of the less technical, but very important, aspects of DevOps is that teams and employees need to work autonomously and be self-managing, so they can quickly zoom in on the right solution to their challenges without having to ask for permission. Do you agree?


To get a better view of the current state of self-managing teams, we are running an international survey, and I'd like to invite you to share 5 minutes of your valuable time to answer a few questions. (Bonus: you also have a chance of winning one of the $50 AWS gift cards we're giving away to submitters.) All info stays anonymous, and of course you'll receive the resulting report in a few weeks' time.

https://forms.gle/VVbDuDhGtpBRsM9QA

Thanks!

https://redd.it/lp4qwv
@r_devops
YouTube in-video ad skip

A Chrome extension for YouTube that tracks other users' skip behaviour and uses that data to skip in-video ads such as skillshare, brilliant, world of war, raid shadow legends, etc.

Is this a good idea?

https://redd.it/lozgtd
@r_devops
Meet Harvester -> Open Source Hyperconverged Infrastructure (HCI) Software

Harvester implements HCI on bare metal servers. Here are some notable features of Harvester:

1. VM lifecycle management, including SSH-key injection, cloud-init, and graphic and serial port consoles
2. Distributed block storage
3. Multiple NICs connecting to the management network or VLANs
4. ISO image repository
5. Virtual Machine templates

I went live with Sheng to discuss the features and concepts of Harvester, with demos.

Hope you like it!

https://youtu.be/87_ODymEGC0

https://redd.it/lot4oh
@r_devops
DevOps for beginners?

I have compiled a "little" article for newbies to get started in the world of DevOps. A lot of the resources in this article have been tried and tested by me, and they have proven to be extremely easy to understand and follow. To read more, follow this link:
https://link.medium.com/q8ONxaC32db

https://redd.it/lopcte
@r_devops
How did devops work before the onset of cloud computing?

Sometimes it seems that many people rely a ton on cloud technologies and these few questions came to mind.

1. How did devops work before the onset of cloud computing?

2. What technologies would you use today to achieve this?

3. When would you use the old approach?

4. What have you learnt from it?

https://redd.it/lpnnkq
@r_devops
A (over)simplified comparison of DevOps, SecOps and DevSecOps

Mild entertainment purposes only.

**DevOps**

"Launch new code daily!"

Priority - rapid delivery of value

Bring devs and ops on the same page

More likely to use public cloud

Example - social media app


**SecOps**

"Protect this fortress!"

Priority - security above all else

Integrate security practices into ops

More likely to run on-prem

Example - human clinical trial tool


**DevSecOps**

"Mission-critical and pronto!"

Priority - scale-up securely while delivering value

Bring dev, security and ops on the same page

More likely to use hybrid infrastructure

Example - fast-growing fintech

https://redd.it/lpl3ky
@r_devops
Automate baseline deployment

I am seeking some suggestions from the greatest community ever.

So my environment consists of Linux and Windows nodes. They are connected to each other on a localised network.

Baselines are sent to me regularly, i.e. on a weekly basis. The baselines come from a team based overseas. I download the patches from an internal server and manually deploy them, as my environment is not connected to the internet or any intranet. Making the environment internet-facing is not possible at all.

With that being said, the challenge is that not all baselines are the same: some replace drivers, some edit configuration, some install updated software. So when I receive the patches, they come with a document that tells me exactly what to do, and I just blindly follow the document.


This process is very, very tedious and annoying. Imagine doing it every week.

Is there any way to automate this?


Edit: There is no way of knowing what changes a baseline makes; I will only know after I have deployed it.
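One possible direction, assuming the overseas team could ship each baseline with a machine-readable manifest alongside (or instead of) the prose document: the deployment itself becomes scriptable. A minimal sketch in Python, where the manifest format and all names are invented for illustration:

```python
import json
import pathlib
import shutil
import subprocess


def apply_baseline(manifest_path, dry_run=False):
    """Apply a baseline described by a JSON manifest of ordered steps.

    Each step is an object like {"action": "copy", "src": ..., "dest": ...}
    or {"action": "run", "cmd": [...]}. With dry_run=True, only report
    what would be done.
    """
    steps = json.loads(pathlib.Path(manifest_path).read_text())
    results = []
    for step in steps:
        if step["action"] == "copy":
            if not dry_run:
                shutil.copy2(step["src"], step["dest"])
            results.append(f"copy {step['src']} -> {step['dest']}")
        elif step["action"] == "run":
            if not dry_run:
                subprocess.run(step["cmd"], check=True)
            results.append(f"run {' '.join(step['cmd'])}")
        else:
            raise ValueError(f"unknown action {step['action']!r}")
    return results
```

For mixed Linux/Windows fleets on a disconnected LAN, a tool like Ansible (which runs fine without internet access, over SSH/WinRM) is the more standard way to express such steps, if the sending team can be persuaded to describe baselines as tasks rather than prose.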

https://redd.it/lpo6ag
@r_devops
Need some help deciding the best DevOps strategy (AWS hell)

Hello! Long story short, I'm forcing myself to learn AWS, and to practice, I'm trying to deploy a side project, but it's fighting me every step of the way. Here's what I want to end up with:

Frontend (mysite.com) <-- HTTPS --> Backend (api.mysite.com) <---> Database (RDS/Postgres)

Nothing groundbreaking: a React frontend that talks to a basic CRUD server (likely Express.js) hosted under the `api` subdomain, with a Postgres database. Ideally, I also want pipelines for the frontend and backend that deploy from GitHub.

It doesn't make sense to me to deploy to an EC2 instance as I'll have to pay for all the uptime. I tried setting up the backend API with AWS API Gateway and AWS Lambda in a serverless way, but connecting this to the RDS database was a nightmare.

I feel like this should be super simple, but it's been days of stress. Can anyone please point me in the right direction?

https://redd.it/lpmi91
@r_devops
Argo CD Vault Replacer Plugin

I recently collaborated on an Argo CD plugin called ArgoCD-Vault-Replacer. It allows you to merge your code in Git with your secrets in HashiCorp Vault to deploy into your Kubernetes cluster(s). It supports 'normal' Kubernetes yaml (or yml) manifests (of any type) as well as Argo CD-managed Kustomize and Helm charts.

The plugin came about because I'm currently pushing my company deeper and deeper into GitOps, and the thorny topic of secrets management came up. We already have Vault in place, so we went looking at the existing options available to us. For one reason or another, they weren't quite right for us, so this plugin was born. Of course, there's no guarantee that this is right for you; there are other great solutions out there.

It works by you first authenticating the Argo CD pods with Vault using Vault's Kubernetes Auth Method. Then you simply modify your yaml (or yml, or Helm, or Kustomize scripts) to point it at the relevant path(s) and key(s) in vault that you wish to add to your code.

In the following example, we populate a Kubernetes Secret with the key secretkey on the path path/to/your/secret. As we are using a Vault kv2 store, we must include ../data/.. in our path. Kubernetes secrets are base64 encoded, so we add the modifier |base64 and the plugin handles the rest.

apiVersion: v1
kind: Secret
metadata:
  name: argocd-vault-replacer-secret
data:
  sample-secret: <vault:path/data/to/your/secret~secretkey|base64>
type: Opaque

When Argo CD runs, it will pull your yaml from Git, find the secret at the given path and will merge the two together inside your cluster. The result is exactly what you’d expect, a nicely populated Kubernetes Secret.

If you’re already using Argo CD and Vault, then this is really simple to set up and start using. Please do try it out; issues, comments and PRs are more than welcome: github.com/crumbhole/argocd-vault-replacer

https://redd.it/lphgti
@r_devops
A curated collection of resources on how orgs around the world practice Site Reliability Engineering

How They SRE is a curated knowledge repository of the best practices, tools, techniques, and culture of SRE adopted by leading technology and tech-savvy organizations.

Many organizations regularly come forward and share their best practices, tools, and techniques, and offer insight into their engineering culture on public platforms such as engineering blogs, conferences, and meetups. The content in this repository is curated from these avenues.

https://github.com/upgundecha/howtheysre

https://redd.it/lpdfcg
@r_devops
Success Stories of running Autopilot on Autopilot



OpsMx Autopilot is an AI/ML-powered Continuous Verification platform that verifies software updates across different deployment stages using CI/CD pipelines, ensuring their safety and reliability in live/production environment. It automates new release verification, reducing time-consuming and error-prone manual verification processes. Autopilot uses AI and ML technologies to assess the risk of a new release, find the root-cause of issues and abnormalities for instantaneous diagnosis, and provide real-time visibility and insight about the performance and quality of new deployments to avoid business disruption.

In this blog, we discuss how Autopilot is used to analyze and improve the release and deployment of a particular product. To understand the efficiency of Autopilot, we at OpsMx decided to use it on our own product, applying the functionalities of Autopilot to itself. This is when we came up with the name "Autopilot on Autopilot".

https://redd.it/lpj3qr
@r_devops
Resources Limits Kubernetes

Hello Community,

I am trying to understand a case of CPU utilization by pods. I have a single-node k3s cluster on an 8-core bare-metal server, and I am running some benchmarks against a MySQL pod (exposed via a NodePort service) using sysbench. I've set the CPU limit to '500m', but during the tests I noticed in the top utility that the pod's process seems to use all the CPUs. Do pods use all cores irrespective of the limits defined? I need some help understanding this in more detail, as I couldn't find it in the official Kubernetes docs.
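For what it's worth: a CPU limit is enforced per container as a CFS quota (for 500m, roughly 50ms of CPU time per 100ms scheduling period), not as CPU pinning. So in top, MySQL's threads can still be scheduled across all 8 cores, while their combined usage is throttled to about half a core on average. A minimal sketch of the relevant spec (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-bench          # illustrative name
spec:
  containers:
    - name: mysql
      image: mysql:8.0
      resources:
        requests:
          cpu: "500m"
        limits:
          cpu: "500m"        # enforced as a CFS quota (~50ms per 100ms period),
                             # not as affinity to half a core
```

If you actually want containers pinned to dedicated cores, look at the kubelet's static CPU manager policy, which requires integer CPU requests and Guaranteed QoS.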

Thanks

https://redd.it/lpj1sb
@r_devops
Dynamically launch multiple auto-scaling groups with Terraform?

I want to dynamically launch & tear-down auto-scaling groups for batch processing jobs.

The thing about Terraform is, I know I can use it to launch 1/2/3/etc. auto-scaling groups, but I don't know if I can use it to dynamically launch a new auto-scaling group and then tear it down later.

Is there a good solution for this?
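For the apply-time case, a common pattern is to drive the groups from a `for_each` map, so adding or removing a map entry creates or destroys an ASG on the next `terraform apply`. A rough sketch (all names, and the launch template, are placeholders):

```hcl
variable "batch_groups" {
  type = map(object({ max_size = number, instance_type = string }))
  # e.g. { "nightly-etl" = { max_size = 10, instance_type = "c5.large" } }
}

resource "aws_autoscaling_group" "batch" {
  for_each            = var.batch_groups
  name                = "batch-${each.key}"
  min_size            = 0
  max_size            = each.value.max_size
  desired_capacity    = 0
  vpc_zone_identifier = var.subnet_ids                      # assumed defined elsewhere

  launch_template {
    id      = aws_launch_template.batch[each.key].id        # assumed defined similarly
    version = "$Latest"
  }
}
```

If groups need to appear and disappear at job runtime rather than through applies, that is usually a sign to reach for a managed batch service (e.g. AWS Batch) instead of driving Terraform from the job scheduler.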

https://redd.it/lpaq4s
@r_devops
Level-Up Your Gitconfig: Platform-specific Configurations

https://medium.com/doing-things-right/platform-specific-gitconfigs-and-the-wonderful-includeif-7376cd44994d

Wrote this article a while back, but finally got around to publishing it. It’s been a really handy concept, since now I can finally share my dotfiles across all OSes without having to do anything special to accommodate Windows, Linux or macOS.

There are a few other helpful examples in there as well to save time when navigating different platforms.
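The core trick, as I understand it, is git's `includeIf` directive. A sketch of the idea (file names are whatever you choose; `gitdir/i` matches the repo path case-insensitively, and only Windows repo paths start with a drive letter):

```gitconfig
# ~/.gitconfig — shared everywhere
[user]
    name = Jane Doe            # illustrative

# Pulled in only when the repo lives under C:/, i.e. on Windows
[includeIf "gitdir/i:C:/"]
    path = ~/.gitconfig-windows

# Pulled in only for repos under a Linux-style home directory
[includeIf "gitdir:/home/"]
    path = ~/.gitconfig-linux
```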

I am very curious how others do this as well.

https://redd.it/lq360b
@r_devops
I made a desktop app where you can monitor apps that you depend on (right from your menu bar) and get notified when they're down

I created a free & open-source app where you can:

* Select services you depend on
* Check their status in your menu bar
* Get notified when they change their status

Could you try it out and let me know your feedback?!

Website: [https://instatus.com/out](https://instatus.com/out)

Github: [https://github.com/instatushq/out](https://github.com/instatushq/out)

https://redd.it/lpvvp1
@r_devops
GitHub Flow, CI/CD pipelines and UAT deployments

I haven't really thought about my Git workflow in a long time. I'm currently working on Azure DevOps Pipelines, got stuck on something and posed the question to SO. One of the comments was essentially: "Your branch-per-environment workflow is antiquated and doesn't work well with modern Git and continuous delivery thinking... you should reconsider your workflow and take a look at Git Flow or Github Flow."

I'm trying to be more DevOps minded, and I'm always looking for a way to "do things better," so I took that as an opportunity to reconsider how I've been using Git for several years now.

My workflow has pretty much been:

* Feature branch
* PR to merge feature branch to staging
* Approval triggers CI/CD pipeline that deploys to UAT site ([staging.example.com](https://staging.example.com))
* If everything looks good and is approved, staging is merged to the production branch
* Merge to production branch triggers CI/CD pipeline (and basically same tests are run again that were run for staging) to push to production ([example.com](https://example.com))

Thus, staging branch is tied to the staging pipeline and production branch is tied to the production pipeline (hence "branch-per-environment").
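Concretely, "branch-per-environment" here means each pipeline's trigger is pinned to one branch, along these lines (illustrative Azure Pipelines YAML, not my actual config):

```yaml
# staging pipeline — runs on merges to the staging branch,
# deploys to staging.example.com
trigger:
  branches:
    include:
      - staging

# the production pipeline is identical except it triggers on the
# production branch and its deployment stage targets example.com
```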

I spent the past few hours reading up on various strategies, and GitHub Flow sounded like a good fit, even though what I'm doing now seems closer to Git Flow. We are a really small team, and even the author of Git Flow recommends using GitHub Flow instead.

But it left me with a few questions:

1. How does a UAT site fit into the GitHub Flow model?
2. If it doesn't fit into the model, is UAT just supposed to be done on the production site?
3. Obviously, a change that breaks production is a concern, and the staging layer gives peace of mind that this won't happen. So is the end goal to have your tests so rock solid that breaking changes don't slip through to production?
4. How isn't Git Flow a "branch-per-environment" scenario?

I guess I'm just looking for an elaboration of CI/CD pipelines in a GitHub Flow model.

https://redd.it/lq3an8
@r_devops
Reverse Proxy - Programmable with provisioning of TLS Certs?

I'm trying to author a SaaS/PaaS solution in my basement, and I'm running into a barrier with scalability. You see, I'd like to be able to allow clients to sign up on my website, ask them to point an A record to my IP (and they will tell my site that FQDN), and while they're working on the A record, my site has already instructed the reverse proxy to forward incoming https requests for that FQDN to my web application, and has already begun provisioning a cert with let's encrypt. Obviously, let's encrypt won't be able to issue the cert until the A record propagates.

So, that said, my point is that I'd like to figure out a way that omits me and my hands from the equation: I'd like to not have to sit at the ready to hand-enter configuration for the new FQDN in nginx. I also don't want to pay handsomely for nginx+ for the privilege of using their API. If it came to that, I'd be willing to write a microservice that sits on top of an nginx instance and listens for calls indicating a configuration change, writes out the new config, and issues a sighup to nginx. None of that is desirable though.
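For what the "undesirable" fallback would actually involve, here is a sketch of that microservice's core (the template, file paths, and upstream port are all placeholders; Let's Encrypt issuance would still have to be driven separately, e.g. by certbot, once the A record resolves):

```python
import subprocess
from pathlib import Path

# Hypothetical vhost template; doubled braces are literal nginx braces.
NGINX_VHOST_TEMPLATE = """\
server {{
    listen 443 ssl;
    server_name {fqdn};
    ssl_certificate     /etc/letsencrypt/live/{fqdn}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{fqdn}/privkey.pem;

    location / {{
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }}
}}
"""


def render_vhost(fqdn: str) -> str:
    """Render an nginx server block for a newly onboarded FQDN."""
    return NGINX_VHOST_TEMPLATE.format(fqdn=fqdn)


def add_site(fqdn: str, conf_dir: Path) -> None:
    """Write the vhost and ask nginx to reload (the SIGHUP mentioned above)."""
    (conf_dir / f"{fqdn}.conf").write_text(render_vhost(fqdn))
    subprocess.run(["nginx", "-s", "reload"], check=True)
```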

I looked into Traefik, which auto-provisions let's encrypt certs, but the Traefik API seems to be read-only. I also looked into Fabio, which does allow for hot configuration changes through Consul (among others) but it doesn't seem to have any facility for getting certs issued without outside intervention.

Does anyone have any ideas for me to look into? Thanks.

https://redd.it/lq3d0y
@r_devops
An essential library for functional programming lovers in Golang

Go does not provide many essential built-in functions for data structures such as slices and maps. This library provides the most frequently needed utility functions, inspired by Lodash (a JavaScript utility library).

https://github.com/rbrahul/gofp

It would be appreciated if you could provide your valuable feedback.

https://redd.it/lq75qj
@r_devops
Linux Foundation Certified IT Associate (LFCA)

Colleagues, the Linux Foundation Certified IT Associate (LFCA) exam demonstrates a user’s expertise and skills in fundamental information technology functions, especially in cloud computing. The LFCA is a pre-professional certification intended for those new to the industry or considering starting an IT career as an administrator or engineer, and for users interested in advancing to the professional level through a demonstrated understanding of critical concepts for modern IT systems, including cloud computing. LFCA tests candidates' knowledge of fundamental IT concepts including operating systems, software application installation and management, hardware installation, use of the command line and basic programming, basic networking functions, security best practices, and other related topics, to validate their capability and preparedness for an entry-level IT position.

Domains and competencies:

1. Linux Fundamentals (20%)
2. System Administration Fundamentals (20%)
3. Cloud Computing Fundamentals (20%)
4. Security Fundamentals (16%)
5. DevOps Fundamentals (16%)
6. Supporting Applications and Developers (8%)

The program includes the LFCA certification (valid for 3 years), 12-month exam eligibility, a free retake, and a multiple-choice certification exam.

Enroll today (individuals & teams welcome): https://fxo.co/BOhH

Much career success, Lawrence E. Wilson - Online Learning Central (https://tinyurl.com/2re6558z)

https://redd.it/lq3wv9
@r_devops
Octopus Deploy Email Notifications

Curious if anyone has come up with a decent Octopus Deploy email notification template that lists only the deployed/completed steps, leaving out steps that were excluded from the deployment. The template that Octopus provides in their email notification how-to lists every step in a deployment, even if it was excluded. I've put this together and it outputs way too much. Thoughts?


Current code used in body:
<h2>Deployment of #{Octopus.Project.Name} #{Octopus.Release.Number} to #{Octopus.Environment.Name}</h2>

<p><em>Initiated by #{unless Octopus.Deployment.CreatedBy.DisplayName}#{Octopus.Deployment.CreatedBy.Username}#{/unless}#{if Octopus.Deployment.CreatedBy.DisplayName}#{Octopus.Deployment.CreatedBy.DisplayName}#{/if}#{if Octopus.Deployment.CreatedBy.EmailAddress} (<a href="mailto:#{Octopus.Deployment.CreatedBy.EmailAddress}">#{Octopus.Deployment.CreatedBy.EmailAddress}</a>)#{/if} at #{Octopus.Deployment.Created}</em></p>

<h3>Deployment process</h3>

<p>The deployment included the following actions:</p>

<ul>
#{each action in Octopus.Action}
  <li><strong>#{action.Name}</strong>#{if action.Package.NuGetPackageId} &mdash; #{action.Package.NuGetPackageId} <em>version #{action.Package.NuGetPackageVersion}</em>#{/if}</li>
#{/each}
</ul>

<h4>Task summary</h4>

<ol>
#{each step in Octopus.Step}
  #{if step.Status.Code}
  <li>#{step | HtmlEscape} &mdash; <strong>#{step.Status.Code}</strong>
    #{if step.Status.Error}
    <pre>#{step.Status.Error | HtmlEscape}</pre>
    <pre>#{step.Status.ErrorDetail | HtmlEscape}</pre>
    #{/if}
  </li>
  #{/if}
#{/each}
</ol>

https://redd.it/lq1psj
@r_devops