Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
SAML2, or streaming logs from CloudRun to Elasticsearch via logstash etc.
5. Fixed many bugs for a tool written in Golang

Now I'm heading into Q5, and I'm planning to reach out to new junior DevOps opportunities, since it looks like there's not much room to negotiate salary with my current company; they're still paying me an assistant-level salary.

After listing the things I did, I feel like I've done a lot that could potentially go on my resume. I've seen many people recommend that a resume should be as simple as possible, so I'm worried that if I only put an overview on my resume, it will look like I'm bluffing, given my experience and background (though in reality I spend a lot of my free time studying). But if I put in too much information, they won't be interested at all.

Thanks for making it this far. What do you guys suggest?

https://redd.it/z62bl5
@r_devops
Help me understand real use cases of k8s, I can’t wrap my head around it

So from what I've read, k8s is for mission-critical Docker containers that you want to provide high availability for or scale up. Correct me if I'm wrong!

After running Docker containers 24/7 for years, I've never had a container randomly fail or get overwhelmed with so many connections that I thought, "if I had more of these, the problem would be solved." So in terms of high availability, I don't get it. From what I understand, k8s doesn't even sync data between nodes, since they're all using the same volume mount, which to me is the complete opposite of high availability. Intuitively, k8s should be something that literally syncs multiple containers, each with its own individual volume mount, across multiple remote locations.
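(For reference, the replication side of this works declaratively: a Deployment states a replica count, and the control plane keeps that many pods running, rescheduling them onto healthy nodes when a container or node dies. Per-replica persistent data is a separate concern, typically handled by StatefulSets with per-pod volume claims. A minimal sketch, where the name and image are just placeholders:)

```yaml
# Hypothetical Deployment: k8s keeps 3 copies of this pod running,
# restarting or rescheduling them if a container or node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 3               # the "high availability" knob
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
```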

In terms of scaling, at what point is a load balancer just not cutting it for you anymore, such that adding more nodes is the solution?

Who actually benefits from k8s? I see too many examples of enthusiasts deploying it at home because they can, rather than actually needing it, and when I ask for production examples, the only thing I hear is Google, the biggest tech company on earth.

I really am not trying to attack k8s and would love to deploy it myself if I see a real benefit from it.

https://redd.it/z64b1q
@r_devops
Black Friday/Cyber Monday sales for CKA/CKAD exams?

Hi,
Are there any known ongoing Black Friday/Cyber Monday sales for the CKA/CKAD exams?


Please do not post links to Udemy courses. I'm talking about official prep or exams from the Linux Foundation.
I don't see any yet; maybe they will be available tomorrow...?

https://redd.it/z666la
@r_devops
Should I use Capacity Rebalance on spot instances?

Currently, our spot instances poll an API to check whether they are about to be reclaimed. If an instance sees that it's about to go down, it immediately sets itself to DRAINING mode.

Thing is, it's not enough time for the instance to drain itself before it's being taken away by AWS.

After some research, I saw that there's a Capacity Rebalance feature that notifies when the instance is at a higher risk of going down thus possibly giving more time for the tasks inside them to finish.
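(For reference, both signals are exposed through the EC2 instance metadata service, and the rebalance recommendation typically arrives earlier than the two-minute interruption notice. A rough sketch of polling for it; the `METADATA_BASE` override is purely a local-testing convenience I added, not part of AWS:)

```shell
#!/usr/bin/env bash
# Sketch: check the EC2 instance metadata service for a rebalance
# recommendation, which usually fires earlier than the 2-minute spot
# interruption notice. METADATA_BASE is overridable only so the
# functions can be exercised off-instance; leave it alone in production.
METADATA_BASE="${METADATA_BASE:-http://169.254.169.254/latest/meta-data}"

check_rebalance() {
  # Prints the recommendation JSON and returns 0 if one exists;
  # the endpoint 404s (so curl -f fails) when there is none.
  curl -fs --connect-timeout 1 -m 2 "${METADATA_BASE}/events/recommendations/rebalance"
}

check_spot_interruption() {
  # The interruption notice the instances are presumably polling today.
  curl -fs --connect-timeout 1 -m 2 "${METADATA_BASE}/spot/instance-action"
}

if notice=$(check_rebalance); then
  echo "rebalance recommended: ${notice}"
  # ...set this instance to DRAINING here, ahead of the interruption notice...
fi
```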

My only concern is that it would alert too often and cause most instances to set themselves to draining.

This would be a bit hard to simulate as traffic comes from production mostly. Does anyone have experience with this? Once it notifies, how likely is the instance to really go down? Is there something else I should do?

Thanks ahead!

https://redd.it/z6926o
@r_devops
How to switch to a DevOps based role from a Sysadmin role?

I have about 3 years of experience as a sysadmin, working mainly on VMware and HCI-based on-premise infrastructure. I'm looking for suggestions on getting started with DevOps that would help me land a DevOps-based role. I have experience with Terraform (limited to the vSphere provider) and PowerShell scripts to automate regular tasks in my current role.


I have been applying to multiple job postings on LinkedIn, and even junior DevOps Engineer roles ask for at least 2+ years of experience with DevOps-related tools.


Any books/playlists which can help me to get into DevOps?

https://redd.it/z6aobc
@r_devops
Advice to approach totally weird situation at new job.

I got hired as a full-stack engineer just recently and have started my onboarding. To make a long story short: following the basic setup documentation, I discovered that I will have to deploy my app on my own personal AWS account, which includes using my own credit card.

I literally felt anxiety flowing through my whole body, and I've spent the whole weekend trying to figure out what I should do. I don't have any AWS experience; the plan was to start slowly with regular back-end/front-end coding work and try to learn as much as I can about AWS.

I raised my concern with my "mentor", and he treated it as a joke, saying that he doesn't get billed more than $15-20 and so on. Little does he understand that I find such things beyond unprofessional. Why would I put my personal credit card information on an AWS account that is entirely work-related? I didn't read one sentence about "worker protection" for cases of the crazy billing issues some people face on Amazon, etc.

Also, the whole atmosphere feels a little weird. I would like to avoid making a bad impression in my very first days there, but I haven't been this anxious in a very long time.

Am I really being too sensitive, and should I just play along with them? Or should I write a message to my CTO, explaining it as nicely as possible, and seek another way to resolve my situation? I could be totally wrong, but this really shocked me to the core, and I view it as an extreme red flag. If I had seen that setup guide before I signed the deal, I wouldn't have taken it.

I'd appreciate any sort of input. I felt like this is the best place to post, since I guess many of you work on AWS and understand the billing process and everything else.

https://redd.it/z6fztm
@r_devops
Founders/entrepreneurs: Help is needed over here.

I have already posted about my recent, new, shiny project!

It has a name now: Red Labs Ltd.

We are still collecting feedback from everyone. Our strategy is to get in touch with as many people as possible in this phase and ask for feedback about the website https://www.red-labs.co.uk/ (we want our products/services to be clear to everyone) AND about our future steps.

Future steps:

Step 1. Ask for help and feedback (doing it now).

Step 2. Get our first 10 customers (all the partners are still employed full-time, so we need a minimum amount of revenue to complete the transition aka quit our full-time jobs)

a. How are we going to get the first 10 customers? We are in conversations with ex-directors of different consultancy companies; hopefully this will give us some clarity. We are going to apply to become both GCP & AWS partners. We have a LinkedIn campaign ready for February 2023. We are asking for help from more experienced and business-savvy people on a daily basis.

b. What are we going to show them? We are preparing an MVP.

Step 3. Once we reach our target revenue, the transition will be over, which means that we will be able to leave the company we work for. We have decided to leave 55% of the revenue in the company to invest heavily in marketing; 45% should be able to cover our salaries and expenses.

Step 4. We are reducing expenses as much as possible. Our website cost us £7, lol, and the hosting/email are free. For the first 4-5 years, we will try not to waste money.

Step 5. Ultimately our goal is to move to a subscription-based model, so we're starting with consulting to build a user base/customers to then create products and make them available via a monthly subscription.

There are going to be 2 subscriptions Standard & Premium.

As soon as we reach 30 customers (I'm estimating 30% Premium, 70% Standard), we will look into expanding the company, hiring more people, or looking for a buyout. On top of this, we will keep doing consulting if there's time, or possibly hire 1-2 people to do it while we manage the subscription-based customers.

-------

This is a high-level view of our optimistic plan. Things might go wrong, we know that (particularly with one of the biggest recessions starting), but it's worth the risk.

Thoughts???

https://redd.it/z6ftia
@r_devops
Layoffs?

Has anyone here been hit by the tech layoffs? Curious how DevOps has been faring in these lean times.

https://redd.it/z6iqdu
@r_devops
Can you recommend podcasts for DevOps / DevSecOps ?

I'd love to keep up with things better, and I love podcasts as a medium. What podcasts do you like listening to?

https://redd.it/z6vvl4
@r_devops
We made a free CICD/deployment tool: initializes your gitlab repo, installs dokku and your app on your server, deploys your app from gitlab to your server, sets your domain and establishes continuous deployment so that all main commits are automatically deployed. Templates for Django, flask, fastApi

[ezinnit](https://github.com/johnsyncs/ezinnit)

Automated CICD Deployment Utility

Continuous integration means that, from the moment you begin your project, frequent commits to main are automatically built and deployed. Continuous deployment means that, from the moment you begin your project, you always have a live build of your app in a container on a remote server with a secure public connection.

After running ezinnit, your webapp will be running on your server, live at the https domain of your choice and future commits to your main branch will automatically deploy to the live app.

In a completely automated process, ezinnit initializes and pushes your gitlab repository and deployment pipeline and then installs dokku and a gitlab runner on your server. Your gitlab repository is configured to automatically deploy any commits to your main branch to your server, where your app is automatically built in a container and served at your public https domain.

ezinnit includes app templates for new django, flask and fastApi projects. These templates are intended for starting completely new projects, and create a deployed, working site.

to download and install ezinnit:

in your project's root directory, run:

`mkdir ezinnit`

`wget https://raw.githubusercontent.com/johnsyncs/ezinnit/main/ezinnit -P ezinnit`

`bash ezinnit/ezinnit`

You will be prompted for:

1. gitlab username
2. gitlab domain (if your account is with gitlab.com, then the gitlab domain is gitlab.com)
3. [gitlab personal access token](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/link_to_gitlab_and_dokku/get_personal_access_token.md)
4. app name (also becomes your gitlab repository name)
5. ip address of your remote server
6. the domain or subdomain you wish to point to your new app, for example: mynewapp.mydomain.com
7. email address to use for registering with [letsencrypt](https://letsencrypt.org/)
8. optional app template: django, flask or fastApi

requirements:

* a python virtual environment with your app installed (or to make a django project from scratch, see bottom of readme)
* git
* a gitlab account (gitlab.com accounts must be verified to use gitlab runners, but verification is free)
* a server running Ubuntu 18.04/20.04/22.04 [how to create a digital ocean droplet](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/digital_ocean_tutorial/create_digital_ocean_droplet.md)
* your local machine's ssh key registered on gitlab
* your local machine's ssh key added to your new server's allowed hosts ([digital ocean tutorial](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/digital_ocean_tutorial/create_digital_ocean_droplet.md))
* for your domain to work, you need a DNS "A" record pointing your domain to your server ip address [(create the DNS "A" record before running ezinnit)](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/link_to_gitlab_and_dokku/point_url_to_dokku_app.md)

warning!

* this script creates new ssh keys on the remote server!
* if you select an app template, ezinnit will write over files, including your procfile, settings.py, main.py etc. Only use the templates for brand new projects.

what ezinnit does

* checks for ezinnit.config, if it doesn't exist, it prompts you for the values and creates an ezinnit.config file
* if there is no .gitignore in your project directory, uses [Toptal's gitignore generator](https://www.toptal.com/developers/gitignore) to create a .gitignore file
* runs app template script if you've selected one (django, flask and fastApi are included in this release)
* creates a gitlab pipeline for automated deployment (.gitlab-ci.yml) in your project directory
* if there is no requirements.txt file in your project directory, creates a requirements.txt file
* initializes git repository, sets initial branch to main, sets remote to new gitlab repository, commits and pushes to gitlab
* gets the runner token for the new repository from gitlab
* copies ezinnit.config to server
* runs server initialization script on the remote server, which does the following:
* creates new ssh keys on server
* uploads server's ssh keys to gitlab repository
* downloads and installs [dokku](https://dokku.com/) on server (this takes a few minutes)
* creates dokku app on server
* sets the domain for the dokku app on server
* sets the app's port mapping to 80:5000 on server
* downloads and creates a gitlab runner on server
* registers the gitlab runner on server
* downloads and installs [dokku-letsencrypt](https://github.com/dokku/dokku-letsencrypt) on server
* enables encryption for app on server with TLS certificate from [letsencrypt](https://letsencrypt.org/) on server
* adds a cron job on server to automatically renew TLS certificates
* for django, flask and fastApi, creates and runs a script, ezrun, to find an open port and run locally in the development environment
* when ezinnit completes, gitlab will automatically begin deploying your app to your server. ezinnit will give you a link to your new repository where you can check on the deployment status.

to find an open port and run django, flask or fastApi ezinnit template apps locally in development environment:

`bash ezrun`

Deploy Now and Forever

Use ezinnit whenever you start a new webapp project. At the push of a button, your project will begin with a gitlab repository that automatically deploys main commits to a container on the server of your choice, where your app is running and available at the domain of your choice.

You can now develop for the true environment your app is intended for with instant feedback about how changes will impact real world usability. You know instantly if your app will build in a container and how it will behave on a live server.

The secure production environment is the default, and development mode is the exception - making development safe.

When you start a project with ezinnit, you're really doing CICD. From day one, you hit the ground running with a live app on your own server on your own domain, so you can focus on what only you can do.

to start a django project from scratch:

`mkdir ezinnit`

`wget https://raw.githubusercontent.com/johnsyncs/ezinnit/main/ezinnit%20template%20scripts/django.innit -P ezinnit`

`bash ezinnit/django.innit`

https://redd.it/z6y9rn
@r_devops
Triggering email and db write/reads.

Preface, marketing makes research difficult, more so when using the terms 'email' and 'service'.

I am developing a web app that will integrate with email and SMS. The web app is built using SvelteKit and hosted on Vercel, and I'm using MongoDB as my database. Mongo has a watch feature that triggers when a change is made to whatever you've configured it to watch. My thinking thus far is to build an Express app to handle this watch behavior and the email/SMS handling.

When I start my googling-around-to-see-what-I-can-copy-paste I come across a lot of services that provide 'triggering' services.

Hosting/setting up servers is not something I have experience with; though I am confident with node.js.

Should I go the triggering-service route, or should I build and host my own service? Or is there another path that I'm unaware of?

https://redd.it/z70r5x
@r_devops
Job title not aligned with Job Description

TLDR: I do the same tasks as the DevOps Engineers on my team; my team is made up of DevOps Engineers (more inclined towards ops), but my title isn't DevOps Engineer (it's Cloud Infra Dev).
Is this something to be concerned about?

https://redd.it/z72g8g
@r_devops
Overwhelmed by AWS

I have a basic understanding of many of the core services and what they do, like IAM, security groups, EC2, and ELB, but combining it all together is hard for me to wrap my head around. My company requires that all resources created in AWS go through a CloudFormation template deployed via our CICD pipeline. I'm overwhelmed by the amount of knowledge required to create a simple EC2 instance with a public IP. Looking at some internal example templates, we have EC2 instances that have interfaces attached, and those interfaces have SGs attached to them (I probably have it wrong, I'm AFK). Combining everything together in a CFT is overwhelming. Any recommendations on resources I can use to pull it all together? Whenever I look at documentation, it seems focused on one thing like "making an EC2 instance"; I never see "making an EC2 instance with an interface, connected to an ELB, with appropriate security groups".
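(For what it's worth, the pattern those internal templates are likely using is just a few resources wired together with `Ref`: a security group, a network interface that uses it, and an instance that attaches the interface at device index 0. A stripped-down sketch; the VPC/subnet IDs and AMI are placeholders you'd normally parameterize:)

```yaml
# Hypothetical minimal CloudFormation fragment: an EC2 instance with an
# explicit network interface and a security group attached to that interface.
Resources:
  WebSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      VpcId: vpc-0123456789abcdef0          # placeholder
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

  WebENI:
    Type: AWS::EC2::NetworkInterface
    Properties:
      SubnetId: subnet-0123456789abcdef0    # placeholder public subnet
      GroupSet:
        - !Ref WebSG                        # the SG lives on the interface

  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0        # placeholder AMI
      InstanceType: t3.micro
      NetworkInterfaces:
        - NetworkInterfaceId: !Ref WebENI
          DeviceIndex: "0"
```

A public IP then comes from either the subnet's auto-assign setting or an `AWS::EC2::EIP` associated with the interface; an ELB references the instance (or a target group) by `Ref` in the same way.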

https://redd.it/z72x9s
@r_devops
AWS Cloudfront -> Cognito to Google suite

Hi all,


I've been trying to get my head around what I presumed to be a very simple setup, but the whole thing is turning into a nightmare, and I just want to touch base and either confirm that I'm on the right path or find out that I've gone off trail.



Currently I have IAM Identity Center set up so that people who want to access the console have to auth through our Google suite. That all works as expected and is fine for any technical user of the platform.


However, my needs are growing beyond having just technical users perform operations. My idea was simple: I have a bunch of Lambda applications, and I wanted to provide a simple HTML website hosted on S3 where users can enter some details and hit submit, and then the Lambdas run, without me having to teach them any CLI or how to hit API endpoints.


However, to get this working, I'm overwhelmed by all the different pieces I need to have in place. What I currently have in place:


- Suite of Lambdas
- S3 private bucket for the front-end pages
- ACM certificate provisioned
- Route53 domain set up
- CloudFront set up pointing to the bucket



Now, what I'd like is that when a user hits my Route53 domain, they're asked to auth (similar to when they hit the AWS console and auth through Google).
However, when I google what I'm trying to do, I see a lot of comments about setting up Cognito and Lambda@Edge, and to be blunt, I don't understand their purpose or how they achieve the goal, since I didn't need any of that for the earlier SSO integration (IAM Identity Center). I find myself getting lost in the AWS docs and never getting the answers I want, or finding tutorials that only cover public CloudFront distributions.


Does anyone have any good guides or advice on what path I should be following ?

Like I say, in my mind the use case is simple (user -> hits website -> auths -> fills out form -> triggers Lambda), but I'm finding it very hard to implement.

https://redd.it/z72aky
@r_devops
Windows Container Use / Market Share

Hello,

Does anyone know of a study or dataset that shows the adoption of Windows containers across industries compared to the adoption of Linux containers, or of no containers (on Windows)?

I would love to see some actual data that has buckets between Windows and Linux.

I'm not talking about the host OS being Windows and running Docker with Linux containers. I would really like to see some research on how many people are actually running production workloads in Windows containers compared to production workloads in Linux containers.

Anyone?

https://redd.it/z71zbq
@r_devops
Azure DevOps generate NSIS Setup using Pipelines

Hi there

I would like to generate an NSIS executable every time changes are pushed to main. I am now able to pull the NSIS setup and install it in a job, but where can I "export" the built executable to? To my understanding, Artifacts only support packages like nupkg. Maybe push the exe to a git repo?
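(For what it's worth, build/pipeline artifacts in Azure DevOps aren't limited to package formats like nupkg; they can hold arbitrary files, including an .exe. A hedged sketch of an azure-pipelines.yml fragment; the makensis step and the paths are placeholders for however the installer is actually built:)

```yaml
# Hypothetical azure-pipelines.yml fragment: build the NSIS installer on
# pushes to main and publish the resulting .exe as a build artifact.
trigger:
  branches:
    include:
      - main

steps:
  - script: makensis installer.nsi      # placeholder for the existing NSIS build step
    displayName: Build NSIS installer

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.SourcesDirectory)/Setup.exe'   # placeholder output path
      ArtifactName: 'installer'
```

The published artifact is then downloadable from the pipeline run (or consumable by later stages), so no git repo is needed for the binary.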

https://redd.it/z7087p
@r_devops
Some tool like drone.io for CD

I'm really embarrassed to say that I love docker-compose over K8s for its simplicity & effectiveness.

But the tooling is really lacking. drone.io is like a docker-compose.yml: simple, effective & beautiful.

I'm wondering, is there any drone.io-like tool for CD?

https://redd.it/z79o6k
@r_devops
Best implementation to spin up k8s clusters on demand?

I need to spin up multiples of the same cluster at different clients.

FastAPI, PostgreSQL, Elasticsearch.

Been thinking Jenkins, Helm, k8s.

Storage on OpenEBS?

https://redd.it/z703bh
@r_devops
GitFlow Branching Strategy and Alignment to Best Practices



Good evening everyone. First, let me start off by stating that we are a publicly traded company that falls under SOX controls & audit requirements.

For code branching strategies, we generally have followed the GitFlow strategy since our environments match up to the GitFlow branches (feature, develop, release, & main).

Our branches and how it maps to our environments

================================================================================

feature branch - developer's local instance for unit testing

develop branch is deployed to our DEV env.

the release branch is deployed to our QA env

main = PROD env.

================================================================================

Here is our typical workflow:

1. Developers create a feature branch off the "develop" branch and make their code changes. They then perform unit testing of their changes.
2. The developer then opens a PR to the "develop" branch, which is reviewed and approved by a lead dev. The code is in the "develop" branch after approval. When all the features from all developers are in the "develop" branch, the team may perform end-to-end integration testing if there are a lot of features that need to be tested together.
3. When the dev team is ready for formal QA testing by the QA individual/team, a release branch is cut from the develop branch, and the build is deployed to the QA environment. QA validates the features in this environment, and an automated regression suite is run against the entire build. If QA finds a bug, the feature is sent back to develop to repeat step 1 onwards. When we are audited, this is the environment that is noted in each backlog ticket for each feature.
4. When the release has passed all testing, the deployment in QA is released into the next environment: PROD.
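(The branch mechanics described above boil down to a handful of git operations. A self-contained sketch against a throwaway local repo; the branch and file names are purely illustrative:)

```shell
#!/usr/bin/env bash
# Illustration of the GitFlow moves described above, in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main
git config user.email "dev@example.com"   # throwaway identity for the demo
git config user.name "Dev"
git commit -q --allow-empty -m "initial"

git branch develop                        # long-lived integration branch

git checkout -q -b feature/login develop  # feature branch cut from develop
echo "login form" > login.txt
git add login.txt
git commit -q -m "add login feature"      # dev work + unit testing happens here

git checkout -q develop                   # PR approved: feature -> develop (DEV env)
git merge -q --no-ff -m "PR: merge feature/login" feature/login

git checkout -q -b release/1.0 develop    # cut release branch -> deploy to QA env

git checkout -q main                      # QA passed: release -> main (PROD env)
git merge -q --no-ff -m "release 1.0 to PROD" release/1.0
git log --oneline -1 main
```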

We have a consulting group who prefers to change it up so that:

Developers' unit testing and formal QA by the QA team/individual are performed off the feature branch, before the developer opens a PR to merge the changes into the "develop" branch. They said this avoids having to do a ton of PR merge requests for each break-fix cycle of a feature.
In this workflow, by the time the code makes it to the QA environment, it has been fully tested already; there is nothing more to test in QA besides perhaps running the automated regression suite against the new set of changes.

I want to support a more efficient workflow for getting code into production, but I also need to address SOX change control and stay within best practices. I'm curious whether others follow our internal team's process above, or whether you agree with the consulting group that formal QA should be performed before the feature branch is merged into "develop".

Thank you ahead of time.

https://redd.it/z7eirb
@r_devops