Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Founders/entrepreneurs: Help is needed over here.

I have already posted about my recent, new, shiny project!

It has a name now: Red Labs Ltd.

We are still collecting feedback from everyone. Our strategy is to get in touch with as many people as possible in this phase and ask for feedback about the website https://www.red-labs.co.uk/ (we want our products/services to be clear to everyone) AND about our future steps.

Future steps:

Step 1. Ask for help and feedback (doing it now).

Step 2. Get our first 10 customers (all the partners are still employed full-time, so we need a minimum amount of revenue to complete the transition aka quit our full-time jobs)

a. How are we going to get the first 10 customers?

* We are in conversations with ex-directors of different consultancy companies; hopefully this will give us some clarity.
* We are going to apply to become both GCP & AWS partners.
* We have a LinkedIn campaign ready for February 2023.
* We are asking for help from more experienced and business-savvy people on a daily basis.

b. What are we going to show them? We are preparing an MVP.

Step 3. Once we reach our target revenue, the transition will be over, which means that we will be able to leave the company we work for. We have decided to leave 55% of the revenue in the company to invest heavily in marketing; 45% should be able to cover our salaries and expenses.

Step 4. We are reducing expenses as much as possible. Our website cost us £7 (lol) and the hosting/email are free. For the first 4-5 years, we will try not to waste money.

Step 5. Ultimately our goal is to move to a subscription-based model, so we're starting with consulting to build a user base/customers to then create products and make them available via a monthly subscription.

There are going to be 2 subscriptions Standard & Premium.

As soon as we reach 30 customers (I'm estimating 30% Premium, 70% Standard), we will look into expanding the company, hiring more people, or looking for a buyout.

On top of this, we will keep doing consulting if there's time, or possibly hire 1-2 people to do it while we manage the subscription-based customers.

-------

This is a high-level view of our optimistic plan. Things might go wrong, we know that (particularly with one of the biggest recessions starting), but it's worth the risk.

Thoughts???

https://redd.it/z6ftia
@r_devops
Layoffs?

Has anyone here been hit by the tech layoffs? Curious how DevOps has been faring in these lean times.

https://redd.it/z6iqdu
@r_devops
Can you recommend podcasts for DevOps / DevSecOps ?

I'd love to keep up with things better, and I love podcasts as a medium. What podcasts do you like listening to?

https://redd.it/z6vvl4
@r_devops
We made a free CICD/deployment tool: initializes your gitlab repo, installs dokku and your app on your server, deploys your app from gitlab to your server, sets your domain and establishes continuous deployment so that all main commits are automatically deployed. Templates for Django, flask, fastApi

[ezinnit](https://github.com/johnsyncs/ezinnit)

Automated CICD Deployment Utility

Continuous integration means that from the moment you begin your project, frequent commits to main are automatically built and tested. Continuous deployment means that from the moment you begin your project, you always have a live build of your app in a container on a remote server with a secure public connection.

After running ezinnit, your webapp will be running on your server, live at the https domain of your choice and future commits to your main branch will automatically deploy to the live app.

In a completely automated process, ezinnit initializes and pushes your gitlab repository and deployment pipeline and then installs dokku and a gitlab runner on your server. Your gitlab repository is configured to automatically deploy any commits to your main branch to your server, where your app is automatically built in a container and served at your public https domain.

ezinnit includes app templates for new django, flask and fastApi projects. These templates are intended for starting completely new projects, and create a deployed, working site.

to download and install ezinnit:

in your project's root directory, run:

`mkdir ezinnit`

`wget https://raw.githubusercontent.com/johnsyncs/ezinnit/main/ezinnit -P ezinnit`

`bash ezinnit/ezinnit`

You will be prompted for:

1. gitlab username
2. gitlab domain (if your account is with gitlab.com, then the gitlab domain is gitlab.com)
3. [gitlab personal access token](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/link_to_gitlab_and_dokku/get_personal_access_token.md)
4. app name (also becomes your gitlab repository name)
5. ip address of your remote server
6. the domain or subdomain you wish to point to your new app, for example: mynewapp.mydomain.com
7. email address to use for registering with [letsencrypt](https://letsencrypt.org/)
8. optional app template: django, flask or fastApi

requirements:

* a python virtual environment with your app installed (or to make a django project from scratch, see bottom of readme)
* git
* a gitlab account (gitlab.com accounts must be verified to use gitlab runners, but verification is free)
* a server running Ubuntu 18.04/20.04/22.04 [how to create a digital ocean droplet](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/digital_ocean_tutorial/create_digital_ocean_droplet.md)
* your local machine's ssh key registered on gitlab
* your local machine's ssh key added to your new server's allowed hosts ([digital ocean tutorial](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/digital_ocean_tutorial/create_digital_ocean_droplet.md))
* for your domain to work, you need a DNS "A" record pointing your domain to your server ip address [(create the DNS "A" record before running ezinnit)](https://github.com/johnsyncs/ezinnit/blob/main/other/the_old_way/tutorials/link_to_gitlab_and_dokku/point_url_to_dokku_app.md)

warning!

* this script creates new ssh keys on the remote server!
* if you select an app template, ezinnit will write over files, including your procfile, settings.py, main.py etc. Only use the templates for brand new projects.

what ezinnit does

* checks for ezinnit.config, if it doesn't exist, it prompts you for the values and creates an ezinnit.config file
* if there is no .gitignore in your project directory, uses [Toptal](https://www.toptal.com/developers/gitignore) to create a .gitignore file
* runs app template script if you've selected one (django, flask and fastApi are included in this release)
* creates a gitlab pipeline for automated deployment (.gitlab-ci.yml) in your project directory
* if there is no requirements.txt file in your project directory, creates a requirements.txt file
* initializes git repository, sets initial branch to main, sets remote to new gitlab repository, commits and pushes to gitlab
* gets the runner token for the new repository from gitlab
* copies ezinnit.config to server
* runs server initialization script on the remote server, which does the following:
* creates new ssh keys on server
* uploads server's ssh keys to gitlab repository
* downloads and installs [dokku](https://dokku.com/) on server (this takes a few minutes)
* creates dokku app on server
* sets the domain for the dokku app on server
* sets the app's port mapping to 80:5000 on server
* downloads and creates a gitlab runner on server
* registers the gitlab runner on server
* downloads and installs [dokku-letsencrypt](https://github.com/dokku/dokku-letsencrypt) on server
* enables encryption for app on server with TLS certificate from [letsencrypt](https://letsencrypt.org/) on server
* adds a cron job on server to automatically renew TLS certificates
* for django, flask and fastApi, creates and runs a script, ezrun, to find an open port and run the app locally in a development environment
* when ezinnit completes, gitlab will automatically begin deploying your app to your server. ezinnit will give you a link to your new repository where you can check on the deployment status.
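
For orientation, dokku deployments from GitLab typically boil down to a git push into dokku's app repository on the server. A sketch of what such a generated pipeline file can look like (illustrative only; the app name is a placeholder, and the .gitlab-ci.yml ezinnit actually writes may differ):

```yaml
# Illustrative dokku deployment pipeline, not the exact file ezinnit writes.
# The registered runner lives on the same server as dokku, so the job can
# push straight into dokku's local app repository to trigger a container build.
stages:
  - deploy

deploy:
  stage: deploy
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  script:
    - git push dokku@localhost:myapp HEAD:refs/heads/main
```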

to find an open port and run django, flask or fastApi ezinnit template apps locally in development environment:

`bash ezrun`

Deploy Now and Forever

Use ezinnit whenever you start a new webapp project. At the push of a button, your project will begin with a gitlab repository that automatically deploys main commits to a container on the server of your choice, where your app is running and available at the domain of your choice.

You can now develop for the true environment your app is intended for with instant feedback about how changes will impact real world usability. You know instantly if your app will build in a container and how it will behave on a live server.

The secure production environment is the default, and development mode is the exception - making development safe.

When you start a project with ezinnit, you're really doing CICD. From day one, you hit the ground running with a live app on your own server on your own domain, so you can focus on what only you can do.

to start a django project from scratch:

`mkdir ezinnit`

`wget https://raw.githubusercontent.com/johnsyncs/ezinnit/main/ezinnit%20template%20scripts/django.innit -P ezinnit`

`bash ezinnit/django.innit`

https://redd.it/z6y9rn
@r_devops
Triggering email and db write/reads.

Preface: marketing makes research difficult, even more so when searching for terms like 'email' and 'service'.

I am developing a web app that will integrate with email and SMS. The web app is built using SvelteKit and hosted on Vercel. I'm using MongoDB as my db. Mongo has a watch feature that triggers when a change is made to whatever you've configured it to watch. My thinking thus far is to build an Express app that will handle this watch behavior and the email/SMS handling.

When I start my googling-around-to-see-what-I-can-copy-paste I come across a lot of services that provide 'triggering' services.

Hosting/setting up servers is not something I have experience with; though I am confident with node.js.

Should I go the triggering-service route or should I build/host my own service? Or is there another path that I am unaware of?

https://redd.it/z70r5x
@r_devops
Job title not aligned with Job Description

TL;DR: I do the same tasks as a DevOps Engineer on my team. My team is made up of DevOps Engineers (more inclined towards ops), but my title is not DevOps Engineer (it's Cloud Infra Dev).
Is it something to be concerned about?

https://redd.it/z72g8g
@r_devops
Overwhelmed by AWS

I have a basic understanding of lots of the core services and what they do, like IAM, security groups, EC2, ELB. But combining it all together is hard for me to wrap my head around. My company requires that all resources created in AWS are done through a CloudFormation template that is deployed via our CI/CD pipeline. I'm overwhelmed with the amount of knowledge required to create a simple EC2 instance that has a public IP. Looking at some internal example templates, we have EC2 instances that have network interfaces attached, and those interfaces have SGs attached to them (I probably have it wrong, I'm AFK). Combining everything together in a CFT is overwhelming. Any recommendations on resources I can use to learn to combine everything together? Whenever I look at documentation it seems focused on one thing like "making an EC2 instance"; I never see "making an EC2 instance with an interface, connected to an ELB, with appropriate security groups".
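
For orientation, the specific combination described (instance, network interface, security group, public IP) comes down to only a few linked resources in a template. An illustrative sketch with placeholder IDs, not working values:

```yaml
# Illustrative minimal template: an EC2 instance with a public IP and its own
# security group attached to its network interface. All IDs are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      VpcId: vpc-0123456789abcdef0        # assumption: an existing VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0      # assumption: a valid AMI for your region
      InstanceType: t3.micro
      NetworkInterfaces:
        - DeviceIndex: 0
          AssociatePublicIpAddress: true
          SubnetId: subnet-0123456789abcdef0  # assumption: a public subnet
          GroupSet:
            - !Ref WebSecurityGroup       # the SG attaches to the interface here
```

The connective tissue is `!Ref`: resources reference each other within one template, which is how the pieces combine.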

https://redd.it/z72x9s
@r_devops
AWS Cloudfront -> Cognito to Google suite

Hi all,


I've been trying to get my head around what I presumed to be a very simple setup, but the whole thing is turning into a nightmare, and I just want to touch base and confirm whether I'm on the right path or whether I've gone off trail.



Currently I have IAM Identity Center set up so that if people want to access the console, they need to auth through our Google Workspace. That all works as expected and is fine for any technical user of the platform.


However, my needs are growing beyond having just technical users perform operations. My idea was simple: I have a bunch of Lambda applications, and I wanted to provide a simple HTML website hosted on S3 where users can enter some details and hit submit, and then the Lambdas run, without my having to teach them any CLI or how to hit API endpoints.


However, to get this working I'm overwhelmed by all the different pieces I need to have in place. What I currently have in place:


- Suite of Lambdas
- S3 private bucket for the front-end pages
- ACM certificate provisioned
- Route53 domain set up
- CloudFront set up pointing to the bucket



Now what I'd like is that when a user hits my Route53 domain, they're asked to auth (similar to when they hit the AWS console and auth through Google).
However, when I google what I'm trying to do, I see a lot of comments about setting up Cognito and Lambda@Edge, and to be blunt I'm not understanding their purpose or how they achieve the goal, since I didn't need any of that for the earlier SSO integration (IAM Identity Center). I find myself getting lost in the AWS docs and never getting the answers I want, or finding tutorials that only cover public CloudFront distributions.


Does anyone have any good guides or advice on what path I should be following?

Like I say, in my mind the use case is simple (user --> hits website --> auths --> fills out form --> triggers lambda), but I'm finding it very hard to implement.

https://redd.it/z72aky
@r_devops
Windows container use / market share

Hello,

Does anyone know of a study or dataset that will show adoption of Windows containers across industries compared to adoption of Linux containers or no containers (on Windows)?

I would love to see some actual data that has buckets between Windows and Linux.

I'm not talking about the host OS being Windows and running Docker with Linux containers. I would really like to see some research on how many people are actually running production workloads in Windows containers compared to production workloads in Linux containers.

Anyone?

https://redd.it/z71zbq
@r_devops
Azure DevOps generate NSIS Setup using Pipelines

Hi there

I would like to generate the NSIS executable every time changes are pushed to main. I am now able to pull the NSIS setup and install it in a job. But where can I "export" the built executable to? To my understanding, Artifacts only support packages like nupkg. Maybe push the exe to a git repo?
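
For context, Azure Pipelines artifacts are not limited to package formats like nupkg; the PublishBuildArtifacts task accepts arbitrary files, including an .exe. A minimal illustrative sketch (the NSIS script and output file names are assumptions):

```yaml
# Illustrative azure-pipelines.yml: pipeline artifacts can hold any file,
# including an installer .exe, so no package format or extra git repo is needed.
trigger:
  - main

pool:
  vmImage: windows-latest

steps:
  - script: makensis installer.nsi      # assumption: your NSIS script name
    displayName: Build NSIS installer
  - task: PublishBuildArtifacts@1       # downloadable from the run's Artifacts tab
    inputs:
      PathtoPublish: 'MySetup.exe'      # assumption: the output your .nsi produces
      ArtifactName: 'installer'
```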

https://redd.it/z7087p
@r_devops
Some tool like drone.io for CD

I'm really embarrassed to say that I love docker-compose over K8s for its simplicity & effectiveness.

But tools are really lacking. drone.io is like a docker-compose.yml: simple, effective & beautiful.

I'm wondering, is there any drone.io-like tool for CD?
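
For readers unfamiliar with it, the docker-compose-like feel the post refers to looks roughly like this in a .drone.yml (step contents are illustrative):

```yaml
# Illustrative .drone.yml: each step is an image plus commands,
# much like a service definition in docker-compose.yml.
kind: pipeline
type: docker
name: default

steps:
  - name: build
    image: golang:1.19
    commands:
      - go build ./...
  - name: deploy
    image: alpine
    commands:
      - echo "deploy step goes here"   # placeholder
```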

https://redd.it/z79o6k
@r_devops
Best implementation to spin up k8s clusters on demand?

I need to spin up multiple copies of the same cluster for different clients.

FastAPI, PostgreSQL, Elasticsearch.

Been thinking Jenkins, Helm, k8s.

Storage on OpenEBS?

https://redd.it/z703bh
@r_devops
GitFlow Branching Strategy and Alignment to Best Practices



Good evening everyone. First, let me start off by stating that we are a publicly traded company that falls under SOX controls & audit requirements.

For code branching strategies, we generally have followed the GitFlow strategy since our environments match up to the GitFlow branches (feature, develop, release, & main).

Our branches and how it maps to our environments

================================================================================

feature branch - developer's local instance for unit testing

develop branch is deployed to our DEV env.

the release branch is deployed to our QA env

main = PROD env.

================================================================================

Here is our typical workflow:

1. Developers create a feature branch off the "develop" branch and make their code changes. They then perform unit testing of their changes.
2. The developer then requests a PR to the "develop" branch, which is reviewed and approved by a lead dev. The code is now in the "develop" branch after approval. When all the features from all developers are in the "develop" branch, the team may perform end-to-end integration testing if there are a lot of features that need to be tested together.
3. When the dev team is ready for formal QA testing by the QA individual/team, a release branch is cut from the develop branch, and the build is deployed to the QA environment. QA will validate the features in this QA environment, and an automated regression suite is run against the entire build. If a bug is found by QA, the feature is sent back to develop to repeat from the first step onwards. When we are audited, this is the environment that is noted in each backlog ticket for each feature.
4. When the release has passed all testing, the deployment in QA is released into the next environment: PROD.

We have a consulting group who prefers to change it up so that:

The developer's unit testing and formal QA by the QA team/individual are performed off the feature branch before the developer makes a PR to get the code changes merged into the "develop" branch. They said this avoids having to do a ton of PR merge requests for each break-fix cycle of a feature.
In this workflow, all the code, by the time it makes it to the QA environment, has been fully tested already. There is nothing more to test in QA besides maybe running the automated regression suite against that new set of changes.

I want to support a more efficient workflow for getting code into production, but I also need to address SOX change control and stay within best practices at the same time. I am curious to hear whether others are following the process used by our internal team above, or whether you agree with the consulting group on having formal QA performed before the feature branch is merged into the "develop" branch.

Thank you ahead of time.

https://redd.it/z7eirb
@r_devops
How do you update a MongoDB image to the latest version without losing the volume data?

How do you update a MongoDB image to the latest version without losing the volume data? Is there a tutorial for doing this? I wanted to update my MongoDB version locally, but then I realized I would wipe out the data in my local machine. Need to go from v4 to v6.
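
Not a full answer, but a sketch of the usual pattern: with a named volume, the data survives image changes, and MongoDB requires stepping one major release at a time, raising the feature compatibility version at each step (take a mongodump backup first regardless). Service and volume names below are placeholders:

```yaml
# Illustrative docker-compose.yml: the named volume survives image upgrades.
# Step the tag one major release at a time (e.g. 4.4 -> 5.0 -> 6.0) and after
# each step, inside the container, raise the compatibility version, e.g.:
#   mongosh --eval 'db.adminCommand({setFeatureCompatibilityVersion: "5.0"})'
# (use the legacy mongo shell on 4.x images, which do not ship mongosh)
services:
  mongo:
    image: mongo:4.4   # bump this one major version per restart cycle
    volumes:
      - mongodata:/data/db
volumes:
  mongodata:
```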

https://redd.it/z7eu4z
@r_devops
What OS is your Desktop/Laptop?

What OS do you use as your main system for work? Windows? Linux? Mac?

https://redd.it/z7i3io
@r_devops
Can you create a Postgres Deployment with multiple replicas consuming to the same PV?

I am trying to set up HA PostgreSQL, but I have very minimal knowledge about this.

The PV of the cluster is being managed using Longhorn (or some other service, another team is working on this). Since the storage is already being made highly available, can I simply create two Postgres services that use the same data directory in the storage?

This might create deadlocks when two or more Postgres services are trying to access the PV and any one of them is trying to write to it, right? What if I develop a retry mechanism on the application level to handle these deadlocks?

Does this approach make sense and is actually implementable?

Thanks.

https://redd.it/z7jsa2
@r_devops
What is a good alternative to Heroku for free-tier usage?

I have had an app running on Heroku's free tier since 2018. Now that Heroku is turning off the free tier, I need a new place to host it, ideally one that provides some sort of usable pseudo-CNAME like Heroku does. Can you guys suggest some sort of alternative?

https://redd.it/z7n83n
@r_devops
Does "managed Nomad" exist?

Hi all, I've been working in a Nomad/Consul stack and I wonder why there are a lot of 'managed kubernetes' providers but I can't seem to find any 'managed nomad' providers. As far as I know, they both support the same use cases. How come this doesn't exist? Has anyone tried it? Am I missing something?

Nomad has proven to be rock-solid and really easy to use, so having this in a 'managed' form where you don't have to think about managing the infrastructure might be valuable?

https://redd.it/z7q7l0
@r_devops
Observability and logging request bodies

Hi,
Do you save request bodies too, or only the endpoints?
We are thinking of saving request bodies as well, but I think it's not necessary for all the data, and we also have some sensitive data.

I don't understand why we need to save login/register requests rather than saving only what we need.

What info do you log and save? Thanks.

https://redd.it/z7s56l
@r_devops