Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
How to handle major version bumps when using a fully automated CI/CD pipeline? (SemVer)

I have some open-source apps that use various tooling for SemVer based on conventional commits, such as Commitizen, Cocogitto and standard-version. The choice of tool varied with project needs and when I created each project, but all of them share the same issue that I'm not sure how to address:

When I want to bump a major version, say when the app is ready to go from 0.x to 1.x, how can I get these tools to do that instead of their regular bumping strategy of feat commits for minor and fix commits for patch releases?

Cocogitto has the --major flag, but I'm not sure what kind of rules could be used in my CI/CD pipeline (GitHub Actions/Drone) to use that flag instead of the automatic bumping strategy.

Or should I just run a major release manually and push the tag to Git? Then of course I have to make sure to include a [SKIP CI] in the commit message to avoid triggering the pipelines, which skips all the automated release steps like the changelog and Docker image, so that isn't ideal either.
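One pattern worth considering (a sketch, assuming GitHub Actions and Cocogitto's `cog bump` flags; the step that installs `cog` on the runner is omitted) is to keep the automatic bump on push but expose a manually triggered `workflow_dispatch` input that forces `--major`:

```yaml
name: release
on:
  # normal automatic releases on push, plus a manual trigger
  push:
    branches: [main]
  workflow_dispatch:
    inputs:
      bump:
        description: "Version bump (auto, major, minor, patch)"
        default: "auto"

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # cog needs full history and tags to compute the bump
      - name: Bump version
        run: |
          # push events leave the input empty, so fall back to --auto;
          # a manual run from the Actions tab can pass "major"
          cog bump --${{ github.event.inputs.bump || 'auto' }}
```

A manual run with `bump: major` then cuts the 1.0 release through the same pipeline, so the changelog and Docker image steps still run and no [SKIP CI] commit is needed.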

https://redd.it/13j0781
@r_devops
How do you create your Secret Key

We use AWS Secrets Manager. I created about 20 keys manually, but we have a lot more. How do you create your keys?

I don't want to push all the keys to GitHub and then deploy them with Terraform.

But how do you create your keys when you have a lot of them?
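One low-tech pattern (a sketch; the secret names, the `secrets.env` file, and the dry-run `run()` wrapper are all made up for illustration) is to keep the values in a local file that never hits Git and loop over it with the AWS CLI:

```shell
# run() only prints each command; remove the wrapper to execute for real
run() { echo "+ $*"; }

# local file of name=value pairs, kept out of version control
cat > secrets.env <<'EOF'
myapp/db-password=s3cr3t
myapp/api-token=abc123
EOF

# one create-secret call per line
while IFS='=' read -r name value; do
  run aws secretsmanager create-secret \
    --name "$name" --secret-string "$value"
done < secrets.env
```

A common compromise is to let Terraform manage the secret resources while the values are written out-of-band like this, so nothing sensitive lands in GitHub.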

https://redd.it/13izsv6
@r_devops
Why I created a new build system based on Alpine Linux

PAKman is one of the 4 core modules that power instellar.app. It's open-sourced and builds your application into Alpine packages using GitHub Actions, which get delivered to an S3-compatible bucket you specify via instellar. Our platform then takes that built package and deploys the application on your infrastructure.

You can continue reading or enjoy the full post with images here

## In the beginning

Back in 2018, before I embarked on the journey of building my own build system, I looked at using Docker. By that point I had been using Docker for a long time; I was an early user, and these were the issues I constantly ran into:

- Large build artifacts (hundreds of MB)
- The need for a registry
- Bandwidth consumption
- Slow deployments

At first I considered just using Docker because it was the 'standard'. Everyone was using Docker, Docker Swarm was in its heyday, and k8s was gaining steam. Most Docker images were built with Ubuntu as the base image, so as you can imagine the built images were quite large. Alpine Linux was gaining popularity and starting to be used in the Docker community to reduce image size. I often wondered why the community didn't just build using Alpine's native build system, so I tried it for myself. It took me a long time to work through the Alpine build system; the documentation was scarce and I had to trial-and-error my way to understanding it. My little experiment made me realize that while the final output was amazing (built packages ranged from a few MB to 50 MB depending on the application), the system was extremely complex to use. I figured most people probably just ended up using Docker for its simplicity and readily available documentation.

I ended up mastering Alpine's package system and threw together some scripts to automate building Alpine packages and make it easy. There was one problem, however: this meant not using Docker to run the applications. With Docker you build the app into an image and run the entire image. You wouldn't just install a custom package inside a Docker container, because then the image would need a package manager, which would only make the final image larger. This is where the concept of Docker being an 'application' container hit hard.

I also explored Kubernetes to see what it could do and figured that it was way too complex for most deployments. The conclusion I came to was that k8s and Docker belong together; if I wanted to use my Alpine package build method, I would need something else.

## Enter LXD

While doing my research I found LXD. It advertised itself as a 'system' container runtime, meaning that creating an LXC container gives you the entire OS, including the package manager. This was exactly what I was looking for and fit my build system like peas in a pod. With LXD containers, all I had to do was expose the Alpine package on a file system, add it as a repository inside the Alpine Linux container, and run apk update && apk add [package]. I hacked together a proof of concept with Bash and Terraform, and amazingly it worked! I was able to build my app, ship it to my LXC container, and it was blazingly fast: apps were being deployed in a matter of seconds! Upgrades were also handled by Alpine packages via the -u flag, and were even faster than installing a fresh package.
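The in-container half of that flow can be sketched as follows; the package name and paths are illustrative, and the `run()` wrapper only prints the `apk` commands (inside a real Alpine container you would run them directly and append to `/etc/apk/repositories`):

```shell
# stands in for the shared file system exposing the built packages
REPO_DIR=$(mktemp -d)

# register the package directory as an apk repository
# (in Alpine this line goes into /etc/apk/repositories)
echo "$REPO_DIR" >> repositories.example

run() { echo "+ $*"; }   # dry-run wrapper; drop inside a real container
run apk update
run apk add myapp        # fresh install from the custom repository
run apk add -u myapp     # in-place upgrade, as described above
```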

## A new Invention is needed!

While my proof of concept worked, it was far from ready for primetime. I needed something robust, written in a language I'm familiar with (Elixir), and, most importantly, that worked with existing infrastructure I didn't have to host. The first versions of PAKman I hacked together were a combination of building Packer images with a Bash script that ran in a custom GitLab runner. It worked, but it was neither elegant nor flexible. When GitHub Actions was released in 2018, I explored it and realized that I could create my own custom action inside a Docker container, which meant I could use whatever programming language I wanted for the build system.

I realized that I needed to create a simple solution for people; telling everyone to 'just use Alpine's build system' would not work. My idea was to boil everything down to a .yml file, with an intermediary layer that takes the YAML file and converts it into files the APKBUILD system for Alpine Linux understands. This was the birth of PAKman.

## Project Goal

While I still needed to use a Docker container to create the final build, since that's how GitHub Actions works, I realized I could simply extract the artifact and ship it to S3-compatible storage. This was the simplest design: once the package was built, I could install and run it anywhere Alpine Linux runs. This achieves the following goals:

- No custom infrastructure needed for building
- Packages that are as small as possible
- Savings on bandwidth costs
- Fast deployments (a matter of seconds)

Many may challenge my decision to focus on saving bandwidth, but I have my reasons. I believe that if something can be done well, it should be. In the big picture, the goals of PAKman serve our mission for instellar.app: Instellar enables anyone to run their own PaaS on their own infrastructure, so it's important for us to keep the cost of ownership low. If we can save on bandwidth costs for our customers, it's our duty to do it. Another valuable asset we save is time. Small packages mean fast deployments! The update for the blog you are reading now was deployed in 6 seconds. You can see PAKman in action.

The final built artifact that gets shipped over the wire for this NextJS blog weighs in at 5.69 MB.

Welcome to the future!

https://redd.it/13j2jp2
@r_devops
Enterprise DevOps- Importance and Key Benefits You Need to Know

Discover the transformative power of Enterprise DevOps in driving business success. Explore its role in fostering agility, automation, and effective communication for accelerated growth and competitive advantage.

Read more- https://www.silvertouch.com/blog/enterprise-devops-importance-and-key-benefits-you-need-to-know/

https://redd.it/13j2dpm
@r_devops
Options to break into the field?

I'm close to finishing my associate degree in Programming at 22. I know

\- Python (plus cleaning and transforming data), from a DS class I did separate from my degree
\- Bash: I know my way around a terminal, had a Linux class where we deployed a webapp with Docker, can work with vim, and know some scripting
\- Did a cybersec class with Kali Linux (basics, also learned more about Docker)
\- My networking knowledge is lacking, which leads me to my question.

I can go for a Bachelor's degree in 1.5 years because I've done my Associate's. I have a choice of either not doing a Bachelor's, or doing one and choosing a specialization: Sec, Systems and Services, or Data Science.

I want to get into DevOps because I feel it would be more fulfilling for me to help developers instead of building applications. I like automating things and making my own and others' work more efficient, I enjoyed all my Linux classes, and I still want to code quite a bit instead of only doing sysadmin work.

Do you think I have the right motivation to get into the field (whether Platform Engineering, Cloud, DevOps, ...)?

And do you think this Bachelor's degree with the classes I listed is a good road to take, versus just applying now (I have the luxury of still living with my parents) or choosing Data Science as a specialization?

Programming Fundamentals
Scripting
Web Engineering
Python
MySQL
Database Fundamentals
Computer Systems
Computer Systems Architecture
OS
OS Advanced
Networking & Security
Networking Fundamentals
Network Architecture
Industrial Networks
Information Sec
Computer Infra & Advanced Networking
Virtualisation & High availability
Cloud Computing

I won't have to do the basic classes anymore because of my Associate's degree (like Programming Fundamentals, Scripting, Python, MySQL), but I included all the classes anyway.

https://redd.it/13j1mhw
@r_devops
Tailor AWS Identity Center (SSO) Permissions Per Account with IAMbic

Hey everyone. I wrote a blog post for those of us who struggle to manage permissions in AWS Identity Center (SSO), and who find that existing tooling like Terraform and CloudFormation lacks visibility into your actual IAM state and requires too much work to manage permissions. Would love your feedback: https://www.noq.dev/blog/tailor-aws-identity-center-sso-permissions-per-account-with-iambic
The tutorial shows how to do the following with IAMbic:
\- Get a complete, eventually-consistent accounting of your cloud IAM in version control in under an hour, all without writing any code
\- Customize Permission Set Access Rules and IAM permissions per account, in a centralized GitOps workflow
\- Prevent drift on IAM resources you want to be exclusively managed via IAMbic, like sensitive permission sets
Hope you find this helpful!

https://redd.it/13j86y7
@r_devops
Introducing DevPod - Codespaces but Open Source

[https://github.com/loft-sh/devpod](https://github.com/loft-sh/devpod)

[https://loft.sh/blog/Introducing-devpod-codespaces-but-open-source/](https://loft.sh/blog/Introducing-devpod-codespaces-but-open-source/)

DevPod allows dev teams to take full control over their dev environments without being locked into a specific provider. Developers can write code in any language and run it anywhere. For example, they can test on virtual machines, code in Python with VS Code running on Docker Desktop, or in Go running in EKS. If the provider they need doesn't exist, they can build it.

Why DevPod? Compared to hosted services such as GitHub Codespaces, JetBrains Space, or Google Cloud Workstations, DevPod has the following advantages:

* Open-Source: DevPod is 100% open-source and extensible. A provider doesn’t exist? Just create your own.
* Client-only: No need to install a server backend. DevPod runs solely on your computer.
* Cross-IDE support: VS Code and the full JetBrains suite are supported. Other IDEs can be connected through SSH.
* Rich feature set: DevPod already supports prebuilds, auto inactivity shutdown, git & docker credentials sync, with many more features to come.

https://redd.it/13j9oaq
@r_devops
Masking AWS RDS

Hi guys

I'm on the task of creating an obfuscated DB for the QA environment. Currently I have two Lambda functions, which take a snapshot of the prod DB (we use RDS PostgreSQL), restore it, mask it with a stored procedure, and replace the old QA DB with the new one.

This allows devs to have a weekly updated QA db with the same amount of data as prod.
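For reference, the snapshot-and-restore half of that Lambda flow maps onto two RDS API calls. This is a dry-run sketch with made-up identifiers; the `run()` wrapper only prints the commands instead of executing them:

```shell
run() { echo "+ $*"; }   # print instead of executing; drop for real use

SRC_DB=prod-postgres           # hypothetical instance identifiers
SNAP=qa-refresh-$(date +%F)
QA_DB=qa-postgres-new

# snapshot prod, then restore the snapshot as a fresh instance
run aws rds create-db-snapshot \
  --db-instance-identifier "$SRC_DB" --db-snapshot-identifier "$SNAP"
run aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier "$QA_DB" --db-snapshot-identifier "$SNAP"
# masking then runs only against $QA_DB before QA is switched over
```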


The problem is that the stored procedure is inserted by devs on the prod DB, and even though the procedure and the Lambda have IF statements to prevent it from executing in prod, I'm not comfortable with the way it's implemented.

Does anyone know if it is possible to do this with DMS, or in another way?


Thanks!

https://redd.it/13j70uh
@r_devops
Got company's code on my personal laptop via Azure DevOps. Am I in trouble?

So the project I am working on is on Azure DevOps. On my personal laptop, I used my company's email ID to log into Visual Studio. Of course, I could see my project in the Team Explorer. I connected to the project and the entire codebase got downloaded on my personal laptop.

AM I IN TROUBLE? WILL THEY KNOW? IF YES, HOW?

https://redd.it/13jdw46
@r_devops
Beta test testdeck, an automated test management platform

Hi, all. We launched a new tool recently and are looking for beta testers/feedback. Would really appreciate you poking around if it looks interesting. Our eng team is poised for support and ready to take notes.

blog post:
https://www.aviator.co/blog/announcing-testdeck/

Skip the post and get started:
https://www.aviator.co/testdeck

https://redd.it/13jb9fq
@r_devops
Oracle Cloud Infrastructure

Nearly all of my experience has been with AWS, with a sprinkling of GCP. I've never worked with OCI before. Does anyone have experience with it? How was it? Would taking a position that works exclusively with OCI make my experience too niche? Anything I should be aware of before going to work with it? How well do the services/concepts translate from the other cloud providers?

https://redd.it/13jjc0d
@r_devops
Interview questions about Work/life balance

I am in the process of looking for a new job and my biggest priority is work life balance. At my current job I have essentially 0 time for personal hobbies, making/meeting friends, life in general, etc during the work week. This is not sustainable and I am deeply burned out here after just a short time.

How can I bring up this topic during the interview process? I know that on-call is a reality of this job and I'm fine with that, but at my current job we have a particularly brutal on-call, long working hours, and regular crunch time which is just too oppressive and completely unsustainable.

How can I trust people's answers to the questions I bring up? It has been my experience that companies will very easily lie or tell half-truths in response to these questions.

https://redd.it/13jk9l5
@r_devops
Any advice to integrate my knowledge of DevOps?

I've been a software developer for a year and a half in a DevOps setting, and a few months ago I started to train more in areas of Ops.

I'm beyond confused. I thought most software dev was to code, test and deploy, and that's it. I was fine with using Git, pushing to Bitbucket, and testing with unit tests and Gherkin BDDs. But now I'm struggling to learn how all of these concepts fit into this methodology: Chef cookbooks and recipes, SCM, Jenkins CI/CD pipelines, Docker and containers, Kubernetes pods, cloud services (AWS, GCP, Azure), Bash scripting/bootstrapping. And every video I watch feels like it's in a different language (not a coding one, lol).

I'm trying to understand the process of DevOps, why do we use all of these tools and what's the full process like from the beginning to the end of the cycle. I seem to have an idea of a few of them but it's very vague and I don't know if it's realistic.

If you have any advice on books, content, or tutorials I can check out, please let me know! I would also really appreciate any help with further questions I have in this area.

https://redd.it/13jlbwc
@r_devops
What does it take to be a true DevOps?

Besides fancy tools such as k8s, Jenkins, Ansible, etc., what does it take to be a DevOps Engineer, one of the top ones?

I think DevOps is a complex role because it combines different IT skills and goes way beyond knowing "x" or "y" technology.

What do you think?

https://redd.it/13jn3vf
@r_devops
Simple cost-effective way to deploy multiple docker containers?

Read a bunch of threads on this but they were all 2+ years old. Thought maybe there are new or better ways to do things now.

I have multiple side projects that I host on an EC2 instance (t4g.medium) using a process manager and nginx as a reverse proxy. Works incredibly well and is simple and cheap. I use either Let's Encrypt or Cloudflare for SSL.

As my number of projects grow, I find myself using newer tech, but it's a hassle to update things, say your language runtime or security updates to the OS.

So now I want to move over to docker containers but keep things simple and cheap and add on CI/CD. Some of my considerations:

- Optimize for cost
- Add in CI/CD (mostly just to move my secrets into the cloud so if my EC2 instance or laptop dies, it's not an issue)
- Keep RDS connections low (I tried to go serverless, but I'd have to pay for an RDS Proxy)
- Keep complexity low, have as few components as possible (looking at you k8s)
- Remain on AWS, ok with lock-in
- Scalability is not important, I'm ok with a single container for a monolithic service
- Hopefully reduce some maintenance
- Try to keep to the t4g.medium instance

I've looked around at previous threads and options are:
1. Swap the process manager with docker on the EC2 instance, call it a day: cheap and effective, but if you need to update the server, all containers go down, also not sure how easy this is going to be to set up CI/CD
2. ECS + EC2: I hear you have to use an ELB, which adds ~$18/mo, or hack it together with Service Discovery or Service Connect
3. ECS + Fargate: Probably the best option but has a cold start and some of my older apps have built-in cron jobs, so the service needs to always be up
4. Self-managed K8s on the EC2: really don't want to go this route, too much of a heavy lift, and I don't think a single-node configuration is recommended anyway
5. Don't bother with any infra and use AWS App Runner
6. Suck it up and go to Vercel
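For what it's worth, option 1 can be sketched as a single Compose file fronted by nginx; the service names and images below are placeholders, not a recommendation:

```yaml
# docker-compose.yml on the t4g.medium instance (illustrative names)
services:
  proxy:
    image: nginx:alpine
    ports: ["80:80", "443:443"]
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # routes to the apps below
    restart: unless-stopped
  app-one:                         # one container per side project
    image: ghcr.io/me/app-one:latest
    restart: unless-stopped
  app-two:
    image: ghcr.io/me/app-two:latest
    restart: unless-stopped
```

CI/CD can then be a pipeline job that builds and pushes each image and runs `docker compose pull && docker compose up -d` over SSH, though a host reboot still takes all containers down, as noted in option 1.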

Ideally, I'd be able to go from container to live deployment in half an hour.

What's everyone else doing?

https://redd.it/13j85pu
@r_devops
Building logging-as-a-service with ClickHouse

I work as an engineer at a startup focused on application monitoring, and recently we introduced a logging product powered by ClickHouse. For those interested, here's a brief overview of what we learned during the process.
To begin with, a significant amount of time was dedicated to designing the architecture for this product, particularly the schema of our logging table. We initially used the OTEL specification as a starting point to expedite the design, but we still had to experiment with column definitions. To optimize query performance, we made several adjustments such as modifying the precision of our timestamp column, introducing indices to our attributes map, and configuring the primary key in a way that facilitated cursor pagination (further details are available in the post below). Implementing support for multi-tenancy posed an interesting challenge as each of our customers had unique data retention requirements.
Overall, it has been an enjoyable journey, and we have found ClickHouse to be exceptionally fast (previously, we used OpenSearch). We hope this will be beneficial for future startups building their applications from scratch with ClickHouse; here's a full blog link:
Link to post: https://www.highlight.io/blog/how-we-built-logging-with-clickhouse
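To make the schema discussion above concrete, here is a hedged sketch of what such a logging table could look like in ClickHouse DDL; the column names, timestamp precision, and index settings are illustrative guesses, not the actual schema from the linked post:

```sql
CREATE TABLE logs (
    ProjectId      UInt32,
    Timestamp      DateTime64(9),   -- precision was one of the tuned knobs
    UUID           String,          -- tiebreaker column for cursor pagination
    Body           String,
    LogAttributes  Map(LowCardinality(String), String),
    -- secondary (data-skipping) index over attribute keys to speed up filters
    INDEX idx_attr_keys mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1
) ENGINE = MergeTree
PARTITION BY toStartOfDay(Timestamp)
-- primary key ordered so a (project, time, uuid) cursor paginates cheaply
ORDER BY (ProjectId, toUnixTimestamp(Timestamp), UUID);
```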

https://redd.it/13j78w1
@r_devops
So, I just had an incident today

and it cost the company an unrecoverable amount of USD 2,742,111.75.

Spoiler: it’s due to a sneaky bug 🐛

https://redd.it/13j9uha
@r_devops
Udacity's nanodegree reviews

I bought Udacity's SRE nanodegree to upskill and get better at interviews. It promised that at 10 hours weekly it would take you four months to complete.


I literally completed it in a day; their estimates are a joke. Worse, they try to get you to sign up for four months for a single class. They don't have a subscription model where you can take different courses or nanos, or whatever they call them. Luckily they offer a 7-day refund period.


In terms of content, it mainly consists of very short videos, reading some modules, and completing some simple quizzes. There was only one hands-on assignment, and the final project very much resembles that assignment; it's too easy even for a junior engineer. The instructions/solutions for one lab were wrong and outdated; I had to go to the forums to learn that they don't update their content. The final project was literally setting up Prometheus/Grafana on a K8s cluster and monitoring four metrics.

One of their other Cloud Engineer courses was reputable, and they have other programs too (https://www.udacity.com/course/cloud-native-application-architecture-nanodegree--nd064), but it seems sus after my encounter. Maybe their other nanos like ML/robotics are good, but the quality of this course was falsely advertised.

https://redd.it/13jrxqw
@r_devops