Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Here's a quick summary of my job search and the offer I received - Software Developer with 20+ years of experience

- To paint a clear picture: I'm an older developer (56 years old), I don't have a college degree, and I haven't worked at FAANG. I started 24 years ago. I was looking for a salary of 160k to 170k and fully remote work.

- Started looking for a job: December 2nd

- Applications/resumes sent: around 40

- Number of interviews: 2 (4 rounds with the company that hired me, and 1 with another company; that second company was the one that contacted me).

- Accepted the offer: January 10th. (Meaning only one month of searching, though the company that hired me started its process after my first week of searching.)

- I only used LinkedIn.

- I only applied to jobs where my skills were a very strong match. Sometimes I made exceptions for opportunities in areas where I have extensive experience (usually e-commerce or education). The company that hired me was a combination of a good technological fit and vertical experience (related to education).

- I focused on companies in my NYC area so I could sell the advantage of being able to meet them on-site if needed. But none of them responded to me, even though it seemed like a good plan.

- I ignored job postings older than a few days and focused on brand-new ones with fewer than 150 applicants.

- I tailored my resume for each posting by removing any technology completely unrelated to the requirements.

- I listed only my last 15 years of experience, to avoid age discrimination and outdated technology.

- I studied LeetCode problems.

- I used AI tools like ChatGPT and interviewhammer.

https://redd.it/1jf7zqm
@r_devops
Runs-on vs. terraform-aws-github-runner

Hey guys 👋

I’m planning on implementing both solutions for a POC and comparison for my client soon. Anything I should be aware of / known issues?
How was your experience with either solution and why did you end up selecting one over the other?

Runs-on is fairly new and requires licensing, but it offers greater flexibility (resource requests are made in the workflow manifest).
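To illustrate the "resource requests in the workflow manifest" point: with RunsOn, jobs request hardware via labels in the workflow file itself, so you don't pre-provision runner pools per instance type. A rough sketch; the exact label names (`runner=`, `spot=`) are from memory and should be verified against the RunsOn docs for your version:

```yaml
# Hypothetical GitHub Actions workflow requesting resources via RunsOn labels
name: build
on: push
jobs:
  build:
    # resource requests live in the manifest, not in infra config
    runs-on:
      - runs-on=${{ github.run_id }}
      - runner=4cpu-linux-x64
      - spot=false
    steps:
      - uses: actions/checkout@v4
      - run: make build
```

With terraform-aws-github-runner, by contrast, instance types are generally fixed per runner pool in the Terraform configuration and selected by label.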

terraform-aws-github-runner is an enhanced version of Phillips’ original solution, well known and popular.

This is NOT about ARC (the GitHub k8s controller); I won’t spin up and maintain a cluster just for that. It doesn’t fit my client’s needs.

https://redd.it/1jf593d
@r_devops
What are available career pathways for me to take as a junior DevOps?

So for the record, I have 2 years of Software Engineering experience working on fullstack web apps, and I am currently in a Junior DevOps position.

I am curious if anyone has any advice for me with my credentials on where I could potentially advance in my skillset. I am most likely going to do an Azure Certification, possibly both AZ-204 and AZ-104.

I am possibly interested in security as well. But I was wondering: what are my options for advancing my skill set, and what career pathways are there for me?

https://redd.it/1jfbi1u
@r_devops
Thinking of moving from New Relic to Datadog or Observe

My company is thinking of moving from NR to either DD or Observe. Wondering if anyone has done this change and how it went?

If so, how much of a lift was it to move from NR to DD or Observe?

I’m a bit concerned about how much time and effort it may take to move over & get everything configured - especially with alerts.

Any advice would be greatly appreciated!

https://redd.it/1jfbmly
@r_devops
Framing work experience

Hi DevOps community. I was hoping the community could shed some light on how to frame a particular year of my work experience while looking for new roles. For context, I have 4 total years of professional experience. For 1 of those years I worked as a Systems Engineer at a well-known IT management consulting firm that is primarily a DoD contractor (won't directly say the name of the company, but it’s the one that “House of Lies” is based on), and while there I had an active Secret clearance. On top of that, there was so much red tape that I was only ever assigned to two (very) slow-moving projects. I don’t know how to properly frame my experience there in interviews. Please be constructive but kind. Thanks everyone!

https://redd.it/1jfk5d0
@r_devops
Anyone use Cribl?

I have a team at work doing a PoC of the Cribl product for a very specific use case, but I'm wondering if it's worth a closer look as an enterprise o11y (observability) pipeline tool.

https://redd.it/1jfp117
@r_devops
Weird situation after reorg

Hey all. I am looking for some advice. As part of a reorg, I was moved under the ops team's manager, who manages a team of infra/devops engineers. Previously, I reported to the engineering team director, and I am the only devops guy managing an app.

It's been over 2 weeks, but I haven't heard anything from this new manager. I even sent an email 4 days ago asking to set up a quick call, but got no response. He doesn't seem to be on PTO either; his status always shows available or in a meeting. I am feeling a bit stuck and left out. To add to the challenge, the other members of this team manage totally different products/apps, so there hasn't been much overlap or opportunity to naturally connect.

Just wanted to get any ideas on how to approach this. I'm also worried about lack of communication going forward working with his team.

Thanks!


https://redd.it/1jfrd5r
@r_devops
AWS DevOps & SysAdmin: Your Biggest Deployment Challenge?

Hi everyone, I've spent years streamlining AWS deployments and managing scalable systems for clients. What’s the toughest challenge you've faced with automation or infrastructure management? I’d be happy to share some insights and learn about your experiences.

https://redd.it/1jfscf7
@r_devops
How to set realistic expectations for adhoc work

I'm a DevOps consultant; this was at a previous employer. The feedback I got from my manager was that I wasn't scanning Slack enough for ad-hoc work. I was a team of one, in charge of everything infrastructure- and security-related at the startup. Sometimes, if I was working on something that required a lot of concentration and debugging, I didn't want to context-switch to a Slack thread, particularly if I wasn't tagged or sent a direct message.

Basically, I was expected to constantly scan Slack channels, respond ASAP to any issues developers were having, and drop everything I was doing. For example, one of the GitLab runners was slow and performing poorly. The runner was still operational, but builds were taking 10 to 15 minutes longer than normal for a job that usually takes 10 minutes. My manager told me I was at fault because I didn't stop everything I was working on, reply within 15 minutes that I was working on a fix, and resolve the issue within 1 to 2 hours. I was told this days later, after the issue had been fixed, because I had worked on the fix for the slow GitLab runner later in the day.

I was not getting direct messages or being tagged, so this would mean scanning the common Slack channels every 5 to 10 minutes, all day, which seemed unrealistic if I'm doing active development work on other features throughout the day. I didn't want to seem lazy, because I was willing to work 70-hour weeks if required, but the client got mad when I didn't respond within 20 minutes to a message at 8 PM, while I was at the gym, about a code review for something that wasn't urgent.

Are these just really odd expectations for DevOps at startups, or has anyone else encountered similarly unrealistic expectations from a manager? How did you meet them, or convince the manager to set more realistic ones?

https://redd.it/1jfr7pn
@r_devops
Need help with pipelines

# TLDR;

Junior dev, the only one on the team who cares about pipelines, looking for advice on how to go about serverless.

# Thanks a lot

So I'm back. I'm the guy from this post. I'm very grateful for the help you guys gave me a couple of months ago. We're using Liquibase that a lot of you recommended and I managed to create a couple of pipelines in GitLab trying to automate a couple of things. I'm here because, while I enjoyed trying out Liquibase and building those little pipes, I'm pretty lost.

Let me explain:

## What we have

We started using Liquibase as I mentioned before and it's really helping. After that I decided to try Gitea and test some pipes (we were using GitHub Enterprise Server on-premises). Long story short, I really liked it, but I felt like it wasn't as enterprise-ready as GitLab.

We started using GitLab and with its sprint management and pipes the whole team was impressed. Well, more for sprint management. I decided that automating things was good, so I got to work and after a week I had a set of usable steps for pipes.

We are not using a dedicated repo for pipes because we are still trying this out; we only have a couple of repos, and this one is the only one with pipes. I read that you can create a single repo for shared pipeline definitions and have other repos call the steps from it, or something like that.

Anyway, we develop in .NET for the BE and TypeScript with React for the FE. I created 3 groups of pipes distributed across these stages:

- build

- test

- analyze (used for static analysis with SonarQube)

- lint

- deploy (used to publish a new version of lambda and push new files to S3 for FE)

- publish (used to apply that new THING on the various envs dev|test|demo|prod)

Maybe publish and deploy have their meanings swapped, but you get the idea.

Build, test, analyze, and lint are executed on every commit to main (we are using trunk-based development, but nobody knows about it except me; I keep it a secret because some people don't like it).

Deploy is executed on tags like Release-v0.5.89, while publish runs on Release-dev|test|demo|prod-v0.5.89. We started logging the status codes of the actions executed by the BE, from both the APIs and the BusinessLogic, to CloudWatch, to track the error rate in a future pipe, although I don't know how to use this data yet.
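For readers following along, a stage layout like the one described in the post typically maps onto a `.gitlab-ci.yml` roughly like this; the stage names and tag patterns mirror the post, while the job names and scripts are purely illustrative:

```yaml
# Illustrative .gitlab-ci.yml skeleton for the stages described above
stages: [build, test, analyze, lint, deploy, publish]

build-be:
  stage: build
  script: dotnet build
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

deploy-lambda:
  stage: deploy
  script: ./deploy.sh   # publish a new Lambda version, push FE files to S3
  rules:
    - if: '$CI_COMMIT_TAG =~ /^Release-v/'

publish-env:
  stage: publish
  script: ./publish.sh  # apply the new version to dev|test|demo|prod
  rules:
    - if: '$CI_COMMIT_TAG =~ /^Release-(dev|test|demo|prod)-v/'
```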

I feel like I need a little hint: what to look for, or what the purpose of the next action should be. I was thinking about a way to auto-rollback, but our site is not in production yet, so we are the only ones using it at the moment. Help?? 🥹

If it helps, I can post the pipes via a pastebin or something tomorrow morning (Central European time).

Edit: fixed syntax and linting 😆. The first published version was rushed through and I didn't really read back what I wrote.

https://redd.it/1jfuczh
@r_devops
The outdated and the new tools you use/prefer?

I'm a fresher (3rd-year undergrad). I heard from a senior that Docker is getting outdated and that the container runtime is no longer Docker but containerd. That's new to me; I've heard of containerd but never worked with it. What other shifts like this should I know about to differentiate myself from others?

https://redd.it/1jfxtqy
@r_devops
Problem solving, troubleshooting for juniors

Hello,
I am a junior (I mentioned before that I am currently on an internship) and I would like to ask about your approach to debugging, troubleshooting, and problem-solving. Do you have any interesting books or courses that could guide me through different methodologies and help improve these skills? Right now, what I do is write the bug description in the chat, and once I know what it relates to, I look at the code to see what's wrong.
I have found this book: https://artoftroubleshooting.com/book/
What do you think?

https://redd.it/1jfzl7h
@r_devops
How do you leverage your TAMs?

We are multi-cloud, but mostly AWS. We have enterprise accounts, but honestly we almost never talk to them except to escalate a ticket, and even that is extremely rare.

What kinds of things do you use a TAM for? I honestly don't even know what I would ask them to help with.

https://redd.it/1jfxvf9
@r_devops
How much traction does SLSA have? With ML pipeline safety trending, is it getting more interest?

I remember there was a big splash a few years ago when Google kicked off a public SLSA (Supply-chain Levels for Software Artifacts; it's a mouthful) group. Is anyone actually actively adopting SLSA? Or under pressure to adopt it?


Just looking at public sources, there's a lot of regular activity on https://slsa.dev/, with release 1.1 coming out soon. I've found some recently published papers and the occasional blog post on the topic. And I did notice a recent small spike in Google search queries.


Is there more to it than that? I don't see very many Reddit posts about it at any rate.

https://redd.it/1jg2hak
@r_devops
AWS costs. Save me.

Why does it feel impossible to forecast application hosting prices? I have used the AWS calculator and it's like another language. I literally want to host a Keycloak server plus a .NET/Postgres (RDS) calendar-scheduling, PDF-storage, and note-taking application that will initially serve 4 people but could serve 5,000 daily active users by next year. The AWS calculator gives me anywhere between £100 and £20,000 a month. Why isn't there a human guide to these costs? Like "10,000 people transferring X MB per session per day would cost X amount".
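For the data-transfer line item specifically, a back-of-envelope estimate of exactly the kind the post asks for is doable by hand. A sketch, assuming the commonly published on-demand egress price of roughly $0.09/GB for the first 10 TB; verify the current figure on the AWS pricing page, and remember egress is only one line item alongside compute, RDS, and storage:

```python
# Back-of-envelope AWS data-transfer-out estimate (price is an assumption).
EGRESS_PRICE_PER_GB = 0.09  # USD, first-10TB on-demand tier; verify on the AWS pricing page

def monthly_egress_cost(users, sessions_per_day, mb_per_session, days=30):
    """Rough monthly data-transfer-out cost in USD."""
    gb_per_month = users * sessions_per_day * mb_per_session * days / 1024
    return gb_per_month * EGRESS_PRICE_PER_GB

# 10,000 users, 1 session/day, 5 MB per session:
print(round(monthly_egress_cost(10_000, 1, 5), 2))  # → 131.84
```

The same pattern (volume × unit price, per line item) works for EC2 hours, RDS instance hours, and S3 storage, which is usually more legible than the full calculator.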

https://redd.it/1jg3154
@r_devops
I’ve applied to over 100 jobs with no luck. Can you please roast my resume?

What’s wrong with my resume? I have yet to receive any positive responses from the companies I’ve applied to. I would appreciate some feedback. Thanks in advance!

Here’s my resume: https://imgur.com/a/akSS1FL

https://redd.it/1jg33yr
@r_devops
Newbie to DevOps here - General advice requested

Hi. I'm starting with DevOps and would like to do a Proof of Concept deployment of an application to experiment and learn.

The application has 3 components (frontend, backend, and Keycloak) which can be deployed as containers. The data tier is implemented with a PostgreSQL database.

There is no development involved for the components; the application is an integration of existing components.

We are using GitLab with Ultimate licenses and target AWS for the deployment.

We would like to deploy on a Kubernetes cluster using the AWS EKS service. For the database we want to use Aurora (RDS) for PostgreSQL.

The deployment will be replicated across 4 environments (test, uat, stage, production), each with different sizing for the components (e.g. number of nodes in the Kubernetes cluster, number of availability zones, size of the EC2 instances...). Each environment lives in a different AWS account, all part of the same AWS Organization.

In our vision, we will have a pipeline with 4 jobs, each deploying the infrastructure components into the relevant AWS account using Terraform. The first job (deploy to test) is triggered by a commit on the main branch, and the rest are triggered manually, with the success of the previous one as a prerequisite.
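The promotion flow described above maps fairly directly onto GitLab's manual rules. A sketch, where the job names, the `.tfvars` convention, and the Terraform invocation are all illustrative; `when: manual` with `allow_failure: false` makes each gate block later stages until someone runs it and it passes:

```yaml
# Illustrative GitLab pipeline: auto-deploy to test, manual promotion onward
stages: [test, uat, stage, production]

.deploy:
  script:
    - terraform init
    - terraform apply -auto-approve -var-file="${CI_JOB_STAGE}.tfvars"

deploy-test:
  extends: .deploy
  stage: test
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

deploy-uat:
  extends: .deploy
  stage: uat
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
      allow_failure: false   # blocks later stages until run and passing

deploy-stage:
  extends: .deploy
  stage: stage
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
      allow_failure: false

deploy-production:
  extends: .deploy
  stage: production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual
      allow_failure: false
```

Cross-account credentials per job (e.g. via per-environment CI variables or OIDC role assumption) are the part this sketch leaves out.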

And we have some (millions of) doubts... but I will include here only a few of them:

1. GitLab groups/projects: a single project for everything, or should we have a group containing one project for the infrastructure and another for the deployment of the application? Or is it better to organize it in a completely different way?

2. Kubernetes/EKS: a single cluster per environment or a cluster per component (e.g. frontend, backend, keycloak...)?

3. Helm: we plan to do the deployment on the kubernetes cluster using helm charts. Any thoughts on that?


Thanks in advance to everybody reading this and trying to help!

https://redd.it/1jg1jbm
@r_devops
Any Dev or User Experience with CoreWeave or Nebius for AI/ML Workloads?

I’m curious to hear about your experience—good or bad—as a developer or user working with CoreWeave or Nebius, especially for AI or machine learning workloads.
• How’s the developer experience (e.g., SDKs, APIs, tooling, documentation)?
• What’s the user experience like in terms of performance, reliability, and support?
• How do they compare in cost, scalability, and ease of integration with existing ML pipelines?
• Anything you love or hate about either platform?

Would love to hear your insights or compare notes if you’ve used one or both.

https://redd.it/1jg90ks
@r_devops