Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Does this seem like an efficient route for me to get into DevOps?

I have around 3 years of experience as a Software Engineer, and I currently work remotely.

I have a CS degree that I recently finished.

I have one AWS cert, which is the Cloud Practitioner.

My plan is to get another AWS cert, the Solutions Architect Associate.

Finish this course, which is a resume challenge: https://cloudresumechallenge.dev/docs/the-challenge/aws/

Then put all this up on my LinkedIn and personal website with the intention of landing a remote job in DevOps.

Curious about other opinions.

https://redd.it/13992xz
@r_devops
Datadog Metrics for Terminated Kubernetes Pods+Nodes

I've recently implemented an EKS cluster for Jenkins agents using the kubernetes plugin. The plugin creates ephemeral pods that run a given Jenkins pipeline/job and then terminates the pod. I've also implemented an autoscaling group to add nodes when needed.

I've recently installed Datadog on the cluster and it's working but it appears that once a pod terminates or the cluster scales down (therefore terminating nodes) the data for the given node or pod disappears in Datadog. I would like to see this historical data so that I can fine-tune our requests/limits for pods. I would also like to choose the best instance type to use for our cluster by looking at historical data for nodes.

I've googled this topic for a day and haven't found anything that touches this subject. Is this possible? I'm surprised I haven't found others on the Internet who have run into this issue, which also raises the question: am I going about this the wrong way?
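For what it's worth, Datadog itself retains metric history after a pod or node is gone — it's the live container/infrastructure views that drop terminated objects. Historical series should still be queryable in a dashboard or notebook by grouping on the pod or node tag; the metric and tag names below are typical Datadog Kubernetes integration names, so treat the exact spelling as an assumption:

```
avg:kubernetes.memory.usage{kube_cluster_name:jenkins-eks} by {pod_name}
avg:kubernetes.cpu.usage.total{kube_cluster_name:jenkins-eks} by {pod_name}
```

Widening the dashboard time range past a pod's deletion should then show its series up to termination, which is enough to tune requests/limits after the fact.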

https://redd.it/1396yn7
@r_devops
Preferred way of handling/exposing gRPC backends on K8S?

Working on a PoC (proof of concept) project that utilizes K8S and a framework written in Go that spins up a service that has HTTP and gRPC back ends. (One service object for each type of connection)

As this is not the final productionized version, I could cut some corners and simply point Auth0 and the ingress at a single port on the gRPC endpoint (headless service manifest). But from my limited understanding of gRPC, this wouldn't scale well, since it would only end up pointing at one pod's IP address; I would probably have to create/expose more endpoints through load balancers/DNS records to reach more pods at greater scale. And the power of gRPC is keeping long-lived connections open and multiplexing requests through them, rather than opening parallel connections like HTTP.

But after more research, it seems the way to work around this is to implement a service mesh such as Linkerd or Istio (or the ten million other service meshes out there).

I guess this was a very long winded way to give context and to ask the community at large this question:

On Kubernetes, what is your preferred method to load balance and expose gRPC services (whether through service mesh deployments, headless services, or port forwarding)? Ideally methods that integrate well with Auth0 and the AWS Load Balancer Controller.

EDIT: should clarify the only reason I'm thinking of cutting corners on this PoC is that there's a deadline to demo this to clients of the company in about a month's time.
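For reference, the headless-service option mentioned above looks roughly like this — with `clusterIP: None`, cluster DNS returns every ready pod IP, so a gRPC client (or a mesh sidecar) can balance its long-lived connections across pods itself. Names and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-svc        # placeholder name
spec:
  clusterIP: None          # headless: DNS resolves to all pod IPs
  selector:
    app: my-grpc-app       # placeholder pod label
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
```

The trade-off is that DNS-based balancing only rebalances when clients re-resolve, which is exactly the gap a mesh like Linkerd/Istio closes by balancing per-request at L7.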

https://redd.it/139appz
@r_devops
Where can I get to know the tech stacks of big companies, other than StackShare (which often seems incomplete)?

For example, Spring Boot is not mentioned in Netflix's tech stack. The highscalability.com blog posts seem quite old and not updated.

https://redd.it/1397act
@r_devops
Laptop suggestion

Hi,

I wanted to know your suggestions on which laptop to go for. I haven't worked with a Mac before and am currently using a Windows system.

I will be mainly working on Cloud/DevOps tools like Docker, Terraform, Ansible, Azure CLI, Jenkins, Kubectl and others.

I’m not familiar with the M2/M2 Pro chipset or whether it offers any advantage over a Windows laptop when it comes to work.

So should I go for a Mac or look for a Windows laptop (and if so, which one)?

Thanks

https://redd.it/139fftn
@r_devops
Looking for projects ideas for experienced devops engineers

I've been working in the same place for 4+ years and have 8 years of experience in general.

I'm looking to do some side projects to broaden my experience with other technologies and make things more interesting.

Ideally I would like to hear about a project idea that you maybe had fun completing or taking part in.

Maybe your dream project that includes full system spec and tool list but you never had time to do?

I only ever worked with bitbucket so a project involving github / gitlab could be cool.

Don't be afraid to throw in some interesting cloud services or open source tools.

Technologies should be relevant to 2023 - no Jenkins please 😶‍🌫️

https://redd.it/139pynp
@r_devops
How do you write documentation that developers without experience are able to follow and understand?

This seems like my nemesis. I cannot put myself in the shoes of someone who has a hard time googling "What is Helm" or "AWS Lambda in Python"...

I just can't. It's fundamentally written in my DNA that if I don't know something -> I learn, search and understand.

But it seems most recent developers are NOT LIKE THAT.

I'm supposed to create introduction documentation for any tech we use. Something like "From ZERO to HERO"...

I just can't do it; most things are so obvious to me that I always forget to put them in the documentation.

Worst part is that I know that even if I write an amazing doc piece -> people don't read it and still complain they don't understand and want to be spoon-fed or hand-held.

I cannot be the only one with this issue. How are other platform teams handling this? How do you not lose your mind with those new devs?

https://redd.it/139ryg1
@r_devops
Trunk based dev to deployment

At the company I work at, for an internal tool, we recently switched our git strategy from using long lived branches for our 4 environments DEV, QA, UAT & PROD to using a single branch called mainline.

The current CI/CD setup just deploys from the respective branch whenever there is a merge to it. Having moved to a trunk-based development model, I was looking around for solutions to set up our CI/CD to allow devs to test their code in lower environments before promoting to higher envs for QA or UAT testing.

What possible ways can this be done without needing to trigger the CI/CD pipeline for each environment manually, while also guaranteeing that devs or QAs have confirmed that the code works as expected?
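One common shape for this, sketched below as a hypothetical GitHub Actions workflow (adapt to whatever CI system is actually in use): every merge to mainline deploys to DEV automatically, and promotion to the higher environments is gated by required reviewers configured on each deployment environment, rather than by separate branches. Job names, the `deploy.sh` script, and environment names are placeholders:

```yaml
# Hypothetical promotion pipeline: one trigger, approval-gated environments.
name: deploy
on:
  push:
    branches: [mainline]
jobs:
  dev:
    runs-on: ubuntu-latest
    environment: DEV           # deploys automatically on every merge
    steps:
      - run: ./deploy.sh dev   # placeholder deploy step
  qa:
    needs: dev
    runs-on: ubuntu-latest
    environment: QA            # configure required reviewers on this environment
    steps:
      - run: ./deploy.sh qa
  uat:
    needs: qa
    runs-on: ubuntu-latest
    environment: UAT
    steps:
      - run: ./deploy.sh uat
  prod:
    needs: uat
    runs-on: ubuntu-latest
    environment: PROD
    steps:
      - run: ./deploy.sh prod
```

The approval click on QA/UAT/PROD then doubles as the "devs or QAs confirmed it works" signal, so nobody triggers per-environment pipelines by hand.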

https://redd.it/139xotd
@r_devops
Amidst Docker and Podman, why does one not hear about systemd-nspawn, mkosi and debootstrap often?

Containerization at its core works with namespacing, user IDs, and networks in userspace.

There is a great write-up by Benjamin Toll about using systemd containers, where he breaks down all the capabilities that systemd-nspawn, alongside tools like mkosi/debootstrap, can deliver without daemon layers like Docker/Podman and runc/containerd.

I am quite curious whether there are certain things this set of tools cannot achieve in the container space. For the DevOps folks who use them: can you share your experience with systemd-nspawn?

https://redd.it/139u04t
@r_devops
Keeping contents in the same git repository from two different folders

We have a parent folder ABC on which git is configured, meaning that if we cd into ABC and run git config -l, we can see which git repository it points to.

Now I want to add the fstab file, which is under the /etc directory, to the same git repository. Is there any way I can achieve this?

We are thinking of doing an rsync between /etc/fstab and an fstab file (under ABC), so that it can be pushed to the same repository.

Please let me know your suggestions.
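The rsync approach can work as a small cron script. Below is a self-contained sketch of the sync-and-commit step, run in a scratch directory so it needs no root; the paths and commit message are placeholders, and `cp -p` stands in for `rsync -a /etc/fstab "$REPO/fstab"`, which behaves the same for a single file:

```shell
# Demo of the sync-and-commit flow in a scratch directory (no root needed).
WORK=$(mktemp -d)
SRC="$WORK/etc/fstab"   # stand-in for /etc/fstab
REPO="$WORK/ABC"        # stand-in for the existing ABC repo
mkdir -p "$WORK/etc" "$REPO"
echo "UUID=demo / ext4 defaults 0 1" > "$SRC"

git -C "$REPO" init -q
git -C "$REPO" config user.email "ops@example.com"
git -C "$REPO" config user.name  "Ops Bot"

# The cron-able part: copy the system file in, commit only when it changed.
cp -p "$SRC" "$REPO/fstab"
git -C "$REPO" add fstab
git -C "$REPO" diff --cached --quiet || git -C "$REPO" commit -qm "Sync /etc/fstab"
```

A symlink from ABC into /etc is the usual alternative, but git stores the link itself rather than the file contents, so a copy/rsync step like the above is the more reliable route.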

https://redd.it/139qdlx
@r_devops
Issue with auth when using a bitnami postgres helm chart

helm install release oci://registry-1.docker.io/bitnamicharts/postgresql \
  --set auth.username=admin \
  --set auth.password=postgres123 \
  --set auth.database=database \
  --set architecture=replication

const sequelize = new Sequelize('database', null, null, {
  dialect: 'postgres',
  port: 5432,
  replication: {
    read: [
      {
        host: 'release-postgresql-read',
        username: process.env.DB_USER || 'admin',
        password: process.env.DB_PASSWORD || 'postgres123'
      },
    ],
    write: {
      host: 'release-postgresql-primary',
      username: process.env.DB_USER || 'admin',
      password: process.env.DB_PASSWORD || 'postgres123'
    }
  },
  pool: {
    max: 10,
    idle: 30000
  },
});

This is what I have.

When I run kubectl get services, I get:

release-postgresql-primary ClusterIP 10.111.239.207 <none> 5432/TCP 85m
release-postgresql-primary-hl ClusterIP None <none> 5432/TCP 85m
release-postgresql-read ClusterIP 10.107.56.248 <none> 5432/TCP 85m
release-postgresql-read-hl ClusterIP None <none> 5432/TCP 85m

I use the ClusterIP services to connect to the read and write instances of the Bitnami PostgreSQL chart, but I get the error:

Error: SequelizeConnectionError: password authentication failed for user "admin"
(node:19) UnhandledPromiseRejectionWarning: SequelizeConnectionError: password authentication failed for user "postgres"
    at Client._connectionCallback (/app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:143:24)
    at Client._handleErrorWhileConnecting (/app/node_modules/pg/lib/client.js:318:19)
    at Client._handleErrorMessage (/app/node_modules/pg/lib/client.js:338:19)
    at Connection.emit (events.js:400:28)
    at /app/node_modules/pg/lib/connection.js:116:12
    at Parser.parse (/app/node_modules/pg-protocol/dist/parser.js:40:17)
    at Socket. (/app/node_modules/pg-protocol/dist/index.js:11:42)
    at Socket.emit (events.js:400:28)
    at addChunk (internal/streams/readable.js:293:12)
    at readableAddChunk (internal/streams/readable.js:267:9)
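One thing worth ruling out (a common gotcha with the Bitnami chart, so treat this as a guess): if this release was ever installed before, the password stored in the persisted volume wins over the `--set auth.password` value on reinstall, and the error mentioning user "postgres" suggests at least one connection is falling back to defaults rather than the env vars. Comparing against the password the chart actually provisioned may help; the secret and key names below are assumed from the release name `release`:

```
kubectl get secret release-postgresql -o jsonpath='{.data.password}' | base64 -d
```

If that value differs from `postgres123`, deleting the release's PVCs (or reusing the stored password) should resolve the auth failure.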

https://redd.it/13a3wug
@r_devops
Lightweight ELK alternative for ingesting and analyzing local logs?

Looking for something like ELK stack that I could spin up quickly locally to forward some structured logs and analyze them. What are existing solutions?

https://redd.it/13ab0l5
@r_devops
ingress-service not working on minikube

NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service nginx kub.com 192.168.49.2 80 10m

I made a curl request:


curl -X GET "https://kub.com/api/mark?name=peter"


I am getting:


<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /</pre>
</body>
</html>

When I use docker-compose, I get a response, but with kubernetes, it's not getting routed.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: kub.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cluster-ip-service
            port:
              number: 3000

I executed these:

minikube ssh
sudo ip link set docker0 promisc on
minikube addons enable ingress
minikube addons enable ingress-dns


But I can't seem to make it work. I made a similar config for a simpler app and I could make a GET request to example.com/

https://stackoverflow.com/questions/66275458/could-not-access-kubernetes-ingress-in-browser-on-windows-home-with-minikube


Is there something wrong with the ingress config?
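Possibly, yes — one likely suspect is the `rewrite-target: /$1` annotation: with `path: /` there is no capture group, so `$1` is empty and every request (including `/api/mark`) gets rewritten to `/`, which matches the `Cannot GET /` response. Either drop the annotation entirely, or add a capture group as sketched below (also note the curl uses `https://` while the ingress only lists port 80):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: kub.com
    http:
      paths:
      - path: /(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: cluster-ip-service
            port:
              number: 3000
```

With the capture group, `/api/mark?name=peter` is rewritten to itself and reaches the backend unchanged, which is presumably what the docker-compose setup was doing all along.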

https://redd.it/13aac21
@r_devops
Aren't we just glorified Linux admins?

At least I wish I were.
In my sysadmin times documentation was god.
The inquisition came if you did not write one.
You had time. Not much time if any server or user had an emergency... but any other day you had time to actually improve the systems.

Working at a software company now and it's clear as day to me that they don't understand that devops & cloud engineering are in no way that much different from normal sysadmin work... but now I'm seen mostly as a productive force whose time on projects is another bill for the customer... and not a maintenance role like in past times.
If I write docs while working on projects I could easily double my time working on them. And I'm already at >150h for a pipeline with new tech and all environments (dev to prod).
There was a project where I ate away a third of the allocated budget. No documentation there... there was no time for that. That happens if you want a new, shiny and novel pipeline that your 12 year old jenkins can't handle.
Our devs can use the same tools for years or decades and don't have to learn that many new things. Every new project is novel to me because of different requirements.

No time for RnD.
No time for learning.
No documentation.
No time for me to write documentation.
Can't even make templates. No time, who pays for that?
Often only temporary, makeshift fixes possible (that become permanent)... because... WHO PAYS FOR FIXING THE FIXES?

On a range of 1 to 10.
How fast should I run?

https://redd.it/13adr4v
@r_devops
Transitioning back to a hands-on DevOps/platform engineering role

Hey Reddit,

I'm currently leading a successful team of DevOps/platform engineers across America, Europe, and South Asia. While I love the challenge of leading a team, I miss the hands-on work of DevOps and platform engineering.

In my current role, I spend a lot of time managing my team, setting priorities, and working with stakeholders to understand their needs. While these are important skills to have as a leader, I miss the technical challenge of building and deploying systems at scale.

I want to transition back to a hands-on DevOps/platform engineering role, but I'm worried about the interview process. Many companies these days require candidates to spend hours on coding challenges, often with little context or relevance to the actual job. While I'm confident in my skills and experience, I don't want to spend a week coding for someone when I could be working on real projects.

So, I'm turning to the Reddit community for advice. Have you successfully transitioned back to a hands-on DevOps/platform engineering role after leading a team? What tips do you have for someone looking to make the switch? How did you navigate the interview process and prove your skills and experience without spending hours on a coding challenge?

Additionally, I'd love to hear from hiring managers and recruiters. What do you look for in candidates who want to transition back to a hands-on role? Is there anything I can do to stand out during the interview process and prove my skills and experience without spending hours on a coding challenge?

I appreciate any advice or insight you can provide. Thanks in advance

https://redd.it/13aja3f
@r_devops
Can I build you a website for free?

Hey guys, I'm a new entrepreneur looking to get my foot in the door of web dev. I'm actually pretty good and just need about 10-15 people who'll let me build their sites for free to build my portfolio. DM me!!

https://redd.it/13aighj
@r_devops
Help required

I am a backend developer. I have an API that I have to host on a series of VPSes.

I have set up load balancing for my API using nginx, MySQL master-master replication, as well as a reverse proxy on top of it all. All my VPSes are the 1 vCPU / 2 GB RAM variant.

I am looking for properly managing this infrastructure.

I was researching into DevOps tools for my need and found some:
Jenkins
Ansible

I got overwhelmed by these services and their resources.

Where do I start? How do I begin?

I have version control set up in GitHub with automated testing, and cron jobs on the VPSes pull every hour.

I know nothing about DevOps.

Happy to answer further questions for help.

Thanks in advance.
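To make the Ansible entry above concrete: a minimal hypothetical playbook that would replace the hourly cron pull with a push-style deploy across all VPSes at once. Hostnames, the repo URL, paths, and the service name are all placeholders:

```yaml
# Hypothetical deploy playbook: run with `ansible-playbook -i inventory deploy.yml`
- hosts: api_servers        # group of VPSes defined in your inventory file
  become: true
  tasks:
    - name: Check out the latest API code
      ansible.builtin.git:
        repo: https://github.com/you/your-api.git   # placeholder
        dest: /opt/api
        version: main

    - name: Restart the API service
      ansible.builtin.systemd:
        name: api           # placeholder systemd unit
        state: restarted
```

Starting with Ansible for deploys and config, and only adding Jenkins (or GitHub Actions, which is already at hand) once there's a pipeline worth automating, keeps the learning curve manageable.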

https://redd.it/13amxs7
@r_devops
What do people use for quota management?

As the title says. I have various accounts on AWS, GCP and other tools where I need to adhere to quotas (hourly/weekly/monthly) for certain services (e.g. can't go over X messages in AWS SNS).

I'm wondering how I can monitor/alert on these before they have an impact on my pricing.
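One concrete pattern on the AWS side: most service usage (including SNS publishes) surfaces as CloudWatch metrics, so a metric alarm with a threshold set below the quota gives early warning; the Service Quotas console can also create such alarms for the quotas it tracks. A hedged Terraform sketch — the metric, period, and threshold are illustrative, so check which metric your particular quota actually maps to:

```hcl
# Illustrative: alarm when daily SNS publishes exceed a soft budget.
resource "aws_cloudwatch_metric_alarm" "sns_publish_budget" {
  alarm_name          = "sns-publish-budget"
  namespace           = "AWS/SNS"
  metric_name         = "NumberOfMessagesPublished"
  statistic           = "Sum"
  period              = 86400           # one day, in seconds
  evaluation_periods  = 1
  threshold           = 100000          # set below your actual quota
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.alerts.arn]   # placeholder alert topic
}
```

The same idea ports to GCP with monitoring alert policies on the `serviceruntime.googleapis.com/quota` metrics.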

Thanks for any pointers!

https://redd.it/13apl9h
@r_devops