Reddit DevOps
268 subscribers
1 photo
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Thank you, WeaveWorks!

Not sponsored or anything.
I started using fluxcd heavily three months ago and immediately fell in love with it. The way it is designed, its speed, and its robustness are very cool. Seeing how CD really becomes CD, without all the hassle of managing infrastructure and pipelines, is a true game changer.

I recently started implementing tools to optimize our automated testing, because we are planning to implement flagger using k6 load testing. After a couple of days I figured out that all these new tools (testkube, tracetest, etc.) need their own management k8s cluster.

I was wondering: how can I install an EKS cluster quickly, without terraform? That's when I discovered another tool: eksctl. Made by WeaveWorks. Again. I won't explain why I love it, but believe me when I say it's great. Best things: flux integration and IRSA management.
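For anyone curious, eksctl takes a declarative cluster config, and IRSA is just a few lines under `iam`. A minimal sketch, with hypothetical names, region, and instance sizes:

```yaml
# cluster.yaml -- names and region are examples, adjust to your environment
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: tooling-mgmt
  region: eu-central-1
iam:
  withOIDC: true              # enables the OIDC provider needed for IRSA
  serviceAccounts:
    - metadata:
        name: ebs-csi-controller-sa
        namespace: kube-system
      wellKnownPolicies:
        ebsCSIController: true
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
```

Then `eksctl create cluster -f cluster.yaml` stands the whole thing up.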

So now my cluster is running and my apps use PVCs. Some of them also need Aurora DBs. I was wondering: how can I configure and create the Auroras the GitOps way? That's when I encountered crossplane. I installed it, wrote some XRs and XRDs, and liked it initially, but installing thousands of CRDs made the cluster unresponsive. Flux's helm-controller even started crashing. So I decided to remove it. But I love the concept of deploying infrastructure with k8s as the orchestrator while using GitOps, so I kept searching a little. That's when I discovered another component: tf-controller. Made by WeaveWorks. Again.
I just deployed an S3 bucket and it is so fast! Tomorrow I will try to deploy an RDS cluster, like I already did with crossplane, to see how it compares.
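For context, tf-controller drives plain Terraform modules from a Flux source via a Terraform custom resource. A sketch of what that looks like (repository name and path are hypothetical, and the API version may differ across tf-controller releases):

```yaml
# Assumes a Flux GitRepository pointing at your Terraform code already exists.
apiVersion: infra.contrib.fluxcd.io/v1alpha2
kind: Terraform
metadata:
  name: my-bucket            # hypothetical name
  namespace: flux-system
spec:
  interval: 10m
  approvePlan: auto          # plan and apply without manual approval
  path: ./s3-bucket          # hypothetical path inside the repo
  sourceRef:
    kind: GitRepository
    name: infra-repo         # hypothetical
    namespace: flux-system
```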

But I just want to say thank you already. You can't believe how much easier your genius makes my everyday life. Everything you guys develop has such high quality and design. You are the Apple of DevOps for me. I can't thank you enough for the work you have done. I am a big fan, and if there is a chance to get any insights into the way you decide on designing your apps and tools, please let me know. Your ideas are perfect and just what our industry needs. Thanks <3

https://redd.it/11t8tbp
@r_devops
Terraform automation with GitHub and GCP Workload Identity Federation

This is how I automate IaC following the least privilege principle with GitHub and Google Workload Identity Federation. Hope you find it useful...

The workflow runs terraform plan or terraform apply based on the event that triggered it, and picks a dedicated service account for each so we can strictly follow the least-privilege principle. If the workflow is triggered by a pull_request event, it executes the terraform plan step with the tf-plan service account. If instead it is triggered by a push to main, it executes the apply step using a service account authorised to manage the resources in GCP.
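The event-to-service-account split described above can be sketched roughly like this (pool, provider, project, and service-account names are all hypothetical placeholders):

```yaml
name: terraform
on:
  pull_request:
  push:
    branches: [main]

permissions:
  contents: read
  id-token: write          # required to mint the OIDC token for WIF

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: google-github-actions/auth@v1
        with:
          workload_identity_provider: projects/123456/locations/global/workloadIdentityPools/github/providers/github
          # least-privileged SA for PRs, apply-capable SA only on push to main
          service_account: ${{ github.event_name == 'pull_request' && 'tf-plan@my-project.iam.gserviceaccount.com' || 'tf-apply@my-project.iam.gserviceaccount.com' }}
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      - run: terraform plan
        if: github.event_name == 'pull_request'
      - run: terraform apply -auto-approve
        if: github.event_name == 'push'
```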

https://youtu.be/DMwl9WcSAL8

https://redd.it/11t5r2z
@r_devops
Prometheus Push Architecture

I know Prom is extremely opinionated and only does pulls. That's fine, I understand where they're coming from.

But I have some highly mobile devices (think Raspberry Pis or phones) that may not be connected at all times. So whatever application they're running, metrics can't be collected at all times.

Another use case might be egress-only networks. So if you can't set up VPNs to your edge devices, they can only push out to a well known Prom endpoint.

Therefore I want to push metrics (and queued metrics) instead.

Is Pushgateway still the way to go? Or are Prometheus "extensions" like Thanos a better fit?
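For the record, besides the Pushgateway there is the remote_write path: each device runs Prometheus in agent mode (available since v2.32), scrapes locally, and pushes out to a central endpoint; the WAL buffers samples on disk while the device is disconnected, which covers the intermittent-connectivity case. A minimal sketch, with a hypothetical central endpoint and target port:

```yaml
# prometheus.yml on the edge device; run with: prometheus --enable-feature=agent
scrape_configs:
  - job_name: local-app
    static_configs:
      - targets: ['localhost:9100']   # whatever the device exposes locally

remote_write:
  - url: https://prom.example.com/api/v1/write   # hypothetical central receiver
    queue_config:
      max_shards: 2                   # keep resource usage small on the device
```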

https://redd.it/11t3yyv
@r_devops
How to run Jenkins pipeline jobs in parallel which call the same downstream job

I am a beginner with Jenkins and writing Groovy scripts for pipelines. I want to trigger a downstream pipeline in parallel for all the files inside a folder given by the user. Below is the sample code I wrote:


def GLOBAL_RELEASE_NUMBER
def GLOBAL_BUILD_NUMBER

pipeline {
    agent { label 'centos7-itest' }

    options {
        timestamps()
        buildDiscarder(
            logRotator(
                daysToKeepStr: '100'
            )
        )
        ansiColor('xterm')
    }

    parameters {
        // some parameters
    }

    environment {
        // For python3
    }

    stages {
        stage("setting environment") {
            environment {
                // setting up environment
            }
            steps {
                script {
                    // deciding build number and release number
                }
            }
        }
        stage("Clone repo & replace variables & call my pipeline") {
            steps {
                withCredentials([
                    // credentials
                ]) {
                    cleanWs()
                    deleteDir()
                    git branch: "${params.branch}", credentialsId: 'jenkins-user-key-vcs', url: '[email protected]:some_repo/devops.git'
                    script {
                        sizingFiles = []
                        def branches = [:]
                        def counter = 0

                        if (params.sizing_directory.endsWith(".yaml")) {
                            sizingFiles.add(params.sizing_directory)
                        } else {
                            sh(
                                returnStdout: true,
                                script: "find ${params.sizing_directory} -type f -name '*.yaml'"
                            ).trim().split('\n').each { sizingFile ->
                                sizingFiles.add(sizingFile)
                            }
                        }

                        for (def sizingFile in sizingFiles) {
                            echo "Processing ${sizingFile}"
                            sh """
                            sed -i 's/{{[[:space:]]*user[[:space:]]*}}/${params.test_user}/g;
                            s/{{[[:space:]]*owner[[:space:]]*}}/my_team/g;
                            s/{{[[:space:]]*dept[[:space:]]*}}/team/g;
                            s/{{[[:space:]]*task[[:space:]]*}}/sizing/g;
                            s/{{[[:space:]]*SoftwareVersion[[:space:]]*}}/$GLOBAL_RELEASE_NUMBER-b$GLOBAL_BUILD_NUMBER/g' ${sizingFile}
                            cat ${sizingFile}
                            """

                            branches[counter] = {
                                stage('yb') {
                                    build job: "Myteam/myPipeline",
                                        wait: false,
                                        parameters: [
                                            text(name: 'sample_yaml', value: readFile(file: sizingFile)),
                                            string(name: 'branch', value: "${params.branch}")
                                        ]
                                }
                                counter += 1
                            }
                        }
                        parallel branches
                    }
                }
            }
        }
    }
}

The issue is that when I trigger this pipeline with a folder containing 2 YAML files, the job is triggered for the first file and completes before the job for the next file is triggered. I want to run all the jobs in parallel, hence the "wait: false" on the individual jobs. Can someone point out what I am doing wrong?
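For what it's worth, one thing that jumps out in this pattern is a classic Groovy closure gotcha: counter is only incremented *inside* the closure, so it is still 0 at every map assignment and each iteration overwrites branches[0]; capturing the loop variable into a per-iteration local is also the usual defensive habit before handing it to a closure. A sketch of the common fix, reusing the same hypothetical job and parameter names:

```groovy
for (def sizingFile in sizingFiles) {
    // ... sed step as above ...
    def file = sizingFile                // per-iteration copy for the closure
    def yaml = readFile(file: file)      // read now, not when the branch runs
    branches["yb-${counter}"] = {
        stage("yb-${file}") {
            build job: "Myteam/myPipeline",
                wait: false,
                parameters: [
                    text(name: 'sample_yaml', value: yaml),
                    string(name: 'branch', value: "${params.branch}")
                ]
        }
    }
    counter += 1                         // increment outside the closure, so
                                         // every branch gets its own map key
}
parallel branches
```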

https://redd.it/11thwql
@r_devops
Devops interview types?

Hey fellow devops folks, what have your interviews been like? A take-home assignment, a live leetcode programming session, or drilling devops technical knowledge of various tools?

https://redd.it/11t3rp2
@r_devops
Any alternative to Redshift for streaming data from Aurora for analytics?

Hey guys,

So we use Redash to run a lot of analytics queries on Aurora, and we are seeing its limitations now.

The obvious choice is to use Redshift since our DBs are AWS Aurora.

But was wondering if there is a better alternative to look for?

We are thinking BigQuery, ClickHouse and Snowflake.

Does anybody have any experience with this?

Our requirements are:

* Connector for streaming data from Aurora. Ideally real-time.
* Connection with Redash
* Ruby ActiveRecord gem

BigQuery, Clickhouse and Redshift have ActiveRecord gems for connection, but Snowflake doesn't.

But Redshift seems like the only option if we need real-time streaming of data.


Thank you.

https://redd.it/11tjo1a
@r_devops
How often do you do deployments at your startup/company? A poll (version 2)

Just to get a feel for how DevOps/SRE culture has impacted the deployment frequency at various companies/startups for your PRODUCTION environment.

And just to clarify, it means "how often do you deploy one particular selected component", not how many times you deploy one artifact to hundreds of prod environments.


Thank you very much for your answer!


https://redd.it/11tktk3
@r_devops
Authentication of SQL Db in pipeline.

I am trying to deploy an Azure SQL DB using a dacpac. I ran the deployment job, but I need to authenticate to the SQL DB by setting authenticationType to connection string, and I am using Key Vault to store the connection string. Is there any other way to do authentication?
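For reference, the Key-Vault-backed connection-string setup described above looks roughly like this in an Azure DevOps pipeline (service-connection, vault, and secret names are hypothetical, and exact task input names may vary by task version):

```yaml
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-service-connection'   # hypothetical
    KeyVaultName: 'my-keyvault'                  # hypothetical
    SecretsFilter: 'SqlConnectionString'

- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-service-connection'
    AuthenticationType: 'connectionString'
    ConnectionString: '$(SqlConnectionString)'   # pulled from Key Vault above
    DeployType: 'DacpacTask'
    DacpacFile: '$(Pipeline.Workspace)/drop/db.dacpac'
```

The task also supports other AuthenticationType values (e.g. SQL server auth or service principal), which avoid storing a full connection string at all.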

https://redd.it/11tlvc8
@r_devops
Why You Can’t Find Anything in Your Monitoring Dashboards

Too often we run into an incident and jump to the dashboard, only to find ourselves drowning in endless data and unable to find what we need. This can be caused not just by data overload, but also by too many or too few colors, inconsistent conventions, or a lack of visual cues.

The dashboard needs to be designed in a way that allows users to easily access and interpret the data. It requires more than an engineering mindset to do it right. Take these two guiding principles:
*When designing a dashboard, **think like a UX designer**, and **keep it simple**.*
Here are some guidelines for effective dashboard design:

* Understand your dashboard’s user persona and use case
* Utilize the right data visualizations
* Create a clean layout with an intuitive flow
* Keep the layout consistent
* Correlate between different dashboards and views
* Annotate thresholds, alerts and events on the graph
* Overlay values on the same graph only when it makes sense

See dashboard examples and more details on this guide: [https://medium.com/p/12fcc23d34c8](https://medium.com/p/12fcc23d34c8)

https://redd.it/11t3i0i
@r_devops
Feedback needed: Will this help you?

Hi all! I'm the growth PM at ngrok, and I've been working hard to make our product easier to use and understand for people testing and debugging web apps. As part of this, I'm working to expand our free tier, and have included webhook verification and OAuth.


My question for this group is: does webhook verification matter? We don't see a ton of usage, but part of that was because it was in the paid plan. Is this something you care about? If not, why?

https://redd.it/11su2gg
@r_devops
is anyone using garden.io for Kubernetes development?

Hi, just wanted to ask if any of you have experience with introducing Garden.io at your company for Kubernetes development. Did it help with providing a better developer experience, speeding things up, or improving overall developer satisfaction? And why did you introduce it in the first place?

Would appreciate any insights on garden.io. Thanks.

https://redd.it/11sydxp
@r_devops
Elasticsearch upgrade dilemma

I am performing a rolling upgrade of our Elasticsearch cluster from 7.6.2 to 7.16.2; there are 3 nodes in total. The coordinator node is already updated. Should I leave the value of cluster.initial_master_nodes empty in elasticsearch.yml while upgrading, or keep the values?

https://redd.it/11tqyk9
@r_devops
Folks on my team never want to have a "white boarding" session to review stories that I pick up...?

Some stories I pick up I'd like to dive into a bit deeper with my colleagues before I start development. In the past, folks would always be willing to jump on a call or meet in a room and start mocking out a potential architecture for automation/CICD... normal, *right*?

Every person on this new team I'm on requires a PR before they'll review/consult on the work increment. This seems counterintuitive, as the development has already been through multiple phases by then.

Yeah, we can do this in grooming, but that's typically 10-15 minutes per story, and you're not sure that particular story will even be assigned to you.

Is this normal practice?

https://redd.it/11ts7mo
@r_devops
How do you handle subnet reservation?

Looking for a better way to handle subnet allocation rather than relying on a spreadsheet. Would like it to auto-update if possible.


At the moment we have subnets reserved for failovers, but there is no record of them in the Azure portal, as they don't exist until the point of deployment. Right now this is handled through the traditional spreadsheet, which sucks.


So, who has a better way?

https://redd.it/11tqrv1
@r_devops
Guys/girls, I need your help!

So, I am doing an internship at this company after graduating. I applied for a mobile development position. I had really good mentors and everything was going according to plan. Long story short: there have been some fuckups in the company and now I have to decide between DevOps and data science.
So I need advice from you guys about what I can expect. What is it like working as a DevOps engineer/data scientist? What do you like/dislike? How stressful/hard/fun is it, etc.?
Excuse my broken English, as it is my second language.
I'm posting this question in both communities (feel free to redirect me somewhere else if you think it would suit my question better).
Thanks everyone in advance!

https://redd.it/11tvz5o
@r_devops
Master EKS Clusters, Terraform & ArgoCD with this Comprehensive DevOps Tutorial!

Hey, DevOps enthusiasts! 👋

I recently created an in-depth tutorial covering the entire process of creating and managing an EKS cluster using Terraform modules and installing ArgoCD on it. I wanted to share it with you all, as I believe it can be a valuable resource for those looking to enhance their DevOps skills.

In this tutorial, you'll learn:

* How to set up an EKS cluster with Terraform modules
* Best practices for managing your infrastructure
* Installing and configuring ArgoCD for seamless deployment
* And finally, how to properly destroy the cluster once you're done

Whether you're new to DevOps or an experienced pro, I'm confident that you'll find this tutorial useful and informative!

🎥 Check out the video here: https://youtu.be/zgNs2xz1eLk

I'd love to hear your thoughts, feedback, or any questions you might have. Let's discuss and learn from each other!

Happy learning! 🚀

https://redd.it/11tx9gs
@r_devops
SonarCloud and golang code in Azure

Hi, I'm new to devops and Azure. I'm trying to use SonarCloud on a Golang project in Azure. I already installed the extension and added the "Prepare analysis configuration", "Run code analysis", and "Publish quality gate result" tasks to the CI pipeline. The CI runs fine, but when I look at the results in SonarCloud, apparently only the Golang code is not analyzed. The summary shows 0 bugs, code smells, vulnerabilities, and security hotspots, and under Code I can see just the Dockerfile, the pipeline YAML, and the manifests folder.

I tried installing Golang during the CI pipeline, but it didn't work.
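In case it helps anyone with the same setup, the prepare step can pin down what the scanner picks up via extraProperties; a sketch with hypothetical connection, organization, and project keys (exact input names may vary by task version):

```yaml
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'sonarcloud-connection'   # hypothetical service connection
    organization: 'my-org'                # hypothetical
    scannerMode: 'CLI'
    configMode: 'manual'
    cliProjectKey: 'my-project'           # hypothetical
    extraProperties: |
      sonar.sources=.
      sonar.exclusions=**/vendor/**
      sonar.tests=.
      sonar.test.inclusions=**/*_test.go
```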

https://redd.it/11twr6w
@r_devops
March 29, Free Talk on the Future of DevOps with Sasha Rosenbaum, Principal at Ergonautic

March 29 at 12 pm ET (17:00 UTC), join Sasha Rosenbaum, principal at Ergonautic, for the ACM TechTalk "Future of DevOps." Andrew Clay Shafer of Ergonautic will moderate.

The term DevOps first appeared in 2009, and since then has been used to describe a cultural shift, an engineering job title, and many products in the Continuous Integration and Continuous Delivery space. In this session, Sasha will talk through the brief history of DevOps as a methodology, a set of technical skills, and an umbrella of technologies, and then dive into what the next 5 to 10 years are likely to look like in the DevOps space.

Register to attend this talk live or on demand.

https://redd.it/11tzpfj
@r_devops
What are some of your favorite projects to support on GitHub?

Hey y'all, happy Friday. I'm interested in discovering what kinds of projects devops professionals like to support. Maybe it's an open-source project led by a global team, or maybe it's one person's passion project to improve accessibility to K8s. Are there any you support on a regular basis, either through contributing or through sponsorships? Thanks for entertaining my curiosity!

https://redd.it/11u1jlf
@r_devops
Any tips on how to run auto scaling self-hosted GitLab runners well?

Especially if you are using AWS and EKS, but other CSPs are fine too.

https://redd.it/11tstqc
@r_devops