Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Webinar - Implementing DevSecOps for Intelligent Security

As software development continues to evolve, integrating security into every stage of the process is no longer optional—it's essential. In this webinar "Implementing DevSecOps for Intelligent Security", we will explore how to build secure software while ensuring intelligent decision-making in the development process. Register Now

https://redd.it/1g9nlo7
@r_devops
Need Schema Help: Fun with The Bitcoin Chain

I'm diving into a personal project as a learning experience and could really use some guidance from more experienced minds. I’m a full-stack developer, but my experience leans heavily toward middle/front-end development. Now I’m dealing with a massive dataset that’s forcing me to rethink some of my usual "brute force" methods, which just aren't cutting it here.

**The Situation:** I have ~800GB of raw Bitcoin blockchain data that I need to ingest into a PostgreSQL database in a way that’s usable for analytics (locally).

**Hardware Setup:**

* CPU: Ryzen 7700x (AIO cooled)
* Storage: 2TB SSD
* RAM: 32GB (might be a limitation)
* No GPU (yet)
* OS: Ubuntu Server

I know this setup is a bit overkill for just running a full Bitcoin node, but I'm concerned it might be underpowered for the larger-scale analytics and ingestion tasks I’m tackling.

**What I've Done So Far:**

* I’ve stood up a Bitcoin full node and fully synced the blockchain.
* Built a basic local PostgreSQL structure with tables for `blocks`, `transactions`, `inputs`, `outputs`, `UTXO`, and `addresses`.
* Created a Python ingest script using `bitcoinrpc` to process the blockchain data into these tables.

**The Challenge:** Initially, the script processed the first ~300k blocks (pre-2015) pretty quickly, but now it’s crawling, taking about 5-10 seconds to process each block, whereas before it was handling hundreds per second.

I still have ~1.2TB of space left after the sync, so storage shouldn’t be the issue. I suspect that as my tables grow (especially `transactions`), PostgreSQL is becoming a bottleneck. My theory is that every insert operation is checking the entire table structure to prevent conflicts, and the `ON CONFLICT DO NOTHING` clause I’m using is severely slowing things down.

At this rate, processing the full dataset could take months, which is clearly not sustainable.

**Questions:**

* Is there a better approach to handling large datasets like this in PostgreSQL, or should I consider another database solution?
* Are there strategies I can use to speed up the ingestion process without risking data integrity?
* Is there a more efficient way to handle conflict resolution for such large tables, or is my approach inherently flawed?
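
On the ingestion question, one approach worth testing is switching from per-row `INSERT ... ON CONFLICT` to PostgreSQL's `COPY`. A minimal sketch of the buffer-building half in pure Python (the `psycopg2` call and the `transactions` column names in the note below are assumptions for illustration, not the poster's actual schema):

```python
import io

def rows_to_copy_buffer(rows):
    """Serialize rows (tuples) into the tab-separated text format accepted
    by PostgreSQL's COPY ... FROM STDIN. COPY skips the per-row planning
    and conflict-checking overhead of INSERT ... ON CONFLICT, which on
    bulk loads is often the difference between hours and months."""
    buf = io.StringIO()
    for row in rows:
        # \N is COPY's text representation of NULL
        buf.write("\t".join(r"\N" if v is None else str(v) for v in row))
        buf.write("\n")
    buf.seek(0)
    return buf
```

With psycopg2, a batch would then be streamed in one call, e.g. `cur.copy_from(rows_to_copy_buffer(batch), "transactions", columns=("txid", "block_height", "fee"))` (hypothetical columns). Deduplication can then happen once per batch (load into an `UNLOGGED` staging table and `INSERT ... SELECT ... ON CONFLICT DO NOTHING` from there) instead of once per row; dropping indexes during the load and recreating them afterwards usually helps too.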

Ultimately, I want to use this data for visualizing blockchain trends, changes over time, and price/scarcity models.

Any advice or insights would be greatly appreciated. Thanks in advance!


Edit: structure and typos fixed...

https://redd.it/1g9lr0r
@r_devops
GitHub actions cost monitoring/optimizations

Hi,

Recently, I’ve started thinking about the costs associated with GitHub Actions and have noticed some issues. Has anyone found GitHub Actions costs difficult to manage as their projects scale? How are you optimizing or controlling these expenses?

Information seems to be quite limited, and I’m considering building a simple tool, but perhaps there are already tools available on the market?
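
One starting point: GitHub exposes per-run billable time via its "Get workflow run usage" endpoint (`GET /repos/{owner}/{repo}/actions/runs/{run_id}/timing`). A minimal sketch of turning that payload into a cost estimate (the per-minute rates below are assumptions for illustration; check GitHub's current pricing page):

```python
# Hypothetical per-minute rates for hosted runners; verify against
# GitHub's current pricing before relying on these numbers.
RATES_PER_MINUTE = {"UBUNTU": 0.008, "WINDOWS": 0.016, "MACOS": 0.08}

def estimate_cost(billable):
    """Estimate the dollar cost of one workflow run from the 'billable'
    object returned by GitHub's 'Get workflow run usage' endpoint."""
    total = 0.0
    for os_name, usage in billable.items():
        minutes = usage.get("total_ms", 0) / 60000
        total += minutes * RATES_PER_MINUTE.get(os_name, 0.0)
    return round(total, 4)
```

Iterating this over recent runs per repository gives a rough cost breakdown by workflow, which is often enough to spot the expensive jobs (long matrix builds, macOS runners) before building anything fancier.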

Thanks in advance!

https://redd.it/1g9r3gw
@r_devops
KodeKloud exam lend


Hi, I'm a student currently preparing for the Certified Kubernetes Administrator exam, but I can't afford a KodeKloud subscription to access their mock exam series. I was wondering if anyone would be willing to lend me their KodeKloud account for a short time so I can practice with the mock exams and gauge my readiness. Your help would really mean a lot to me! Thanks in advance!

https://redd.it/1g9u59u
@r_devops
DevOps panic

I'm in my mid-twenties and work as a Junior SysAdmin/Technical Lead for a support team that specializes in niche Microsoft technologies. I started in the IT world less than three years ago and have been job-hopping every year for new challenges. I've taken a couple of courses, but that's it—no bachelor's degree or anything. I haven't even created a Kubernetes lab or opened up the interface.

Recently, I entered the interview process for a DevOps position at my company (a large firm that works with different clients around the world). At first, I didn't think my application would even be considered, but then, on Monday, out of the blue, I had a phone screening. The interviewer liked me, and that same night, he told me I’d have a more technical interview the next day.

I studied that night and the morning before the interview, but it was a lot to process in such a short time: Kubernetes, Git, methodologies, Docker, etc. Honestly, I thought I was going to show up and get humbled by my lack of experience (if any) in DevOps.

When Tuesday arrived, I was a bundle of nerves, not wanting to make a fool of myself. Somehow, I passed the interview, and they want another technical interview on Wednesday or Friday at most. They provided me with some topics to study for the client screening, and that's it.

Now, fear sets in as they told me there would be no peers to ask for help, no documentation, and no training or ramp-up. I would be on my own, and I need to create SOPs, manuals, guides, processes, and document everything to "pave the way for future peers."

One friend who works on a DevSecOps team told me that this is a really great opportunity, and he thinks I'll make it (though he warned me that it will be a really difficult, soul-crushing, and just plain hard process until I get a grip on everything).

I'm really scared. What if I mess up and can't make the cut on my own? At the same time, I think this situation could force me to learn and move forward, but I'm just really fucking scared. But who the fuck with no bachelor's, less than three years of experience, and some cookie-cutter certs would get a chance like this?

I want this; it's a great chance at a better life and a job I think I'll love, but I'm just scared of failure.

Any tips on how to study for the interview? This will be the hardest one. Sorry for the long text; it's been crazy, at least for me.

https://redd.it/1g9vsv1
@r_devops
Freelancer client acquisition methods

So, for all of you freelancers: what is the most important client acquisition method a freelancer must master?

https://redd.it/1g9wcql
@r_devops
VSCode with cfn-lint and cfnnag on Windows

Can anyone give me a fairly decent step-by-step for installing cfn-lint and cfn-nag for use on Windows with VSCode and applicable extensions?


I found what appeared to be straightforward steps for using WSL to install both via Ubuntu; however, the steps were outdated and did not work (from what I gather, due to recent Ubuntu security updates). From there I found numerous workarounds to get each to install, but then neither seemed to function properly with the appropriate extensions in Code.


At this point, given all the various things I had to attempt, I simply removed everything (VSCode extensions, installed packages, Ubuntu, and even WSL itself) and am starting fresh with hopes of getting something working fairly easily.


TIA for any advice you can provide. Appreciated.

https://redd.it/1g9z8so
@r_devops
How should I approach logging when load testing?

I'm working on setting up a Locust + Prometheus + Grafana stack for a client. Until now I had only written unit tests, so I dealt with failed requests on a singular, manual basis. I am therefore unaccustomed to handling the potential failure of thousands or tens of thousands of requests. (I'm a full-stack dev repurposed into DevOps.)

I'd appreciate it if you could answer a couple of questions:

1. How should I catch a failed request? Right now I am adding event hooks in Locust and logging the response body when debugging why a certain request fails. I have a feeling this isn't scalable, yet I lack a clear path toward a better solution.
2. Should I even try to "catch" or "debug" failed requests? I feel I am approaching load testing with a project-development bias (i.e. print statements or unit tests) and maybe need to change how I think about debugging when doing load testing and telemetry.
3. Should I persist the results of Locust runs in any shape or form, or is that better delegated to a Prometheus exporter that scrapes Locust?
4. What should be the "shelf life" of the load testing outputs (i.e. logs, performance data, failed-request CSVs, etc.)? Should everything I produce live forever?
5. After I get a tool/service working in my client's environment, when do I begin automation? DevOps has many tedious tasks (which is why almost everything is automated), but I want to avoid spending time over-engineering too soon.
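
On question 1, one common pattern is to aggregate failures by endpoint and error signature rather than logging every response body. A minimal, Locust-agnostic sketch (in Locust you would call `record()` from an `@events.request` listener; the class and method names here are hypothetical):

```python
from collections import Counter

class FailureAggregator:
    """Aggregate failures by (endpoint, status) instead of logging every
    response body; at tens of thousands of requests, counts plus a few
    sampled bodies per failure signature are usually enough to debug."""

    def __init__(self, samples_per_key=3):
        self.counts = Counter()
        self.samples = {}
        self.samples_per_key = samples_per_key

    def record(self, name, status, body=""):
        key = (name, status)
        self.counts[key] += 1
        bucket = self.samples.setdefault(key, [])
        if len(bucket) < self.samples_per_key:
            # keep only the first few bodies per signature
            bucket.append(body)
```

This also partly answers question 2: at load-test scale you debug from distributions and signatures, not from individual requests, and only drop down to single-request detail via the sampled bodies.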

https://redd.it/1ga2mve
@r_devops
Most Critical Issue in Current Project and how you are dealing with it?

I am new to the DevOps role and am currently panicking.

https://redd.it/1ga3oc3
@r_devops
how can you tell ansible-pull has done anything?

You don't receive any centralized feedback that it ran successfully. How can you be sure it really ran and that your machines are compliant?
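
One common workaround is to wrap the ansible-pull cron job so each host reports its own result somewhere central. A minimal sketch (where the record is shipped is up to you: a log shipper, a Prometheus Pushgateway, an S3 bucket, etc.):

```python
import socket
import subprocess
import time

def run_and_report(cmd):
    """Run a command (e.g. the ansible-pull invocation from cron) and
    build a status record to ship to a central collector."""
    started = time.time()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "host": socket.gethostname(),
        "rc": proc.returncode,
        "ok": proc.returncode == 0,
        "duration_s": round(time.time() - started, 2),
        "stderr_tail": proc.stderr[-500:],
    }
```

Alerting then becomes "which hosts have not reported ok recently", which also catches machines where the cron job never fired at all.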

https://redd.it/1ga2d7p
@r_devops
How do you guys use PowerShell remoting?

I have been working with PowerShell for more than 10 years. When it became open source and cross-platform, I started managing Linux (and, at home, my macOS) with PowerShell.

I was wondering how the devops community is using PowerShell remoting for remote management of machines.

I think it could be interesting to discuss how, and for what type (and number) of machines, this is done. To keep things clear and short, maybe each reply can include the following data:

- How many servers
- Which type of machine (Windows / Linux / macOS?)
- Which protocol is used (OpenSSH / WinRM over HTTPS, etc.)


-----------

I'll go first:

- Servers: 3000
- Mostly Windows
- WinRM over HTTP at first, then over HTTPS.

Details:

I have been managing mostly Windows machines (around 3000) with raw PowerShell remoting. We got a licence for Ansible Tower and migrated our configuration management onto that platform.

We use WinRM over HTTPS, as our machines are not always in our main Active Directory. Ansible is quite cool; for Windows management it still uses classical WinRM, and that actually works great.

Since I was evaluating OpenSSH as an alternative (PowerShell supports it now), I took a lot of notes on PowerShell in general. I kept coming back to the question: is PowerShell remoting actually secure? (A lot of people would say it is not, for some obscure reason...)

I have summarized my notes and answered most of the general questions (how to configure it, how it works, whether it is secure, etc.) in the following video -> https://www.youtube.com/watch?v=sg_9r0PHnnM

https://redd.it/1ga5oxa
@r_devops
Image Extraction Issue with WMF Format on Linux - Need Help Converting to PNG for OCR

Hi, everyone. I’ve built an app that processes PPT uploads by extracting text and images from the slides. The app also performs OCR on the images and saves them. It works perfectly in my development environment (Windows), but I hit a snag when I try to run it on an AWS Ubuntu instance.

The issue is that when images are extracted from the PPT on Linux, they are in WMF format, and the system can't work with these for further preprocessing (like OCR). This doesn't happen on Windows. I need to convert the extracted WMF images into PNG before preprocessing, but I haven’t found a solid solution for handling WMF files on Linux.

Has anyone dealt with this issue before? Any libraries or tools that could help with WMF-to-PNG conversion on Linux would be greatly appreciated! I appreciate any help you can provide.
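
One approach that may work is shelling out to LibreOffice, which can render WMF on Linux (Pillow's WMF support generally relies on Windows GDI). A sketch, assuming LibreOffice (`soffice`) is installed on the instance; ImageMagick built with the libwmf delegate is an alternative:

```python
import subprocess
from pathlib import Path

def build_wmf_to_png_cmd(wmf_path, out_dir):
    """Build a headless LibreOffice command converting a WMF file to PNG."""
    return [
        "soffice", "--headless",
        "--convert-to", "png",
        "--outdir", str(out_dir),
        str(wmf_path),
    ]

def convert(wmf_path, out_dir):
    """Run the conversion and return the expected output path."""
    subprocess.run(build_wmf_to_png_cmd(wmf_path, out_dir), check=True)
    return Path(out_dir) / (Path(wmf_path).stem + ".png")
```

The resulting PNGs can then go straight into the existing OCR step; batching many files into one `soffice` invocation avoids paying its startup cost per image.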

https://redd.it/1ga5wq0
@r_devops
General question regarding AWS

I am new to the DevOps world. I have an existing project in my organisation, and I want to draw the network architecture of the project: I want to visualise everything it contains in my VPC. For example, I want to know how the RDS instances are connected, plus the NACLs and security group rules (including inbound and outbound rules), all in an architectural diagram. Is there a way?

https://redd.it/1ga91hv
@r_devops
What takes too much of your time at work that could be automated and for some reason isn't?

follow up question: why haven't you automated it yet?

https://redd.it/1gaa7gj
@r_devops
Provision serverless service with Terraform or not? (Planning to use GCP Cloud Run)

Hi, I would like to deploy several services on GCP Cloud Run and am a bit unsure about the recommended way to provision them.

Should I create it through Terraform or just use the "gcloud run deploy" command?

https://redd.it/1ga8ohi
@r_devops
Asking for advice

I'm a computer science student. The job market in my country is hiring DevOps interns all the time for end-of-year internships, and I'm trying to get this opportunity since I'm really interested in a DevOps career. Can any of the tech leads or recruiters here who are actively hiring DevOps engineers give me some advice on what makes someone a good candidate for DevOps?

I studied really hard for the last two years and have good knowledge of DevOps practices and concepts. I've had plenty of hands-on experience with different concepts (GitOps, IaC, cloud) and technologies like Jenkins, GitLab, ArgoCD, Ansible, and Terraform, plus some CLI tools written in Go and Python, projects on AWS and GCP, and some software engineering internships where I got the picture of how software is built and delivered.

I am really interested in the key skills that make a difference, and in the projects you'd like to see on a resume.

I am ready to hear your feedback; if possible, I can also share my resume with you so you can roast it.

Thank you 🙏🏻

https://redd.it/1gabhh3
@r_devops
Need help with Google Oauth 2 for Argo Workflows DEX authentication using Argo CD Dex

I went through the documentation that Argo provides for adding Dex authentication using the Dex server that Argo CD ships with. It was a bit weird, with many fields in the current values.yaml in the Helm chart not matching in position or even name. I got Google's OAuth2 working on Argo CD with Dex using the default config provided in the chart's values file. The problem is that adding the same Dex auth method to Argo Workflows isn't as simple, because Argo Workflows requires a service account. So I followed the documentation to map a service account to a group, which requires reinstalling Argo Workflows. I did that, and then, instead of being asked to choose an account, I get:

Access blocked: authorisation error

Some requested scopes were invalid. {valid=[openid], invalid=[groups]}
Error 400: invalid_scope

Does anyone here know how to implement Argo CD Dex authentication on the Argo server used by Argo Workflows?
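
The `invalid=[groups]` part suggests the client is requesting a `groups` scope that Google's OAuth endpoint does not accept directly. Dex's Google connector can supply groups, but only via a Workspace service account with domain-wide delegation. A hedged sketch of the connector config (field names from Dex's Google connector documentation; all values here are hypothetical):

```yaml
connectors:
  - type: google
    id: google
    name: Google
    config:
      clientID: $GOOGLE_CLIENT_ID
      clientSecret: $GOOGLE_CLIENT_SECRET
      redirectURI: https://argocd.example.com/api/dex/callback
      # Groups require a Workspace service account with domain-wide
      # delegation. Without these two fields, drop the "groups" scope
      # on the Argo Workflows side and request only openid/profile/email.
      serviceAccountFilePath: /etc/dex/google-sa.json
      adminEmail: admin@example.com
```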

https://redd.it/1gaanw3
@r_devops
Request for features: OneUptime, an open-source observability platform

We're building an open-source observability platform, OneUptime (https://oneuptime.com). Think of it as your open-source alternative to Datadog, New Relic, PagerDuty, and Incident.io: 100% FOSS and Apache licensed.

Already using OneUptime? Huge thanks! We’d love to hear your feedback.

Not on board yet? We’re curious why and eager to know how we can better serve your needs. What features would you like to see implemented? We listen to this community very closely and will ship updates for you all.

Looking forward to hearing your thoughts and feedback!

https://redd.it/1gag4vx
@r_devops
Avoiding unexpected overages

For those managing multiple APIs, how do you keep track of usage and avoid unexpected overages?

https://redd.it/1gagvwv
@r_devops