2 Months in as a DevOps engineer, need advice!
I'm a CS grad, and got a DevOps job at an enterprise company. I have basically zero experience in DevOps, but I am willing to learn. The thing is, everything is moving sooooooo slowly. The people here are on the older side (40s - 50s), with kids, and they don't seem too interested in solving stuff and moving quickly. They are pretty much there for the pay. And I get it, they are at later stages in their careers and don't have the same drive.
The thing is that as a junior with zero experience in this field, I'm learning at an excruciatingly slow pace. I ask for tasks and am given very little to do, so I have a lot of spare time and try to fill it with things to learn.
What can I do in this position? Just learn stuff on my own and change companies after a year or so? How do I go about learning new things in this field? My company is heavily into Azure DevOps; we are not on the cloud yet, but rumor is we will start migrating soon. Maybe that's something interesting I should focus on? Right now my only tasks have been fixing CI/CD pipelines and integrating some tools into our pipelines, basic stuff really.
Any advice?
https://redd.it/1g8onwi
@r_devops
Devops career path from taking Cloud Practitioner CLF-C02 and what else can I do to help improve job prospects?
Hi all,
I am looking to break into DevOps. I am stuck in a manual QA role with no career path at the company I currently work for. I've done QA testing for at least 10 years and want to branch out and do something different. I am not interested in doing QA automation for the time being.
I did see the AWS certification career paths. I am nearing completion and will be taking my exam for the CLF-C02.
What I wanted to know is: what types of jobs can I get with just the CLF-C02? Or would it be better to take more AWS certifications before I start applying for DevOps jobs, doing projects on the side to build my knowledge and experience?
It might just be me being impatient, or the toxic work environment I'm in at the moment, but I'm looking to exit my current role ASAP.
https://redd.it/1g8qd7j
@r_devops
I created a Free DevOps Learning Path using free online material (starting with youtube) – would love feedback
I’ve been working on a learning path with free DevOps courses to make it easier for people to start from scratch and progress step-by-step. It's still a work in progress, and I’d appreciate any feedback or suggestions.
If you have a go-to resource that really helped you, I’d love to hear about it. Here’s the link if you want to check it out:
https://www.alldevopscourses.com/
Would love any feedback or suggestions!
Thanks for your time!
https://redd.it/1g8tbav
@r_devops
I'm 42 and have been in tech my whole life, but my resume reflects the reality of too many layoffs and personal medical issues, and I just haven't been serious about professional growth. What can I do - certifications, classes, even cover letters - to get a job despite my experience?
https://imgur.com/a/RuDmov0
I've strongly considered just rewinding, getting a 4-year degree in CS, and going back into something like full-stack work like I used to do. I haven't been exposed to k8s or industrial AWS at scale, and I'm honestly not sure I want to get the certifications to make myself marketable, because I have an aversion to tooling and prefer programming.
I've primarily approached DevOps from a developer- and hardware-friendly angle lately, which shows in my CI work, but the jobs don't seem to be there, either because of my crap resume or the realities of the market.
https://redd.it/1g8vx8r
@r_devops
Proper Secrets Manager vs Cloud Storage Bucket with fine-grained access control, what am I missing?
Working on updating some existing Ansible automation during a slow period at work. Our team recently got access to our org's HashiVault instance(s), and we've started populating some static secrets there. Our automation can now retrieve those secrets at runtime, which makes things easier to manage than having ansible-vaulted values in our code.
One of the steps in our code needs to access an SSH key for connecting to a specific machine. We currently keep the SSH key in a GCP Storage bucket. The bucket has fine-grained access control enabled on it, basically needing to be a member of an AD Group to get access to it, (or have a GCP SA with the same permissions).
I started to move the contents of those SSH keys into HashiVault, but it got me wondering, what am I gaining by doing so?
If I'm not wrong, Google encrypts all storage bucket data at rest, access to that data is controlled by group-membership ACLs, and bucket contents are versioned.
All of the above are things HashiVault also offers.
I know Vault still seems like the proper solution, but I'm wondering if there's some obvious thing I'm missing here.
https://redd.it/1g8x0r9
@r_devops
DevOps Engineers - Are These Really Your Biggest Pain Points?
I’m doing some research to better understand the real-world pain points DevOps engineers face. I've gathered some high-level information on what I believe are common challenges in the DevOps space, and I’d love to get your feedback. Are these legit from a high level?
# Here are a few key pain points I’ve identified:
1. Performance Bottlenecks: Ensuring consistent high IOPS and ultra-low latency, especially when dealing with data-intensive workloads in cloud environments.
2. Infrastructure Complexity: Managing multi-cloud or hybrid environments without creating operational silos or increasing system complexity.
3. Scaling Automation: Automating infrastructure provisioning and scaling, while ensuring the performance keeps up with growing workloads.
4. Incident Management: Dealing with unexpected downtime and the need for systems that self-heal quickly to prevent major outages.
5. Cost Optimization: Balancing performance and cloud infrastructure costs to ensure you’re not overspending while keeping everything running smoothly.
# Does this align with your experience? How would you validate these pain points in your day-to-day operations?
Additionally, I’m curious to hear more about your personal pain points! What’s one or two real-life pain points that inhibit you from doing your job well? It could be related to infrastructure, tooling, processes, or even communication issues within your team.
Lastly, I’m also looking for feedback on the stages of the DevOps lifecycle.
Do you think these stages (planning, coding, building, testing, release, deployment, monitoring, and feedback) cover the full picture? Feel free to add any missing pieces!
https://redd.it/1g8vx30
@r_devops
Backstage as an IDP - why?
We're trying to adopt Backstage because some clients are interested in it.
So, the team and I got hands-on with it, and we really can't understand the hype. I mean, it's difficult to install, the Kubernetes plugin is bad, and while plugins are supposed to be the main feature, you have to do everything yourself, just with Node...
If that's the case, we'd really prefer to set up some Tofu + Ansible behind an API like AWX, for example...
But our clients want Backstage, so... here I am. How are you using Backstage in your organization?
https://redd.it/1g8yw66
@r_devops
What new tools are you using locally or in your environments?
I know this is asked a lot, but I'm always interested in new tools and applications, and the space is constantly evolving. Anything cool you've come across that you'd like to share?
https://redd.it/1g8y8fx
@r_devops
I'm tired of manually executing commands on VMs, is there a declarative alternative to something like Dokku?
I think I've reached a point where I'm just fed up with going through VMs and manually running commands to deploy/configure/manage multiple apps.
I'm a developer, not a DevOps engineer, and I have several large VMs with a dozen apps - 20+ SaaS tools and services, each with at least an "app" container and sqlite/postgres/clickhouse/redis with backup services alongside them.
So far I've gone through bare systemd, docker-compose, dokku, caprover, fly.io and frankly I hate most of them. Fly is a good service, but it still requires me to manually manage my services, link them from my terminal, etc.
I need something like Terraform - setup once, write configs, push to git, plan, deploy. E.g.,
- one "control service" config in a git repository that configures a system of 1, 2 or more VMs.
- per-app config that defines the services it needs, backups, etc.
git checkout, get secrets into env, run plan, run deploy, start working on the next feature. I want to forget about SSHing into VMs, manually binding my database like with Fly, restarting traefik, or binding a domain for a specific app from the CLI. I don't want to run commands or click through UIs anymore; all this information should be stored in config files, not in my head. I'm sure it can be picked up after a developer runs a simple "plan & deploy" command on the config files.
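The plan/deploy model described above boils down to diffing a declarative desired state against observed state. A minimal sketch of that reconcile step in Python (the app names and config shapes are made up for illustration, not any real tool's API):

```python
def plan(desired: dict, current: dict) -> list[str]:
    """Compute the actions needed to move current state toward desired state."""
    actions = []
    for app, cfg in desired.items():
        if app not in current:
            actions.append(f"create {app}")
        elif current[app] != cfg:
            actions.append(f"update {app}")
    # Anything running that is no longer declared gets removed.
    for app in current:
        if app not in desired:
            actions.append(f"delete {app}")
    return actions

desired = {"blog": {"image": "blog:2"}, "api": {"image": "api:1"}}
current = {"blog": {"image": "blog:1"}, "billing": {"image": "billing:1"}}
print(plan(desired, current))  # ['update blog', 'create api', 'delete billing']
```

This is essentially the loop that Terraform's plan/apply and ArgoCD's sync implement for you; k3s + ArgoCD would give you exactly this workflow, at the cost of operating Kubernetes.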
Would something like k3s + ArgoCD allow me to forget about manually executing commands?
Maybe there are simpler tools so I can avoid managing k8s?
https://redd.it/1g924e8
@r_devops
Does a Terraform certificate do anything for a beginner's resume?
I am not sure whether an Azure developer certificate or a Terraform certificate is better value for the time as a beginner. How useful will Terraform be in the future?
https://redd.it/1g90joq
@r_devops
Is PagerDuty too expensive for you? Or is setting up Grafana OnCall inconvenient?
To me, both are either expensive or inconvenient.
Context: I work for a retail company with 100+ personnel in the tech department. With PagerDuty the bill is just above the budget; yes, we have a lot of other costs too.
It doesn't make sense to use PagerDuty partially. Everyone should be on board, otherwise we won't get the full benefits.
Using tools like Grafana OnCall requires maintenance, which we can't afford given the current workload.
What do you guys use for an on-call system? What do you think about it?
Is there any cheap alternative? I just need something that can schedule and page someone if shit goes down.
Thanks in advance
https://redd.it/1g96q0c
@r_devops
How Do You Discover New Tools and Architectures in DevOps?
Hey everyone,
I’m curious about how you all stay updated on the latest tools and architectures in DevOps.
I’ve been working as a DevOps engineer for over 2 years now and have significantly improved my skills. I have hands-on experience with various tools, including Azure DevOps, Kubernetes, scripting, and managing infrastructure on GCP, Azure, and AWS.
I’ve been enhancing our current infrastructure with ideas from my seniors, such as consolidating VMs to Kubernetes, automating patching, and creating chatbots. However, I struggle to come up with innovative ideas on my own.
My seniors often introduce new tools or architectures that I’ve never heard of, and they usually turn out to be very beneficial. How do you all keep up with these advancements? Where do you find information about the latest and most effective tools and architectures in DevOps?
I’ve tried searching this subreddit but haven’t found much relevant information.
Can anyone guide me on what I’m missing or what I need to do to stay updated and bring new ideas to the table?
Thanks in advance for your help!
https://redd.it/1g9axfv
@r_devops
Changed the password of a Digital Ocean Droplet and the website crashed
Hi, I changed the root password of my Digital Ocean Droplet and the website crashed. Now I am trying rebooting and turning it on and off, but nothing is working at all. Urgent help needed.
https://redd.it/1g9gsa9
@r_devops
The "complete" Kustomize tutorial
Of course I don't think this is a 100% complete Kustomize tutorial, but it's my attempt at explaining most of the things cloud engineers should know when using Kustomize (IMHO).
Hope somebody finds it useful:
Link to blog -> https://glasskube.dev/blog/patching-with-kustomize/
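As a taste of what the tutorial area covers: Kustomize's core pattern is a base plus per-environment overlays that patch it. A minimal hypothetical layout (file and resource names below are illustrative, not taken from the linked post):

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml

# overlays/prod/kustomization.yaml
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
    target:
      kind: Deployment
      name: my-app
```

Running `kubectl kustomize overlays/prod` then renders the base with the production replica count, without duplicating the base manifests.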
https://redd.it/1g9ithz
@r_devops
So I took the Certified GitOps Associate exam, and here are my thoughts.
https://beatsinthe.cloud/blog/journeys-in-certification-certified-gitops-associate/
If you’ve been thinking of taking it, I wouldn’t advise against it. I do believe there is value in the credential and the learning you will get preparing for it.
With that being said…show you know what you just got certified in.
Hope someone finds this helpful!
https://redd.it/1g9ii1o
@r_devops
Docker Pull Command Not Blocked Despite Blocking docker.io and registry-1.docker.io – Need Advice
Hey everyone,
I’m working on a project where we’re trying to block developers from downloading artifacts from public registries, specifically DockerHub, and enforce the use of our internal JFrog Artifactory.
What we’ve done so far:
We’ve created a list of registry URLs to block (like [`https://registry-1.docker.io`](https://registry-1.docker.io) and others) and are using tools like Zscaler and Palo Alto to block traffic to these URLs.
I’m monitoring DNS traffic with `tcpdump -lvi any udp port 53` to capture DNS queries during artifact downloads, and I’ve confirmed that traffic to `docker.io` and `registry-1.docker.io` is being blocked.
Despite this, when I run the default command:
docker pull ubuntu
It still works, and I can pull the image without any issues, even though `docker.io` and `registry-1.docker.io` should be blocked.
Does anyone have any ideas why this might be happening, or something I can test to ensure the `docker pull` command gets blocked as intended?
Thanks for the help!
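One complementary control worth testing alongside the network blocks: configure the Docker daemon itself to pull Docker Hub images through the internal mirror via `/etc/docker/daemon.json` (the Artifactory URL below is hypothetical). Note that `registry-mirrors` only applies to Docker Hub images, and the daemon falls back to Docker Hub if the mirror is unreachable, so this complements rather than replaces egress blocking:

```json
{
  "registry-mirrors": ["https://artifactory.example.internal/api/docker/docker-remote"]
}
```

Restart the daemon after editing the file, then re-test `docker pull ubuntu` and watch where the traffic actually goes.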
https://redd.it/1g9lhl9
@r_devops
Deploying Infrastructure & Application using Gitops.
I have used countless tools to deploy infrastructure and applications in the past. I've tried Crossplane, AWS CDK, Terraform, Pulumi, AWS ACK, etc.
I found this article very helpful. It talks about ArgoCD + Terraform K8s Operator.
https://thejogi.medium.com/deploying-infrastructure-application-using-gitops-862fc89b6325
https://redd.it/1g9mj4t
@r_devops
Do DevOps engineers perform physical work?
Hello everyone,
I hope this doesn’t come across as a strange question, but I feel the need to ask. I’m currently working as a Network Engineer and have been feeling increasingly dissatisfied with the field. One of the major aspects I dislike is the physical work involved, such as running cables, installing racks in dusty server rooms, patching cables, etc. Not only is it unpleasant, but I have a couple of injuries that make these tasks particularly challenging.
I’m considering a career change to DevOps and was wondering if this role requires similar physical work. If so, how often would such tasks typically be expected?
I appreciate your insights. Thank you in advance!
https://redd.it/1g9mmas
@r_devops
Webinar - Implementing DevSecOps for Intelligent Security
As software development continues to evolve, integrating security into every stage of the process is no longer optional—it's essential. In this webinar "Implementing DevSecOps for Intelligent Security", we will explore how to build secure software while ensuring intelligent decision-making in the development process. Register Now
https://redd.it/1g9nlo7
@r_devops
Need Schema Help: Fun with The Bitcoin Chain
I'm diving into a personal project as a learning experience and could really use some guidance from more experienced minds. I’m a full-stack developer, but my experience leans heavily toward middle/front-end development. Now I’m dealing with a massive dataset that’s forcing me to rethink some of my usual "brute force" methods, which just aren't cutting it here.
**The Situation:** I have \~800GB of raw Bitcoin blockchain data that I need to ingest into a PostgreSQL database in a way that’s usable for analytics (locally).
**Hardware Setup:**
* CPU: Ryzen 7700x (AIO cooled)
* Storage: 2TB SSD
* RAM: 32GB (might be a limitation)
* No GPU (yet)
* OS: Ubuntu Server
I know this setup is a bit overkill for just running a full Bitcoin node, but I'm concerned it might be underpowered for the larger-scale analytics and ingestion tasks I’m tackling.
**What I've Done So Far:**
* I’ve stood up a Bitcoin full node and fully synced the blockchain.
* Built a basic local PostgreSQL structure with tables for `blocks`, `transactions`, `inputs`, `outputs`, `UTXO`, and `addresses`.
* Created a Python ingest script using `bitcoinrpc` to process the blockchain data into these tables.
**The Challenge:** Initially, the script processed the first \~300k blocks (pre-2015) pretty quickly, but now it’s crawling, taking about 5-10 seconds to process each block, whereas before, it was handling hundreds per second.
I still have ~1.2TB of space left after the sync, so storage shouldn't be the issue. I suspect that as my tables grow (especially `transactions`), PostgreSQL is becoming the bottleneck. My theory is that every insert has to check the table's unique indexes to detect conflicts, so the row-by-row `ON CONFLICT DO NOTHING` inserts I'm using get progressively slower as those indexes grow.
At this rate, processing the full dataset could take months, which is clearly not sustainable.
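For context, the usual fix for this pattern is to stop inserting row-by-row and instead accumulate rows into large batches and bulk-load them with `COPY`, deferring conflict handling to a single set-based statement (e.g., `COPY` into a staging table, then one `INSERT ... SELECT ... ON CONFLICT DO NOTHING`). A minimal sketch of the batching side, with illustrative table and column names that are not from the original post:

```python
import io
from itertools import islice

def batches(rows, size=5000):
    """Group an iterable of rows into lists of at most `size` rows."""
    it = iter(rows)
    while chunk := list(islice(it, size)):
        yield chunk

def to_copy_buffer(rows):
    """Render rows as tab-separated text suitable for COPY ... FROM STDIN."""
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    buf.seek(0)
    return buf

# With a real psycopg2 connection, each batch would then be loaded
# roughly like this (table/column names are hypothetical):
#
#   cur.copy_expert("COPY tx_stage (txid, block_height) FROM STDIN", buf)
#   cur.execute("""
#       INSERT INTO transactions (txid, block_height)
#       SELECT txid, block_height FROM tx_stage
#       ON CONFLICT (txid) DO NOTHING;
#   """)
#   cur.execute("TRUNCATE tx_stage;")
```

This turns thousands of per-row index probes into one bulk load plus one set-based conflict check per batch, which is typically orders of magnitude faster.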
**Questions:**
* Is there a better approach to handling large datasets like this in PostgreSQL, or should I consider another database solution?
* Are there strategies I can use to speed up the ingestion process without risking data integrity?
* Is there a more efficient way to handle conflict resolution for such large tables, or is my approach inherently flawed?
Ultimately, I want to use this data for visualizing blockchain trends, changes over time, and price/scarcity models.
Any advice or insights would be greatly appreciated. Thanks in advance!
Edit: structure and typos fixed...
https://redd.it/1g9lr0r
@r_devops
GitHub actions cost monitoring/optimizations
Hi,
Recently, I’ve started thinking about the costs associated with GitHub Actions and have noticed some issues. Has anyone found GitHub Actions costs difficult to manage as their projects scale? How are you optimizing or controlling these expenses?
Information seems to be quite limited, and I’m considering building a simple tool, but perhaps there are already tools available on the market?
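As a starting point before building anything, the billing model itself is simple enough to sketch: GitHub-hosted runners bill each job's minutes rounded up, with the per-minute rate depending on the runner OS. A minimal estimator, with placeholder rates (check GitHub's current pricing page for real numbers):

```python
import math

# Illustrative per-minute rates, not authoritative; GitHub's actual
# pricing varies by plan and runner size.
RATE_PER_MINUTE = {"linux": 0.008, "windows": 0.016, "macos": 0.080}

def job_cost(os_label: str, seconds: float) -> float:
    """Cost of one job: billed minutes are rounded up per job."""
    minutes = math.ceil(seconds / 60)
    return minutes * RATE_PER_MINUTE[os_label]

def workflow_cost(jobs):
    """Total cost of a run; `jobs` is an iterable of (os_label, seconds)."""
    return round(sum(job_cost(os_label, s) for os_label, s in jobs), 4)
```

Even this crude model makes the big levers obvious: macOS jobs dominate cost, and short jobs pay for a full minute, so consolidating many tiny jobs can matter as much as speeding up long ones.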
Thanks in advance!
https://redd.it/1g9r3gw
@r_devops