A Blockchain ETL and Efficient Data Pipeline Management Webinar
Blockchain ETL presents unique challenges for DevOps teams managing data pipelines. This webinar explores practical solutions and best practices for handling blockchain data at scale.
Webinar: Optimizing DevOps for Blockchain ETL Pipelines
Date: August 8th, 12 PM EDT
Topics:
1. Blockchain data architecture for high-throughput systems
2. Containerization and orchestration strategies for blockchain nodes
3. Monitoring and alerting for blockchain-specific metrics
4. CI/CD pipelines for blockchain data services
5. Live demo: Real-time blockchain data synchronization and indexing
Speakers:
Andrei Terentiev, CTO of [Bitcoin.com](https://Bitcoin.com)
Seb Melendez, ETL Software Engineer at Artemis
Key takeaways:
Strategies for maintaining data consistency across distributed ledgers
Performance tuning for blockchain data ingestion and processing
Security considerations in blockchain data pipelines
Q&A session addressing DevOps-specific blockchain challenges
Target audience: DevOps engineers, SREs, and technical leads working with blockchain infrastructure
Registration: Webinar Registration Link
https://redd.it/1ekusu0
@r_devops
RESUME REVIEW
Hello Everyone,
I need some feedback on my resume. I created it with a specific focus on achievements and improvements at the product/business level.
In particular, I need serious suggestions for point number 3 under the work experience section. I want to highlight my achievement of adding KEDA to the entire data warehouse pipeline, which significantly improved data processing efficiency. However, I'm struggling with how to word this effectively as an achievement in 2 lines to match the theme of the overall resume.
If you have any suggestions, please share them as they will help me a lot.
Thanks!
https://imgur.com/a/ec9Gptt
https://redd.it/1ekzo6c
New boss says I should be OK with being on call every other week
Had an interesting conversation with my new boss today that I'd love to get some perspective on. I work on a two-person DevOps team supporting an application used in a critical role by some fairly large players in the transportation industry. This application has SLAs with associated financial penalties, and to be honest, I think our customers expect that we have more invested in our operational capabilities than we actually do, considering how little revenue we make a year from the whole thing.
Currently, a junior engineer and I split an on-call rotation that I set up 'voluntarily'. Previously, our alerts were just coming in via email or SNS, which obviously wasn't effective, and since we had no easy way to get phone alerts, I set up a free PagerDuty account. Thus began our 26 weeks each of 'official' on-call a year, for which I am the escalation point, so functionally speaking I've been on call 24/7/365 for the last few years. This has led to some pretty great uptime compared to what things looked like previously, but I never had a formal conversation about what should be expected of me regarding on-call.
This past Saturday, we had an issue where a pet reporting service (Jasper Reporting Server, biggest pain in the ass ever, I do not recommend it) that had recently been updated to a new version became unresponsive due to a thread issue, and unfortunately it was not detected before a support ticket was raised. My co-worker wasn't available when support contacted her, and I was out for a walk without my phone, so users were unable to generate reports for about 3 hours until I got back home.
This incident prompted a retrospective today, where I raised the point that we need an incident response strategy for these types of situations, because it is unreasonable to expect two people to split an on-call rotation like this while telling our transportation customers that we take incident response seriously. I personally want to open up the on-call rotation to the development team as well and roll out some runbook automation for common tasks (such as restarting a service, although my boss was incredulous that I'd have to train people to do this). I can still be an escalation point, but I don't need to, and cannot, be on call 24/7.
My boss responded with what I perceived to be a kind of shitty comment: two people managed the DevOps program at his previous job, and being on call, even every other week or all the time, isn't that big of a deal. It felt shitty because the way it was said implied that we're lesser than the two people he worked with previously, that being lesser engineers is why we have more operational issues, and that the only reason we don't like on-call is our own shortcomings. There was a lot to unpack in that statement, especially given that I am on a team with a non-existent tooling budget, but whatever; I won't get sour over one difficult talk after what was basically an undetected service outage.
However, I don't agree with his position that being on call every other week is acceptable. Having to plan to keep a laptop with me is non-trivial, and the stress of knowing I could get an alert while out at dinner adds up, even if there aren't 'that many' alerts. I'm curious what other people's thoughts are on frequent on-call for small teams.
It's probably time for me to move on (I've wasted too much time not learning Kubernetes already), but I wasn't sure if I was overreacting to his position on on-call because of the perceived slight.
TL;DR: Is expecting someone to take an on-call rotation every other week reasonable, given that they're on a two-person team with one person being significantly more junior?
*edit* we are not compensated for on call hours worked outside of our yearly salaries
https://redd.it/1el1bfq
Flyway with Jenkins
Has anybody here tried this stack before? How was your experience? Does anyone have a use case I can use as a reference? We're currently trying out Flyway to see if we can adopt it in our dev environment and whether we should get the subscription. Any insight is appreciated, thanks!
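As a sketch of how the pieces typically fit together (the stage name, credentials ID, and JDBC URL below are placeholders, not from any specific setup), Flyway is usually just a CLI call from a pipeline stage, with the DB credentials pulled from the Jenkins credentials store:

```groovy
stage('DB Migration') {
  steps {
    withCredentials([usernamePassword(credentialsId: 'db-creds',
                                      usernameVariable: 'DB_USER',
                                      passwordVariable: 'DB_PASS')]) {
      // Single quotes keep Groovy from interpolating the secrets;
      // the shell expands them at run time instead.
      sh 'flyway -url=jdbc:postgresql://db:5432/app -user=$DB_USER -password=$DB_PASS migrate'
    }
  }
}
```

Because it is just a CLI invocation, the same approach ports to any CI system that can run a shell step.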
https://redd.it/1el21aa
Configure ec2 in Github Actions workflow via SSH or use Ansible?
Working on a GitHub Actions workflow, part of which deploys an AWS EC2 instance via Terraform. To configure the EC2 instance for a Node.js application, I could theoretically SSH in or remotely run commands on the instance from the workflow, but is there an advantage to running an Ansible playbook from the Actions workflow instead? One reason that may favor Ansible: it increases the modularity of the pipeline, meaning I could more easily port it to another workflow or even another CI/CD platform (Jenkins, etc.), as the Ansible playbook is agnostic to the CI/CD platform on which it runs. Any other thoughts?
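As a rough sketch of the Ansible route (the inventory and playbook paths are assumptions), the workflow step reduces to a single command, which is exactly what makes it portable across CI platforms:

```yaml
configure-ec2:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install Ansible
      run: pip install ansible
    - name: Configure instance
      run: ansible-playbook -i inventory/ec2_hosts playbooks/node_app.yml
      env:
        ANSIBLE_HOST_KEY_CHECKING: "false"
```

You would still need to handle SSH key material (e.g. loading a private key from a repository secret before the playbook runs), which applies equally to the plain-SSH approach.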
https://redd.it/1el1ryf
Careers after DevOps - experience or suggestions?
It's an awful economy, and "DevOps Engineer" covers a stupidly wide range of roles that are almost impossible to fulfill. So what are good exit careers after DevOps?
obviously development (if your programming skills are up to scratch)
what else?
https://redd.it/1elav9p
How OpenAI Scaled Kubernetes to 7,500 Nodes by Removing One Plugin
Hi everyone. I recently read an article about how OpenAI scaled Kubernetes to 7,500 nodes.
There was a lot of information in there but I thought the most important part was how they replaced Flannel with Azure CNI.
So I spent a lot of hours doing a bit more research into the specifics and here are my takeaways:
• Flannel is a Container Network Interface (CNI) plugin that handles pod-to-pod communication between nodes
• Flannel works well for smaller clusters, but it was not designed for thousands of nodes
• Flannel's performance degraded as the node count grew, due to overhead like route table creation and traffic routing
• OpenAI already hosted its infrastructure on Azure and used the Azure Kubernetes Service (AKS)
• They switched from Flannel to Azure CNI, which is specifically designed for AKS
• Azure CNI is different from Flannel in several ways which made it a better solution for OpenAI
• The switch to Azure CNI ended up making pod-to-pod communication a lot faster
Okay, this is a super basic summary, but if you want a more detailed explanation with nice visuals, check out the full article.
https://redd.it/1eld525
What Python Frameworks do you use?
I was using the search feature and was surprised not to see a question about this. What frameworks should you learn as a DevOps engineer / what modules do you use? I know for a fact that everyone should at least learn to import csv, or even Flask / FastAPI.
What do you all use / think everyone should know how to use even on a basic level?
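For what it's worth, a lot of day-to-day DevOps glue needs nothing beyond the standard library. A minimal sketch of the kind of csv work mentioned above (the report contents are made up):

```python
import csv
import io

# Parse an in-memory CSV status report and pick out the failing services.
raw = "service,status\napi,up\nworker,down\n"
rows = list(csv.DictReader(io.StringIO(raw)))
down = [r["service"] for r in rows if r["status"] == "down"]
print(down)  # ['worker']
```

The same DictReader pattern works on real files handed to `open()`, which covers most one-off reporting scripts before a framework is ever needed.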
https://redd.it/1elgr21
Pull request branch auto-pull on target branch update
I haven't done so much DevOps in my life and need some advice on an issue I am facing. I didn't find something close to what I needed, either I missed it or didn't know how to phrase my question.
In my team, we tend to have 15-20+ open pull requests at a time, and it's quite bothersome: when one gets merged, the TL refuses to review anything else until the remaining PRs are up to date with the target branch.
As you can imagine it gets annoying, and because the issue couldn't be solved by having them review a PR anyway, even if it's a couple of commits behind, I thought I would solve it technically.
Here is what I could stitch together as a CI/CD step:
```yaml
update_branches:
  stage: update-branches
  script:
    - git fetch --all
    - TARGET_BRANCH=$(git branch --contains "$CI_COMMIT_SHA" | sed -n 's/^\* //p')
    - |
      for branch in $(git branch -r | grep -v '\->' | grep -v "$TARGET_BRANCH" | sed 's/ *origin\///'); do
        git checkout "$branch"
        if git merge "origin/$TARGET_BRANCH"; then
          git push origin "$branch"
        else
          echo "Merge conflict in $branch. Resolve conflicts manually."
        fi
      done
```
I would love any advice. Please tell me if this is bad practice, how I could approach it another way, what other options I have, etc.
https://redd.it/1eligfi
Blue/Green on Internal Service Microservice
Hi all, for those running a microservices environment who are able to perform blue/green deployments on an individual microservice basis: how exactly are you achieving this when performing blue/green on an API service that is consumed only by another microservice (and has no front end)?
Suppose the following traffic flow in AWS.
Client desktop browser -> ALB -> microservice_1 -> ALB -> microservice_2 -> ALB -> microservice_3
Suppose I wanted to perform blue/green on microservice_2. I create another target group (blue) for microservice_2 and keep traffic pointing to green. I now have the ability to hit microservice_2-blue directly from some other machine and run a suite of smoke tests. That said, I'd also want to validate the end-to-end flow from the client desktop to microservice_3, via microservice_2-blue.
I imagine this would require some mechanism like an HTTP cookie (use_microservice_2_dark) that each of the intermediary services would have to pass through each hop, but I might be overthinking it.
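One way to sketch that idea (the header name below is hypothetical): each intermediary service copies a routing header from its inbound request onto its outbound calls, so a single test request stays pinned to the blue target group across every hop:

```python
# Hypothetical routing header; every hop must propagate it downstream so
# the load balancer in front of microservice_2 can route the request to blue.
ROUTING_HEADER = "X-Use-Microservice-2-Blue"

def forward_headers(inbound_headers: dict) -> dict:
    """Build headers for the outbound call, propagating the blue/green flag."""
    outbound = {"Content-Type": "application/json"}
    if ROUTING_HEADER in inbound_headers:
        outbound[ROUTING_HEADER] = inbound_headers[ROUTING_HEADER]
    return outbound
```

On the AWS side, ALB listener rules support HTTP header conditions, so a rule matching that header can forward to the blue target group while everything else stays on green.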
Has anyone come across this particular pattern before?
Thanks!
https://redd.it/1elke7d
TechWorld with Nana DevOps Bootcamp vs KodeKloud Bootcamp
Hey everyone!
I know this has been asked before, but I wanted to know if anyone has had any recent experience taking the DevOps Bootcamp from TechWorld with Nana, or doing the DevOps / SRE learning path from KodeKloud?
I’m fortunate to have a learning budget at my company, so I’m not necessarily looking for the cheapest option vs finding the best fit in terms of learning material and practical experience. If anyone has other options as well or recommendations I’m happy to hear those as well!
https://redd.it/1elm09m
Missed call from AWS HR
I attended a 5 round technical interview with AWS recently. I got a call from HR today to tell me the results of the interview, but I missed it.
If anybody in this subreddit has experience with Amazon HR, please let me know what you think might happen.
I have been driving myself crazy thinking about the possibilities.
1. Either they reject me, but this call is a courtesy where they go in detail as to how much I suck
2. Or they tell me I have cleared the technical rounds and now have to go through HR round
3. Or they just tell me I have cleared and wanna work out the logistics.
What do you guys think?
https://redd.it/1elmjsf
Best side hustle/side job for a DevOps engineer?
Hey everyone,
I'm a DevOps engineer with about 3.5 years of experience working at a Fortune 500 company in the US. I mostly deal with Infrastructure as Code, pipelines, GitHub Actions, and some Python scripting—basically a mix of sys admin and coding/automation.
I have a decent salary and a great work-life balance, which gives me some extra time to explore side hustles. Earlier this year, I started teaching an online computer science class. It brings in an extra $1000 a month and takes about 9 hours a week, mostly grading assignments and helping students.
I'm looking for more ways to make some extra cash on the side without committing to another full-time job. Ideally, something that only takes a few hours a week and uses my cloud engineering, programming, or DevOps skills. I also get the occasional consulting gig through AlphaSights, but that's rare.
Any suggestions for side gigs or income streams that fit this criteria? I’d love to hear your ideas or experiences. Thanks!
https://redd.it/1elp1pw
DB access and all night pings
My devops team is based in the US but about half of our engineers are in Serbia and India. We currently have no plans to add devops headcount at our international sites. As a result overnight pages are extremely common and on call is pretty brutal for us right now. The WORST part is it’s usually minor issues that the dev could fix on their own, but they don’t have access to our prod DBs, etc. so they can’t do anything until we come online.
I’m looking into ways to give them self-serve access to specific tables outside of normal working hours (it needs to be auditable, and table-level scoping is a must due to compliance requirements). My wife, who wakes up every time I get paged, will be extremely grateful for any recs.
https://redd.it/1elp829
Adding subfolders in Artifactory Repository Tree while deploying
I am trying to add a subfolder to a repository tree and cannot find a way to do it. I've tried appending to the path name before adding the file I want to deploy, but nothing seems to help.
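One sketch that may help (the host, repo, artifact, and credentials below are placeholders): when deploying over Artifactory's REST API, missing intermediate folders are generally created automatically if you include them in the full target path of the upload URL.

```shell
# Placeholders: replace host, repo, file, and credentials with your own.
REPO_URL="https://artifactory.example.com/artifactory/my-repo"
TARGET_PATH="team/app/1.0.0/app.jar"

# Deploying to the full path creates team/, app/, and 1.0.0/ if they
# don't exist yet. Uncomment with real credentials to run:
# curl -u "$USER:$PASS" -T app.jar "$REPO_URL/$TARGET_PATH"
echo "$REPO_URL/$TARGET_PATH"
```

The same applies when deploying through the UI or JFrog CLI: specifying the subfolder as part of the target path is usually all that is needed.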
https://redd.it/1elo3sc
What are the proper ways to deploy to production?
I want learning paths for deploying apps to production, with develop/staging environments too.
I'm trying out this one, for example:
A GitHub project with a Dockerfile; a GitHub workflow builds the Dockerfile, pushes the image to GHCR, and then that build is picked up and deployed by Railway.
I can make it work fine with the environment vars, but the secrets are giving me a hard time. I think that if someone gets the Docker image they can, in theory, see the secrets, and then they're no longer secrets, right? Should I copy/create the secrets folder in the Docker image during the build process (Docker or GitHub workflow)?
/run/secrets/api_key
/run/secrets/password
The deployment log (Railway timestamp links stripped):

```
> node index.js
Server is running on port https://localhost:6684/
Environment Variables:
APP_VERSION: 1.0.1
BUILD_ENV: development
NODE_ENV: production
PORT: 6684
Error: ENOENT: no such file or directory, open '/run/secrets/db_password'
```
name: CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm install

      # - name: Build the app
      #   run: npm run build

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64
          push: true
          tags: ghcr.io/${{ github.repository_owner }}/simple-web-server:latest
          build-args: |
            APP_VERSION=${{ env.APP_VERSION }}
            BUILD_ENV=${{ env.BUILD_ENV }}
          secrets: |
            db_password=${{ secrets.DB_PASSWORD }}
            api_key=${{ secrets.API_KEY }}
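A hedged note on the workflow's `secrets:` input: docker/build-push-action forwards those values as BuildKit secrets, which are only visible while a `RUN --mount=type=secret` instruction executes and are never written into an image layer. For the Dockerfile to actually use them at build time, it would need something like this sketch (the check command is illustrative, and the build fails if the secret was not passed):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20
WORKDIR /app
COPY . .
RUN npm install

# The secret file exists ONLY during this RUN step, at BuildKit's default
# mount point; it leaves no trace in the final image's layers.
RUN --mount=type=secret,id=db_password \
    sh -c 'test -s /run/secrets/db_password && echo "db_password present at build time"'

CMD ["npm", "start"]
```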
# Stage 1: Build the application
FROM node:20 AS builder
# Set build-time arguments
ARG APP_VERSION
ARG BUILD_ENV
# Log build-time arguments
RUN echo "Building with APP_VERSION=${APP_VERSION} and BUILD_ENV=${BUILD_ENV}"
# Set environment variables
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the application code to the container
COPY package*.json ./
COPY . .
# Install dependencies
RUN npm install
# Stage 2: Run the application
FROM node:20
# Redeclare build args: ARGs do not carry across stages, so without these
# the APP_VERSION/BUILD_ENV values below would be empty at runtime
ARG APP_VERSION
ARG BUILD_ENV
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the build artifacts from the builder stage
COPY --from=builder /app ./
# Log environment variables
RUN echo "Running with APP_VERSION=${APP_VERSION}, BUILD_ENV=${BUILD_ENV}, NODE_ENV=${NODE_ENV}, and PORT=${PORT}"
# Expose the application port
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
This is the one I use locally; it works fine because I copy the secrets in:
Dockerfile.local
# Stage 1: Build the application
FROM node:20 AS builder
# Set build-time arguments
ARG APP_VERSION
ARG BUILD_ENV
# Log build-time arguments
RUN echo "Building with APP_VERSION=${APP_VERSION} and BUILD_ENV=${BUILD_ENV}"
# Set environment variables
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the application code to the container
COPY package*.json ./
COPY . .
# Install dependencies
RUN npm install
# Stage 2: Run the application
FROM node:20
# Redeclare build args: ARGs do not carry across stages, so without these
# the APP_VERSION/BUILD_ENV values below would be empty at runtime
ARG APP_VERSION
ARG BUILD_ENV
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
ENV APP_VERSION=${APP_VERSION}
ENV BUILD_ENV=${BUILD_ENV}
# Create and change to the app directory
WORKDIR /app
# Copy the build artifacts from the builder stage
COPY --from=builder /app ./
# Log environment variables
RUN echo "Running with APP_VERSION=${APP_VERSION}, BUILD_ENV=${BUILD_ENV}, NODE_ENV=${NODE_ENV}, and PORT=${PORT}"
# Create Environment Variables in GitHub Actions:
# Go to your GitHub repository.
#
# Click on Settings > Secrets and Variables > Actions.
#
# Add the following variables:
#
# APP_VERSION
# BUILD_ENV
#
# Add the following secrets:
#
# DB_PASSWORD
# API_KEY
# Copy secrets for local testing
COPY secrets/db_password /run/secrets/db_password
COPY secrets/api_key /run/secrets/api_key
# Expose the application port
EXPOSE 3000
# Start the application
CMD ["npm", "start"]
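For the local case, an alternative to COPYing secret files into the image (a COPY persists in a layer even if the file is deleted later) is mounting them at container start with Compose. A sketch, assuming the same ./secrets/ files as above:

```yaml
# docker-compose.yml: secrets are mounted at runtime, not baked into the image
services:
  web:
    build: .
    ports:
      - "3000:3000"
    secrets:
      - db_password
      - api_key

secrets:
  db_password:
    file: ./secrets/db_password   # appears in the container at /run/secrets/db_password
  api_key:
    file: ./secrets/api_key
```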
https://redd.it/1elt24o
@r_devops
What tools are there to manage autoscaling for kafka?
I'm familiar with Cruise Control, but I wonder what options are out there and which are the most popular. Are they fully automatic, or do they require some level of continuous manual work?
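Worth separating two layers here: Cruise Control rebalances and right-sizes the brokers themselves, while tools like KEDA autoscale Kafka consumers on consumer-group lag, and both still tend to need human tuning of thresholds. As one concrete (hedged) example, a KEDA ScaledObject using its Kafka lag scaler might look like this; the deployment, group, and topic names are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    name: my-consumer-deployment   # Deployment to scale (placeholder)
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092
        consumerGroup: my-group
        topic: my-topic
        lagThreshold: "50"         # scale out when lag per partition exceeds this
```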
https://redd.it/1eluf1q
@r_devops
Thinking about getting an extra job
I currently work for a company, and lately, the demand has been quite low to the point where I'm convinced I can handle another job.
However, I’m still not sure about the approach I should take when applying for another position. I’ve done some interviews where I mentioned that I was already working and wanted a second job, but that didn't go very well, haha.
My contract doesn’t have an exclusivity clause, but I wouldn't want them to know that I work somewhere else. I know some companies do a reference check and might end up contacting my current employer.
Any tips on how to proceed? Should I lie about being employed? Tell the truth?
https://redd.it/1elv30q
@r_devops
Challenges with CI/CD permissions management in OSS project: GitHub action.
Hi all :)
We have an OSS project under an open-source organization, and I've hit a challenge with our CI/CD workflows that I'd like some insight on.
The project is on GitHub: a multilingual client library for Valkey/Redis OSS.
We are a team working for one of the big cloud companies, mainly dedicated to this project (it is not owned by the company; it is fully open source).
Most of the workflows are simple and can run on a regular GitHub-hosted runner.
But some of our CI tests interact with our company's service, to cover large-scale cases and to verify that the project also works when the server is the cloud-hosted version.
The issue is that, to interact with the service safely, we have to keep the keys in repo secrets, and those are only available to the main repo.
Maintainers don't work on the main repo but on their forks, opening PRs from forks to the main repo, so their PRs have no access to the secrets and CI can't run the full test suite.
It's an OSS project, so we need a way to keep the secrets safe while still making them available to CI triggered from a maintainer's fork, after approval from one of the organization (Valkey) members.
Any ideas, suggestions, or insights?
Maybe somebody even wants to join the community and help us with DevOps challenges? :P
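One pattern several OSS projects use for exactly this (a hedged sketch, not the only option): run the secret-dependent job on `pull_request_target`, which executes in the base repository's context where secrets are available, but gate it behind a maintainer-applied label so untrusted fork code never runs with secrets before review. The script and secret names below are placeholders:

```yaml
name: cloud-integration-tests
on:
  pull_request_target:
    types: [labeled]

jobs:
  cloud-tests:
    # Only runs after a maintainer has reviewed the diff and applied the label
    if: github.event.label.name == 'safe-to-test'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Explicitly check out the fork's code; this is why review-before-label matters
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./run-cloud-tests.sh          # placeholder test script
        env:
          SERVICE_KEY: ${{ secrets.SERVICE_KEY }}
```

GitHub environments with required reviewers are another common gate for the same problem.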
https://redd.it/1elwmy3
@r_devops
Cypher for Kubernetes API: An expressive new way to work with k8s
Hey everybody 👋
I created this tool six months ago and it's been a daily driver for me since.
It lets me use a syntax similar to Cypher (Neo4j's query language, which I adore) to perform CRUD operations on K8s.
My main use is examining resources; crafting custom JSON payloads with data from multiple resource kinds is a breeze.
This is an alpha release, and while Cyphernetes has been in real-world use by me and a handful of other folks, test thoroughly before performing create/update/delete operations in production.
https://redd.it/1elvb3v
@r_devops
GitHub: AvitalTamir/cyphernetes, a Kubernetes Query Language