I saved 10+ hours of repetitive manual steps using just 4 GitHub Actions workflows
Hey, I wanted to share a small project I've been working on recently. It's called "one branch to rule them all". The part I think will be most interesting for this community is the final installment: https://www.toolongautomated.com/posts/2025/one-branch-to-rule-them-all-4.html
As part of this project, I've managed to automate multiple steps that previously had to be done manually over and over: every time a PR gets merged to trunk and, for unit tests, on every commit in the PR.
It’s part of a larger design that lets users deploy a containerized application to multiple environments like staging or production conveniently.
I’ve made everything open source on GitHub, here’s the GitHub Actions workflow piece: https://github.com/toolongautomated/tutorial-1/tree/main/.github/workflows
What do you think about it from the automation/design perspective? What would you do differently or what do you think should be added?
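For readers who haven't clicked through: the merge-to-trunk part of such a pipeline boils down to a workflow along these lines. This is an illustrative sketch, not the repo's actual file; the job names, commands, and tag scheme here are assumptions.

```yaml
# Illustrative sketch only; see the linked repo for the real workflows.
name: merge-to-trunk
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                          # unit tests
      - run: |
          docker build -t my-app:${{ github.sha }} .
          docker push my-app:${{ github.sha }}
      - run: git tag "v${{ github.run_number }}" && git push --tags
      # a deploy step (e.g. gcloud run deploy) would follow here
```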
https://redd.it/1jbajbr
@r_devops
When I say "deployments" what do you think of first?
OK, trying to get some feedback on what we call a specific feature. I have an inkling, but wanted to pulse-check with this group.
When I say "deployments" what do you think of first as it relates to your day to day work?
https://redd.it/1jbf1g9
@r_devops
GitHub Actions - Pull Requests vs Push prioritisation
Hey colleagues!
I am struggling with a small issue, but I have a feeling I am missing something obvious. I have a workflow on a specific branch and we (as a team) want to have two triggers:
* once we push something to this branch
* once the PR is merged (however, we need github.event = pull_request, as we leverage PR labels in the pipeline, so this is a crucial point for us)
It seems quite easy, we just do something like:

on:
  push:
    branches:
      - branch
  pull_request:
    types: [closed]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

(...)
But the problem occurs when the PR is merged. We have noticed that concurrency cancels one of the jobs, but sometimes the cancelled job is the one triggered by the PR and sometimes the one triggered by the push. We need only the PR job to run, not the push one.
I hope someone from the outside will look at this and tell us we are silly because we missed something obvious. :)
Thanks in advance for any comment.
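One direction to explore (a hedged, untested sketch): key the concurrency group on the target branch so the merged-PR run and the resulting push run land in the same group, then gate the jobs so push-triggered runs bail out for merge commits, leaving only the pull_request run.

```yaml
concurrency:
  # base.ref exists on pull_request events; ref_name covers plain pushes,
  # so a merged PR and the push it produces share one group.
  group: ${{ github.workflow }}-${{ github.event.pull_request.base.ref || github.ref_name }}
  cancel-in-progress: true

jobs:
  build:
    # Run for merged PRs, and for pushes that are not PR merge commits.
    if: >-
      (github.event_name == 'pull_request' && github.event.pull_request.merged == true) ||
      (github.event_name == 'push' && !startsWith(github.event.head_commit.message, 'Merge pull request'))
```

The merge-commit-message heuristic only holds for the default merge strategy; squash or rebase merges would need a different guard.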
https://redd.it/1jbfkt1
@r_devops
How do you output logs when using concurrently?
I run a prettier-check and a type-check at the same time using concurrently, but the logs don't get printed to the screen at the end when errors are found. How do you log everything, whether you're on Windows or Linux? Is there a solution for this?
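One thing worth trying (assuming a reasonably recent concurrently): the `--group` flag buffers each command's output and prints it in full when that command finishes, so nothing gets interleaved or lost, and it behaves the same on Windows and Linux since it runs through npm scripts. A sketch:

```json
{
  "scripts": {
    "check": "concurrently --group --names \"prettier,types\" \"prettier --check .\" \"tsc --noEmit\""
  }
}
```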
https://redd.it/1jbeyag
@r_devops
Automated Diagram Solution for AWS Serverless Apps
I am being assigned to build CI/CD for multiple AWS serverless applications in the coming days. Each application will have a separate repo, and each repository will be one serverless application consisting of multiple Lambdas, API Gateway, SNS, SQS, and one YAML file containing the entire infra definition. I have experience with AWS SAM for building and deploying, and we will mostly be using it for CI/CD.
I am looking for an automated diagram solution where I can feed my YAML file (or something more, if needed) to a CLI or a POST URL and it will spit out a PNG file. I know AWS CloudFormation can export an image, but I don't find it elegant or readable enough.
Has anyone fully automated this and would like to share their experience?
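If nothing off-the-shelf fits, rolling a small template-to-DOT converter is not much code. A hypothetical sketch (the function name and the edge heuristic of "one resource's Properties mention another by logical ID" are my own assumptions): it emits Graphviz DOT, which `dot -Tpng` can turn into a PNG in a CI step.

```python
import json

def template_to_dot(template: dict) -> str:
    """Render the Resources section of a parsed SAM/CloudFormation
    template as a Graphviz DOT digraph (one box per resource)."""
    resources = template.get("Resources", {})
    lines = ["digraph infra {", "  rankdir=LR;"]
    for name, spec in resources.items():
        rtype = spec.get("Type", "?")
        lines.append(f'  "{name}" [label="{name}\\n{rtype}", shape=box];')
    # Crude dependency heuristic: if resource A's properties mention
    # resource B's logical ID anywhere, draw an edge A -> B.
    for name, spec in resources.items():
        blob = json.dumps(spec.get("Properties", {}))
        for other in resources:
            if other != name and other in blob:
                lines.append(f'  "{name}" -> "{other}";')
    lines.append("}")
    return "\n".join(lines)
```

Piping the result through `dot -Tpng -o infra.png` in the pipeline would produce the image; parsing the YAML itself (including CloudFormation's `!Ref`-style tags) would need a tolerant loader such as cfn-flip or a PyYAML multi-constructor.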
https://redd.it/1jbpel3
@r_devops
Thinking about migrating from Terraform to Pulumi
I have an entire infrastructure built on Terraform with 500+ resources, and I'm thinking of migrating it to Pulumi, since it seems nicer, with the GUI on their website, and it lets you use Python to provision infrastructure.
What do you think, is it worth it?
Is the migration painful?
Thanks
https://redd.it/1jbqwxg
@r_devops
Illegal IPTV infrastructure: how do they make it happen? costs? bandwidth?
I'm wondering how illegal IPTV services manage their infrastructure. This must require a lot of bandwidth, and I bet they are not using GCP or AWS.
What do you think they use? Do they find cheap VPS options with no egress charges? Do you think they are advanced enough to run Kubernetes, Ansible automation, etc.?
I'm curious to hear your thoughts on how this works...
https://redd.it/1jbs197
@r_devops
"headless" CI / build server
Hi all!
I'm pretty new to the whole DevOps game, but I wondered if there is something like Jenkins or Drone I could host on-prem that just takes a tar-ed codebase (these will be Java projects using Gradle or Maven), runs the build task (so something like `./gradlew build`), and then uploads the artifacts to something like S3 for me?
I'd want this to be triggerable via an API, but tools like Jenkins and Drone always expect to be connected to a repo or to have a "project" attached to a build.
Because the codebases I will be building are very disconnected from each other (they may even be multi-tenant, so not every project comes from the same customer), I'd want to handle the business logic on my own.
Does anyone here know if there's something out there that would fit me here? Or even, prove me wrong and point me somewhere I could learn how to do this *using* Jenkins, or, preferably, Drone?
Thanks in advance!
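For what the core of such a service would look like, here's a hedged sketch (all names are mine, not from any real tool) of the receive-extract-build-collect loop; the API layer, auth, sandboxing, and the actual S3 upload (e.g. via boto3) are deliberately left out:

```python
import subprocess
import tarfile
import tempfile
from pathlib import Path

def run_build(tarball: Path, build_cmd: list[str],
              artifact_glob: str = "build/libs/*") -> list[Path]:
    """Extract a tar-ed codebase, run its build command, and return
    the artifact paths that a real service would then upload to S3."""
    workdir = Path(tempfile.mkdtemp(prefix="build-"))
    with tarfile.open(tarball) as tf:
        tf.extractall(workdir)  # assumes trusted input; sandbox in production
    subprocess.run(build_cmd, cwd=workdir, check=True)
    return sorted(workdir.glob(artifact_glob))
```

Exposed behind a tiny HTTP endpoint, this covers the "no repo, no project" requirement: each request is just a tarball plus a build command.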
https://redd.it/1jbru7y
@r_devops
PyPI Malicious Packages Threaten Cloud Security
Fake packages in the Python Package Index are putting cloud security at risk. Researchers have identified two malicious packages posing as 'time' utilities that, alarmingly, gained over 14,100 downloads. The packages allowed unauthorized access to sensitive cloud access tokens.
The incident highlights the pressing need for developers and DevOps teams to scrutinize package dependencies more rigorously. Given the ties these malicious packages have to popular projects, awareness and caution are crucial to avert potential exploitation.
- Over 14,100 downloads of two malicious package sets identified.
- Packages disguised as 'time' utilities exfiltrate sensitive data.
- Suspicious URLs associated with packages raise data theft concerns.
(View Details on PwnHub)
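On the "scrutinize dependencies" point, pip's hash-checking mode is one concrete, low-effort defense: with every requirement pinned to a version and a hash, a hijacked release simply fails to install. The digest below is a placeholder, not a real hash.

```
# requirements.txt
requests==2.31.0 \
    --hash=sha256:<pinned-digest>
```

Installing with `pip install --require-hashes -r requirements.txt` then refuses anything unpinned or mismatched.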
https://redd.it/1jbxxok
@r_devops
Tj-actions/changed-files GH Action is compromised.
https://www.stepsecurity.io/blog/harden-runner-detection-tj-actions-changed-files-action-is-compromised
We use this one in our workflows.
It seems like it shouldn't be a problem if your repos are private or internal.
Public repos will definitely want to determine their level of exposure.
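One mitigation worth noting for anyone auditing their exposure: reference third-party actions by full commit SHA instead of a mutable tag, so a retagged compromise can't reach you. The SHA below is a placeholder.

```yaml
steps:
  # A tag like @v45 can be moved to a malicious commit; a full SHA cannot.
  - uses: tj-actions/changed-files@<full-40-char-commit-sha>
```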
https://redd.it/1jbzdsm
@r_devops
Devops market, real situation.
Guys, I've been out of a job for a long time, doing side hustles on and off to keep up with the bills. I have a family. Long story short: recently I started upgrading my skills (Kubernetes, AWS, Python, etc.). I'm doing a lot of labs and a lot of troubleshooting along the way. But the frustration comes from my surroundings. The engineers around me, whenever we meet, try to take me down with grim stories: the market is terrible, there are no jobs, everyone sits at work scared that layoffs might happen any day. So basically they say "don't even dream about it." But I have hit rock bottom and can barely pay my bills. So I need some real perspective from you guys; I trust and believe you'll share the real story. Whenever I google DevOps jobs near me, a lot of jobs pop up, so I don't know whether that's all fake, just for statistics, or what the true situation is. I appreciate your input.
https://redd.it/1jc0xaf
@r_devops
Anyone using GKE with Windows nodes?
Hey,
I have been given the task of managing GKE clusters that have Windows nodes with a couple of containers running on them.
The main problem I'm having is cold starts. The container images are quite big and we have a spiky load: during working hours we scale up to a hundred-and-something nodes and then drop back to a dozen.
I have tried multiple approaches to improve this, but it seems that GKE supports neither custom node images nor secondary disks for image caching/streaming.
If you have any tip it would be highly appreciated.
Thanks!
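One pattern that sometimes helps here (a hedged, untested sketch; the image name and command are placeholders): a DaemonSet pinned to the Windows node pool that runs the big image as a cheap no-op, so the kubelet pulls it as soon as each node joins and real pods start from a warm cache.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
        - name: prepull
          image: your-registry/your-big-image:tag   # placeholder
          # Idle cheaply; the point is only that the image is now cached.
          command: ["cmd", "/c", "ping -t localhost > NUL"]
```

It doesn't shorten the very first pull on a fresh node, but it overlaps that pull with node startup instead of with pod scheduling.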
https://redd.it/1jc0tvh
@r_devops
Docker Login to Nexus Failing in Jenkins Pipeline (Mac)
Hey everyone,
I’m struggling with a Jenkins pipeline issue when trying to log in to Nexus using Docker. Here’s the error I’m getting:
*****************************************************************************
docker login -u admin -p ****** https://nexus:8083
WARNING! Using --password via CLI is insecure. Use --password-stdin
Error response from daemon: Get "https://nexus:8083/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
*****************************************************************************
My setup:
• OS: Mac
• Docker: Docker Desktop installed
• CI/CD tools running in Docker containers: Jenkins, SonarQube, Nexus
• Jenkins setup: Docker is installed inside the Jenkins container
• Nexus: Running as a container
• Users & Permissions: Created a group in Nexus and added my user to it
I’ve already tried:
• Running docker login manually inside the Jenkins container → Same timeout error
• Checking if Nexus is accessible (curl https://nexus:8083) → Sometimes works, sometimes times out
• Restarting Nexus & Jenkins → No change
I’ll attach some screenshots from my Jenkins logs, Nexus settings, and Docker setup.
Has anyone faced a similar issue? Could it be a networking issue with Docker? Any suggestions would be appreciated!
Thanks in advance.
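Intermittent resolution of `nexus` between containers usually points at networking rather than Nexus itself. A hedged sketch of the usual fix: put Jenkins and Nexus on the same user-defined Docker network so the service name resolves reliably (service names, images, and ports here are illustrative):

```yaml
# docker-compose.yml (illustrative)
services:
  jenkins:
    image: jenkins/jenkins:lts
    networks: [ci]
  nexus:
    image: sonatype/nexus3
    ports:
      - "8083:8083"   # Docker-repository connector port
    networks: [ci]
networks:
  ci: {}
```

It's also worth checking whether the connector on 8083 actually serves HTTPS: if Nexus exposes it as plain HTTP, `docker login https://nexus:8083` will hang or fail until the registry is listed in Docker's insecure-registries or put behind TLS.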
https://redd.it/1jc2mzw
@r_devops
# TracePerf: TypeScript-Powered Node.js Logger That Actually Shows You What's Happening
Hey devs! I just released **TracePerf** (v0.1.1), a new open-source logging and performance tracking library built with TypeScript that I created to solve real problems I was facing in production apps.
# Why I Built This
I was tired of:
* Staring at messy console logs trying to figure out what called what
* Hunting for performance bottlenecks with no clear indicators
* Switching between different logging tools for different environments
* Having to strip out debug logs for production
So I built TracePerf to solve all these problems in one lightweight package.
# What Makes TracePerf Different
Unlike Winston, Pino, or console.log:
* **Visual Execution Flow** - See exactly how functions call each other with ASCII flowcharts
* **Automatic Bottleneck Detection** - TracePerf flags slow functions with timing data
* **Works Everywhere** - Same API for Node.js backend and browser frontend (React, Next.js, etc.)
* **Zero Config to Start** - Just import and use, but highly configurable when needed
* **Smart Production Mode** - Automatically filters logs based on environment
* **Universal Module Support** - Works with both CommonJS and ESM
* **First-Class TypeScript Support** - Built with TypeScript for excellent type safety and IntelliSense
# Quick Example
// CommonJS
const tracePerf = require('traceperf');
// or ESM
// import tracePerf from 'traceperf';

function fetchData() {
  return processData();
}

function processData() {
  return calculateResults();
}

function calculateResults() {
  // Simulate work
  for (let i = 0; i < 1000000; i++) {}
  return 'done';
}

// Track the execution flow
tracePerf.track(fetchData);
This outputs a visual execution flow with timing data:
Execution Flow:
┌──────────────────────────────┐
│ fetchData │ ⏱ 5ms
└──────────────────────────────┘
│
▼
┌──────────────────────────────┐
│ processData │ ⏱ 3ms
└──────────────────────────────┘
│
▼
┌──────────────────────────────┐
│ calculateResults │ ⏱ 150ms ⚠️ SLOW
└──────────────────────────────┘
# TypeScript Example
import tracePerf from 'traceperf';
import { ITrackOptions } from 'traceperf/types';

// Define custom options with TypeScript
const options: ITrackOptions = {
  label: 'dataProcessing',
  threshold: 50, // ms
  silent: false,
};

// Function with type annotations
function processData<T>(data: T[]): T[] {
  // Processing logic
  return data.map(item => item);
}

// Track with type safety
const result = tracePerf.track(() => {
  return processData<string>(['a', 'b', 'c']);
}, options);
# React/Next.js Support
import tracePerf from 'traceperf/browser';

function MyComponent() {
  useEffect(() => {
    tracePerf.track(() => {
      // Your expensive operation
    }, { label: 'expensiveOperation' });
  }, []);
  // ...
}
# Installation
npm install traceperf
# Links
* [GitHub Repo](https://github.com/thelastbackspace/traceperf)
* [NPM Package](https://www.npmjs.com/package/traceperf)
* [Documentation](https://github.com/thelastbackspace/traceperf#readme)
# What's Next?
I'm actively working on:
* More output formats (JSON, CSV)
* Persistent logging to files
* Remote logging integrations
* Performance comparison reports
* Enhanced TypeScript types and utilities
Would love to hear your feedback and feature requests! What logging/debugging pain points do you have that TracePerf could solve?
https://redd.it/1jc4mjx
@r_devops
What do you use for CI/CD?
I use Actions, but curious what folks recommend in 2025.
https://redd.it/1jc6rtn
@r_devops
Host in Apache Web server with React
Hello! I'm currently practicing deployment on web servers and I really can't find any solution online, so I came to ask here.
I'm currently deploying a Vite React TypeScript app with TanStack routing, but I've hit a major problem.
Whenever I go to my URL (my subdomain) it works well, but when I navigate to certain routes (the app is file-routing based) I get an Internal Server Error that I really have no idea about. Here are the steps I did:
(file structure)
/SubDomain
- .htaccess
- ./dist (after build i deleted everything except .dist)
.htaccess:

RewriteEngine On

# Force redirect from HTTP to HTTPS
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Serve static files from the dist folder
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /dist/$1 [L]

# Handle SPA routing (React/TanStack Router)
# Redirect any request that isn't a file or directory to index.html
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /dist/index.html [L]

# Explicitly set DirectoryIndex to index.html
DirectoryIndex /dist/index.html
Thankss..
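For comparison, a commonly used single-pass SPA fallback (a sketch, assuming the vhost's document root is the folder containing `dist/`): serve real files as-is and send everything else straight to the built index.html.

```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /dist/index.html [L]
```

Ordering likely matters in the setup above: the `^(.*)$ /dist/$1 [L]` rule matches every non-file request first and then re-matches its own `/dist/...` output on the next pass, which can loop and produce exactly the Internal Server Error described, so the later index.html fallback is never reached.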
https://redd.it/1jcdwet
@r_devops
tj-actions/changed-files back on GitHub
After yesterday's removal, it's been brought back to GitHub.
"malicious commit has been removed from all tags and branches, and necessary measures have been implemented to prevent similar issues in the future."
https://github.com/tj-actions/changed-files
https://redd.it/1jcgsk5
@r_devops
Got into devops. Looking to connect
Looking to connect with people who are career-driven and love growth. I'd love to be in touch and learn from you.
My job consists of dual roles: DevOps + cybersecurity (CloudSec and a bit of GRC). I believe I have a once-in-a-lifetime kind of opportunity and I want to make the best of it. I just want to be surrounded by like-minded people to learn and grow. Looking forward to hearing from you.
Edit: i also intend to work on side projects to learn stuff and make myself more employable.
https://redd.it/1jci2l7
@r_devops
What should I pick as a career in devops
Hi everyone, I am 20 years old. I have worked with Java for a long time and I want to move toward DevOps. So far I have started working on shell scripting, Python for DevOps (from YouTube), and Docker. What should I do to get a good job by next year, when I will have graduated?
Your responses would help me a lot.
https://redd.it/1jcjerq
@r_devops
For all wanting to enter DevOps, here's my personal "stand out" tips
Hello all,
Do-everything developer of ~20 years who transitioned into DevOps 5 years ago reporting in. Born from the struggles with my own current team members and the vast majority of DevOps candidates we interview, I wanted to share my thoughts about the industry and the candidates we come across:
- 95% of good DevOps engineers were developers first - there are exceptions, but being a DevOps Engineer means knowing the pain your devs face and, most importantly, improving it.
- Leaping from SysAdmin => DevOps is 1000x more difficult to pull off than Dev => DevOps - not impossible, but in my experience non-developers largely do not/will not learn the fundamental good code-writing practices that all devs learn on day one.
- The number of candidates we reject each month who think "AZ101" certifications or telling me how much their Golang/Rust stack "could" scale is enough is indescribable - that skillset isn't unimportant, but if you operate in a DevOps team just playing with brand-new stacks and technologies each day and pay no attention to the business-process pain your staff base is dealing with, you won't last.
- Please, please learn the basics of computer hardware, networking (IPv4/IPv6, DNS, DHCP) outside of a cloud environment - the number of people who claim experience with these but falter as soon as it's not "in an AWS VPC" is unbelievable.
- Be hungry to learn, forever, always - if you're not one of the most technically innovative people in your company, and at least somewhat interested in tech/dev outside of work, you will fail - and you should. DevOps is not a role to coast in and milk for what it's worth.
At the risk of sounding like a bitter veteran with the above - these are just my own experiences and guidance I would give to new entrants to the industry if I could :)
Bitterness aside - if you really "give a shit" about learning and innovation, my top tips are as follows:
- Innovate and develop new strategies or approaches as a primary goal - you will come across 40-50 year old employees that are bitter about your success and innovation, give them no reasons to have a point, let your good work speak for itself.
- Don't work for any company that you would be worried about spotting a mistake and owning up to it - I'm fortunate where I work that we foster and encourage a "see it, say something" culture and do not tolerate blame culture aside from intentional negligence - you will learn the most working in this kind of environment.
- Don't be afraid to propose huge changes to 20 year old business processes - the amount of stupid bullshit companies will follow for years on end without questioning is endless - chances are if you're a DevOps Engineer and think you've found a novel solution to something, you're very probably right.
- Stay humble and keep close with any engineers/dev staff that you service or look after - these folks are your bread and butter - the second you lose touch with them, you lose your technical sway and influence - and your own sense of "what needs to be improved".
https://redd.it/1jck1r2
@r_devops
Should I learn Oracle DBA as a DevOps/Platform Engineer in 2025?
I am an entry-level DevOps Engineer who has worked at a mid-size (300+ dev) software company for almost 3 years. I mostly maintain our on-prem Proxmox cluster, K8s cluster, and monitoring/alerting (500+ VMs and WS), and do some scripting in Bash and Python. My senior colleague does the same, but additionally he is our Oracle DBA. Lately I realized that I was hired to be a substitute for my colleague, but nobody guided me in that direction. Recently a few DBA tasks have been assigned to me on the basis that I should know them, since I have worked alongside my colleague for a fairly long time. So I am thinking of taking an Oracle DBA course.
But I have a lot to learn in the DevOps/SRE area in 2025. I was planning to get a couple of certs in AWS/K8s and learn a new language like Go/Rust.
I don't know what will happen in the future. Maybe they will move the DB stuff into the cloud, or adopt a service so that no DBA is needed. Besides, if I switch companies, maybe they won't need DBA skills for the position I want to apply for, so my time spent learning DBA would be wasted. Now, should I spend time learning Oracle DBA thoroughly, or just scrape the web to get things done and focus on the rest?
https://redd.it/1jclciv
@r_devops