EU SysEleven: has anyone worked with it?
hey devops people,
I may start working at a company that will transition from AWS & Azure to SysEleven, a German open-source provider that offers managed Kubernetes solutions. The decision has already been made; it's just a matter of implementing it now.
Has anybody worked with SysEleven? What's the vibe? What were some pain points during the transition? Any opinions and feedback from your work with it are welcome.
https://redd.it/1je1nen
@r_devops
What's the best starting point for devops?
Hi there, I started self-learning IT a couple of months ago. I am fascinated by the DevOps world, but I know it is not an entry-level position. I already looked at the roadmap, so I know that many skills like Linux, scripting, etc. are required to get to that point, and it will surely take some years. In the meantime, is it better to start working as a developer or as a helpdesk/sysadmin? Which one would be more helpful for a future in DevOps?
https://redd.it/1je17vs
@r_devops
DevOps job prospects, EU
For someone who is fluent in the host nation's language and has 5+ years of experience with AWS, Azure, etc., how is the job market looking in Germany/the Netherlands/Belgium for cybersecurity roles at present? Is there much demand?
https://redd.it/1je33y5
@r_devops
List of YouTube channels about DevOps and Cloud
I am working on a GitHub repository where I will collect references to YouTube channels that teach about DevOps and everything related to the cloud. This way, we build a bank of video content that is valuable to the community.
In principle, the idea is to provide channels in English and also in Spanish. So, I ask you to please post interesting channels, either in English or Spanish.
In the repository you can do a PR, but I will also be doing my part by posting channels that I think share value. Let's make this post a hub for your favorite DevOps and Cloud channels. You can also contribute new ideas.
The repository is as follows: https://github.com/jersonmartinez/DevOps-YouTube-Channels
https://redd.it/1je77sk
@r_devops
What dev prod metrics are folks actually using?
I've been thinking a lot about how we measure developer productivity and experience (DevEx) at work. There are the classic DORA and SPACE frameworks, but in reality, it often feels like leadership latches onto things like PR count or velocity, which don't always tell the full story. I was traditionally a big DORA fan myself, but I know they all have drawbacks, and metrics alone never paint the full picture (though feel free to prove me wrong).
In my experience, the most useful metrics are the ones that help identify blockers and improve flow efficiency—things like time-to-first-feedback or time spent waiting on dependencies. But I’d love to hear from others:
* What dev productivity or DevEx metrics does your team actually track?
* Are they useful, or do they feel like vanity metrics?
* Have they led to any tangible changes in how your team works?
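For concreteness, something like time-to-first-feedback can be computed straight from PR timestamps. A minimal sketch with hypothetical data (the records and function name are made up, not from any particular tool):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical PR records: (opened_at, first_review_at)
prs = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 11, 30)),
    (datetime(2025, 3, 2, 14, 0), datetime(2025, 3, 3, 9, 15)),
    (datetime(2025, 3, 3, 10, 0), datetime(2025, 3, 3, 10, 45)),
]

def time_to_first_feedback(records):
    """Median wait between opening a PR and receiving its first review."""
    waits = [review - opened for opened, review in records]
    return median(waits)

print(time_to_first_feedback(prs))  # → 2:30:00
```

Using the median rather than the mean keeps one PR that sat overnight from dominating the number.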
I recently came across [this article](https://thenewstack.io/let-productivity-metrics-and-devex-drive-each-other/) that argues productivity metrics should be used to improve DevEx, not just measure output. But I also kind of think DevEx is an overly buzzy term and doesn't mean much anymore. IDK.
Curious what DevProd metrics your team tracks/makes you follow. :)
https://redd.it/1jea42t
@r_devops
Ports "seems" to be not exposed
Hi folks, I'm setting up a devcontainer to work on Salesforce development.
One of the required CLI tools (sf cli) needs access to port 1717 while authorizing the connection with the orgs.
When I try to authorize, the process in the terminal hangs, waiting for the callback from the server.
I used `EXPOSE` in my devcontainer Dockerfile and `forwardPorts` in devcontainer.json, but it still doesn't work. I noticed in Docker Desktop that port 1717 doesn't show up as exposed, even with all the aforementioned settings in place.
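For reference, a minimal devcontainer.json sketch (property names are from the Dev Containers spec; the port is from this post, the label is made up). Note that the spec property is `forwardPorts` — a misspelled key is silently ignored — and that `appPort` is what publishes the port at the Docker level, which is what Docker Desktop reports as exposed; `forwardPorts` alone is editor-level forwarding:

```json
{
  "name": "salesforce-dev",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [1717],
  "appPort": [1717],
  "portsAttributes": {
    "1717": { "label": "sf cli OAuth callback" }
  }
}
```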
Does anyone have any suggestions?
https://redd.it/1jedc7u
@r_devops
Active Directory
What's a good quick and dirty way to learn about AD and LDAP. I support a product that works with AD but my knowledge is piss poor and need to ramp up.
https://redd.it/1jefcoz
@r_devops
How to Debug a Node.js Microservice in Kubernetes
Sharing a guide on debugging a Node.js microservice running in a Kubernetes environment. In a nutshell, it shows how to run your service locally while still accessing live cluster resources and context, so you can test and debug without deploying.
https://metalbear.co/guides/how-to-debug-a-nodejs-microservice/
https://redd.it/1jehohw
@r_devops
How is Artifactory search so useless?
I literally copy the repository path verbatim and paste it into the search bar, and it can't find it?? What the actual fuck is it searching? How is it possible to make a search this bad?
https://redd.it/1jei3g0
@r_devops
DevOps security architecture
Here is an example of what a secure DevOps architecture diagram can look like when integrating the right tools and following the principles that optimize DevOps implementation in your infrastructure.
https://www.clickittech.com/devops/devops-architecture/#h-devops-architecture-diagram-example
https://redd.it/1jeirc3
@r_devops
Mobile app for phone-sized screen for viewing traces?
Is there a mobile app for "small screens" (phone sized) for viewing traces?
I have been using OTel tracing in all of my recent projects and don't even need logging anymore - because traces have richer semantics and are easier to "navigate".
I would love to be able to check things "on the go". I already send OTel traces to GCP's Cloud Trace and to AWS X-Ray, so a mobile-first frontend for either of those would work. Mobile-friendly frontends for any other tracing backend are welcome too!
Something like https://github.com/ymtdzzz/otel-tui but for mobile would work as well - I can self-host the backend part.
Thanks!
https://redd.it/1jemv6l
@r_devops
[CFP] Call for Papers – IEEE JCC 2025
Dear Researchers,
We are pleased to announce the **16th** **IEEE International Conference on Cloud Computing and Services (JCC 2025)**, which will be held from **July 21-24, 2025**, in **Tucson, Arizona, United States**.
IEEE JCC 2025 is a leading conference focused on the latest developments in cloud computing and services. This conference offers an excellent platform for researchers, practitioners, and industry experts to exchange ideas and share innovative research on cloud technologies, cloud-based applications, and services. We invite high-quality paper submissions on the following topics (but not limited to):
* AI/ML in joint-cloud environments
* AI/ML for Distributed Systems
* Cloud Service Models and Architectures
* Cloud Security and Privacy
* Cloud-based Internet of Things (IoT)
* Data Analytics and Machine Learning in the Cloud
* Cloud Infrastructure and Virtualization
* Cloud Management and Automation
* Cloud Computing for Edge Computing and 5G
* Industry Applications and Case Studies in Cloud Computing
**Paper Submission:**
Please submit your papers via the following link: [https://easychair.org/conferences/?conf=jcc2025](https://easychair.org/conferences/?conf=jcc2025)
**Important Dates:**
* **Paper Submission Deadline:** March 21, 2025
* **Author Notification:** May 8, 2025
* **Final Paper Submission (Camera-ready):** May 18, 2025
For additional details, visit the conference website: [https://conf.researchr.org/track/cisose-2025/jcc-2025](https://conf.researchr.org/track/cisose-2025/jcc-2025)
We look forward to your submissions and valuable contributions to the field of cloud computing and services.
Best regards,
Steering Committee, CISOSE 2025
https://redd.it/1jem54t
@r_devops
Call for Papers – IEEE SOSE 2025
Dear Researchers,
I am pleased to invite you to submit your research to the **19th IEEE International Conference on Service-Oriented System Engineering (SOSE 2025)**, to be held from **July 21-24, 2025**, in **Tucson, Arizona, United States**.
IEEE SOSE 2025 provides a leading international forum for researchers, practitioners, and industry experts to present and discuss cutting-edge research on service-oriented system engineering, microservices, AI-driven services, and cloud computing. The conference aims to advance the development of service-oriented computing, architectures, and applications in various domains.
# Topics of Interest Include (but are not limited to):
* Service-Oriented Architectures (SOA) & Microservices
* AI-Driven Service Computing
* Service Engineering for Cloud, Edge, and IoT
* Blockchain for Service Computing
* Security, Privacy, and Trust in Service-Oriented Systems
* DevOps & Continuous Deployment in SOSE
* Digital Twins & Cyber-Physical Systems
* Industry Applications and Real-World Case Studies
# Paper Submission: [https://easychair.org/conferences/?conf=sose2025](https://easychair.org/conferences/?conf=sose2025)
# Important Dates:
* **Paper Submission Deadline:** **April 15, 2025**
* **Author Notification:** **May 15, 2025**
* **Final Paper Submission (Camera-ready):** **May 22, 2025**
For more details, visit the conference website:
[https://conf.researchr.org/track/cisose-2025/sose-2025](https://conf.researchr.org/track/cisose-2025/sose-2025)
We look forward to your contributions and participation in IEEE SOSE 2025!
Best regards,
Steering Committee, CISOSE 2025
https://redd.it/1jeoqaq
@r_devops
🤹♀️ multipr - Make the same change in many GitHub repos!
Announcing multipr: create pull requests "en masse" 🚀🚀🚀
https://github.com/fredrikaverpil/multipr
https://redd.it/1jepw74
@r_devops
GCP DevOps [REMOTE] [INDIA] [FULL TIME]
# Cloud Engineer
Experience: 2 to 4 years of experience
**Requirements**
* Extensive Linux experience; comfortable with both Debian and Red Hat.
* Experience architecting, deploying/developing software, or internet scale production-grade cloud solutions in virtualized environments, such as Google Cloud Platform or other public clouds.
* Experience refactoring monolithic applications to microservices, APIs, and/or serverless models.
* Good Understanding of OSS and managed SQL and NoSQL Databases.
* Coding knowledge in one or more scripting languages - Python, NodeJS, bash etc and 1 programming language preferably Go.
* Experience in containerisation technology - Kubernetes, Docker
* Experience in the following or similar technologies- GKE, API Management tools like API Gateway, Service Mesh technologies like Istio, Serverless technologies like Cloud Run, Cloud functions, Lambda etc.
* Build pipeline (CI) tools experience, both design and implementation, preferably using Google Cloud Build but open to other tools like CircleCI, GitLab and Jenkins.
* Experience in any of the Continuous Delivery tools (CD) preferably Google Cloud Deploy but open to other tools like ArgoCD, Spinnaker.
* Automation experience using any of the IaC tools preferably Terraform with Google Provider.
* Expertise in Monitoring & Logging tools preferably Google Cloud Monitoring & Logging but open to other tools like Prometheus/Grafana, Datadog, NewRelic
* Consult with clients in automation and migration strategy and execution
* Must have experience working with version control tools such as Bitbucket, Github/Gitlab
* Must have good communication skills
* Strongly goal oriented individual with a continuous drive to learn and grow
* Emanates ownership, accountability and integrity
**Responsibilities**
* Support seniors on at least 2 to 3 customer projects and handle customer communication in coordination with product owners and project managers.
* Support seniors on creating well-informed, in-depth cloud strategy and manage its adaptation process.
* Initiative to create solutions, always find improvements and offer assistance when needed without being asked.
* Takes ownership of projects, processes, domain and people and holds themselves accountable to achieve successful results.
* Understands their area of work and shares their knowledge frequently with their teammates.
* Given an introduction to the context in which a task fits, design and complete a medium to large sized task independently.
* Review colleagues' tasks and ensure they conform to the task requirements and best practices.
* Troubleshoot incidents, identify root cause, fix and document problems, and implement preventive measures and solve issues before they affect business productivity.
* Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design.
* Managing cloud environments in accordance with company security guidelines.
* Define and document best practices and strategies regarding application deployment and infrastructure maintenance.
https://redd.it/1jept2e
@r_devops
Salary inquiry
Hello folks,
I am currently searching for DevOps opportunities; I have over 3 years of experience. I am seeing a few openings at EPAM for DevOps Engineer at the A2 level. I just wanted to know what salary I can expect for this profile in India.
https://redd.it/1jeqtg2
@r_devops
Configurable deployment targets
How to deploy an app to multiple environments so that each env can run a different version of the application?
Here’s a short list of requirements:
1) app has to be deployed, meaning it's either a web app, or e.g. a backend service like an API
2) it should be possible to deploy the app to multiple different environments/targets (like staging, production, test, etc.)
3) every environment can run a different version of the app
I’ve brainstormed several options here: https://www.toolongautomated.com/posts/2024/one-branch-to-rule-them-all-1.html#req-3-4-configurable-deployment-targets but would be grateful for more perspectives. Is anything I mentioned your go-to option, or do you think the listed ones are a strong no-go? If so, please share why, and what you’d do instead.
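One common pattern for requirement 3 — sketched here with hypothetical registry, app, and tag names — is to pin each environment to its own version in a small manifest that the deploy step reads:

```python
# Hypothetical per-environment version pinning: each target declares which
# app version (here, a container image tag) it should run.
ENVIRONMENTS = {
    "test": {"image_tag": "1.4.0-rc1"},
    "staging": {"image_tag": "1.3.2"},
    "production": {"image_tag": "1.3.1"},
}

def image_for(env: str, repo: str = "registry.example.com/myapp") -> str:
    """Resolve the full image reference to deploy for a given environment."""
    tag = ENVIRONMENTS[env]["image_tag"]
    return f"{repo}:{tag}"

print(image_for("staging"))  # → registry.example.com/myapp:1.3.2
```

The same idea works whether the manifest is Python, YAML values files, or per-environment overlay directories: one build artifact, many pinned targets.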
https://redd.it/1jettb5
@r_devops
Transition To DevOps
Hi fam, I am a data analyst with 2 years of work experience, and I am planning to transition into the DevOps domain. What challenges will I face when applying for full-time jobs, given that my prior experience is from a different domain?
PS: I am in the Indian job market.
Please feel free to drop your suggestion or tips that might help me.
Thank you so much:)
https://redd.it/1jetkdz
@r_devops
JFrog Artifactory alternatives in 2025
Hi,
I've seen this question a few times in the group, but I guess it will be interesting to hear new ideas in 2025.
So I see that licensing for Artifactory Pro X is going to increase by around 50%, and I don't really like negotiating with them. I actually pay the same price for a test instance as for a prod instance (I need a test instance for regulatory reasons, but it isn't actually doing anything beyond holding some GB of test artifacts).
If I want an HA design, I need to move to Enterprise: 3 servers in each environment. That's actually a crazy idea.
My needs (and probably the majority's) are a binary registry, proxy registry, containers, OCI, etc., plus RBAC with SAML/OIDC.
I have been looking into Nexus and a new tool called ProGet. I could also get a cheap OSS tool for binaries plus Harbor (I'm more concerned about HA for containers).
https://redd.it/1jeuuo9
@r_devops
CloudFormation template validation in NeoVim
I write a lot of CloudFormation at my job (press `F` to pay respects) and I use NeoVim (btw).
While the YAML language server and my Schema Store integration do a great job of letting me know if I've totally botched something, I really like knowing that my template will validate, and I really hate how long the AWS CLI command to do so is. So I wrote a `:Validate` user command and figured I'd share it in case anybody else is in the same boat.
vim.api.nvim_create_user_command("Validate", function()
  local file = vim.fn.expand("%") -- Get the current file path
  if file == "" then
    vim.notify("No file name detected.", vim.log.levels.ERROR)
    return
  end
  -- shellescape() guards against spaces and special characters in the path
  vim.cmd("!aws cloudformation validate-template --template-body file://" .. vim.fn.shellescape(file))
end, { desc = "Use the AWS CLI to validate the current buffer as a CloudFormation template" })
As I write this, it occurs to me that a `pre-commit` Git hook would also be a good idea.
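A minimal sketch of that pre-commit idea, assuming staged `.yml`/`.yaml` files are all CloudFormation templates (tighten the filter to match your repo's layout); the file paths and the `aws cloudformation validate-template` invocation mirror the snippet above:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: save as .git/hooks/pre-commit and chmod +x."""
import subprocess
import sys

def is_template(path):
    # Assumption: every staged YAML file is a CloudFormation template.
    return path.endswith((".yml", ".yaml"))

def staged_files():
    # Files added, copied, or modified in the index.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def main():
    failed = False
    for path in filter(is_template, staged_files()):
        result = subprocess.run(
            ["aws", "cloudformation", "validate-template",
             "--template-body", f"file://{path}"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print(f"{path}: {result.stderr.strip()}", file=sys.stderr)
            failed = True
    sys.exit(1 if failed else 0)

# When installed as a hook, call main() here; a non-zero exit blocks the commit.
```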
I hope somebody else finds this useful.
https://redd.it/1jez6eg
@r_devops
Staging database - What is the best approach?
I have a staging environment and a production environment. I want to populate the staging environment with data, but I am uncertain what data to use, also with regard to security/privacy best practices.
Regarding staging, I came across answers, such as this, stating that a staging environment should essentially mirror a production environment, including the database.
>[...] You should also make sure the complete environments are as similar as possible, and stay that way. This obviously includes the DB. I normally setup a sync either daily or hourly (depending on how often I am building the site or app) to maintain the DB, and will often run this as part of the build process.
From my understanding, this person implies they copy their production database to staging. I've seen answers how to copy a production database to staging, but what confuses me is that none of the answers raise questions about security. When I looked elsewhere, I saw entire threads concerned about data masking and anonymization.
>(Person A) I am getting old. But there used to be these guys called DBAs. They will clone the prod DB and run SQL scripts that they maintain to mask/sanitise/transpose data, even cut down size by deleting data (e.g. 10m rows to 10k rows) and then instantiate a new non-prod DB.
>(Person B) Back in the days, DBA team dumped production data, into the qa or stage and then CorpSec ran some kind of tool (don't remember the name but was an Oracle one) that anonymized the data. [...]
However, there are also replies that imply one shouldn't use production data to begin with.
>(Person C) Use/create synthetic datasets.
>(Person D) Totally agree, production data is production data, and truly anonymizing it or randomizing it is hard. It only takes one slip-up to get into problems.
>(Person E) Well it's quite simple, really. Production PII data should never leave the production account.
So, it seems like there are the following approaches.
1. 1:1 copy production to staging without anonymization.
2. 1:1 copy production to staging with anonymization.
3. Create synthetic data to populate your staging database.
Since I store sensitive data, such as account data (e-mail, hashed password) and personal information that isn't accessible to other users, I assume option 3 is best for me to avoid any issues I may run into in the future (?).
What option would you consider best, assuming you were to host a service which stores sensitive information and allows users to spend real money on it? And what approach do established companies usually use?
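Option 3 can be as simple as a small generator script. A minimal sketch, assuming an illustrative schema of `email`, `password_hash`, and `full_name` (not anyone's real table layout), using only the standard library:

```python
"""Generate synthetic account rows instead of copying production data."""
import hashlib
import os
import random
import string

FIRST = ["alice", "bob", "carol", "dave", "erin"]
LAST = ["smith", "jones", "garcia", "chen", "kumar"]

def synthetic_user(rng):
    first, last = rng.choice(FIRST), rng.choice(LAST)
    email = f"{first}.{last}{rng.randrange(1000)}@example.com"
    # Hash a throwaway password with a random salt; no real credential is involved.
    salt = os.urandom(16)
    pw = "".join(rng.choices(string.ascii_letters, k=12))
    password_hash = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000).hex()
    return {
        "email": email,
        "password_hash": password_hash,
        "full_name": f"{first.title()} {last.title()}",
    }

def synthetic_users(n, seed=42):
    rng = random.Random(seed)  # seeded so staging data is reproducible across rebuilds
    return [synthetic_user(rng) for _ in range(n)]
```

Since nothing here ever touches production, there is no anonymization step to get wrong; the trade-off is that synthetic data won't reproduce the shape and skew of real data unless you model that deliberately.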
https://redd.it/1jezs2f
@r_devops