Sandboxing tools/best practices?
I'm curious how other developers are using API sandboxes in their workflows. Do you mostly use them for testing third-party integrations, mocking internal APIs, or something else entirely? Also, what are your biggest frustrations with existing sandbox environments?
For context, I’m researching the best sandbox tools for APIs. If you have recommendations, I'm open to them!
https://redd.it/1izrkd5
@r_devops
How are you separating iac from dev resources?
Hi all!
I'm trying to figure out the best way to decouple a Terraform monorepo from the things that devs need to interact with.
I've been bootstrapping a project and I'm finally bringing in some devs. So far I've had a frontend repo and a backend repo with my IaC and some microservices.
I have multiple Dockerized app directories that are built and deployed into ECR/ECS through a GitHub Action. Terraform handles the networking, the creation of ECR repos, service and task definitions, DBs, etc. That action can be broken up easily enough.
Once each of these Docker apps is in its own repo, it's not difficult to have an action that just handles the deployment of that container. But if devs want to make changes to CPU and memory, then I start getting into Terraform sprawl that I don't want.
Then there are Lambdas, which is where I'm having the most difficulty finding a happy medium. If there are multiple Lambdas spread out across repos for their respective projects, that becomes pretty hard to keep track of. The permissions that I create for those Lambdas through Terraform will probably end up with different state if a dev changes something along with all the other changes they make to the code. The only thing I can think of that makes this doable is giving devs ownership of the Lambdas they need to interact with, then importing the function as an existing resource from the staging/prod branches for a deployment?
The list goes on, but how do you handle breaking up resources that devs will need to alter, allowing them to develop locally and in the cloud against, say, dev-tagged resources, while still integrating those resources where needed in IaC without going on a goose chase through repos?
Maybe having smaller TF projects/modules in those repos as well, which handle changes to resources through a JSON file for CPU etc. and pull those variables in when pushed and built? Then a master IaC repo that assembles all of the repos' modules for a prod build?
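The JSON-driven idea above can be sketched in a few lines of HCL. Assuming a hypothetical `service-settings.json` kept in the app repo, devs edit only the JSON, and the module reads it at plan time, so CPU/memory changes never touch the Terraform code itself:

```hcl
# Read dev-owned settings from the app repo (filename and path are assumptions).
locals {
  settings = jsondecode(file("${path.module}/service-settings.json"))
}

resource "aws_ecs_task_definition" "app" {
  family = local.settings.name
  cpu    = local.settings.cpu    # e.g. "256"
  memory = local.settings.memory # e.g. "512"

  # ... container definitions, roles, etc. stay owned by the IaC repo ...
  container_definitions = jsonencode([])
}
```

The trade-off is that the JSON becomes an informal interface between repos, so it helps to validate it (e.g. with a `validation` block or a CI check) before a plan runs.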
Hope this makes sense, but advice on separation of concerns with unified deployment would be greatly appreciated.
Thanks!
https://redd.it/1izu7ct
@r_devops
Old tech or New tech
I did an interview and it was about tools that I had no experience with.
They were using AWS just for servers, and they had legacy monolithic applications, using Jenkins and so on.
And after the technical interview, I gave the interviewer my honest opinion about the choices they had made: running Jenkins, no IaC, no Ansible, and why they should migrate the workloads to Kubernetes.
It got me thinking, and I have a question for all of you.
Would you use old technology just because you have been doing it for years and are too lazy to learn something new, or would you spend some time learning new tools that will simplify your upcoming tasks?
That brought to mind the fact that C is one of the most used programming languages. Sure, it is, but mainly because computing power used to be something you had to think about carefully.
Would you start a new application in C? Would you trade the "efficiency" that C gives for the simplicity, speed of development, and all the new features that Go, for example, has (as a newer technology)?
Personally:
- New tech will save you a lot of time, not only in developing or working with it, but you will not spend all day debugging it.
- It might have some computational overhead, but does that really matter to most companies (except those on embedded systems)?
- I see systems or applications as a package (or container), I do not care what it has inside, all I care is what integrations it needs and what is its architecture.
P.S.: If you think "DevOps is not about tools, it's about bla bla bla", go and post it on LinkedIn; I do not want to hear it.
I would rather use a simple tool with no bugs and good documentation than a fast tool that gives me a headache and that I have to debug all day to find out what is wrong.
https://redd.it/1iztukl
@r_devops
Which department should the DevOps team report to?
We're hiring our first DevOps engineer, and my manager suggested placing DevOps under the VP of Operations instead of R&D. To me, that sounds completely bonkers. What's the common practice?
https://redd.it/1j023j0
@r_devops
Announcement: New release of the Jailer database tool has been published
[Jailer is a tool for database subsetting and relational data browsing](https://github.com/Wisser/Jailer).
It creates small slices from your database and lets you navigate through your database following the relationships. Ideal for creating small samples of test data or for local problem analysis with relevant production data.
* The Subsetter creates small slices from your database (consistent and referentially intact) as SQL (topologically sorted), DbUnit records or XML.
* The Data Browser lets you navigate through your database following the relationships (foreign key-based or user-defined) between tables.
Features
* Exports consistent and referentially intact row-sets from your productive database and imports the data into your development and test environment.
* Improves database performance by removing and archiving obsolete data without violating integrity.
* Generates topologically sorted SQL-DML, hierarchically structured XML and DbUnit datasets.
* Data Browsing. Navigate bidirectionally through the database by following foreign-key-based or user-defined relationships.
* SQL Console with code completion, syntax highlighting and database metadata visualization.
* A demo database is included with which you can get a first impression without any configuration effort.
https://redd.it/1j02y4g
@r_devops
How do you manage dependency updates?
Hey guys!
We have multiple projects at work, and we usually use Dependabot to manage package updates. However, for a time we had to pause it for various reasons.
We're now updating our packages. Some of the updates are major, the majority are minor, while a few are patches.
The thing is, it's very time-consuming to go through them all, and while Dependabot creates a PR for each update (of which we now have many), the process is still very manual.
I was wondering the following:
- Do you use dependabot, renovate or something else?
- How do you manage so many dependabot PRs?
- How have you handled breaking changes in your project due to dependency updates?
I'm curious to know how teams handle this issue or what could make the process less painful.
Thanks in advance!
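One way to make a big backlog less painful is to triage the pending updates by semver impact first, then batch the patches and minors and review the majors individually. A rough stdlib-only sketch (the package names and versions are made up):

```python
# Classify pending version bumps as major/minor/patch so patch and minor
# updates can be batched into one PR and majors get individual review.
def classify(old: str, new: str) -> str:
    o = [int(x) for x in old.split(".")[:3]]
    n = [int(x) for x in new.split(".")[:3]]
    if n[0] != o[0]:
        return "major"
    if len(o) > 1 and len(n) > 1 and n[1] != o[1]:
        return "minor"
    return "patch"

pending = {  # hypothetical backlog: name -> (current version, available version)
    "requests": ("2.28.0", "2.31.0"),
    "flask": ("2.3.2", "3.0.0"),
    "urllib3": ("1.26.15", "1.26.18"),
}

by_impact: dict[str, list[str]] = {"major": [], "minor": [], "patch": []}
for name, (old, new) in pending.items():
    by_impact[classify(old, new)].append(name)

print(by_impact)  # majors get their own PRs; minors/patches can be batched
```

(Renovate can do this grouping natively via `packageRules`; the sketch just shows the triage logic for whatever tool you use.)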
https://redd.it/1j02ka6
@r_devops
Where should I store images for my live website? (Using MongoDB, need a cost-effective solution)
Hey everyone,
I’m running a live website and need a good way to store product images. I’m using MongoDB as my database and will be uploading around 6-8 images per month (so not a massive load).
I’m also trying to figure out where to deploy both my backend and frontend while keeping costs low. Ideally, I’d like a setup where I can handle image uploads and storage efficiently.
Some questions I have:
Should I store images directly in MongoDB (GridFS) or use something like S3, Cloudinary, or Firebase Storage?
What’s a good place to deploy my backend (Node.js/Express)? Cheap options?
Same for the frontend (React) – where should I host it?
Any cost-effective ways to handle image uploads?
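At 6-8 images a month, the common pattern is object storage (e.g. S3) with only the key/URL stored in MongoDB, and presigned URLs so the browser uploads directly and your Node backend never proxies the bytes. A sketch of that flow (bucket name and key scheme are assumptions; `generate_presigned_url` is the standard boto3 call):

```python
import mimetypes
import uuid

def object_key(filename: str) -> str:
    """Collision-free S3 key for a product image; store this key in MongoDB."""
    ext = filename.rsplit(".", 1)[-1].lower()
    return f"products/{uuid.uuid4().hex}.{ext}"

def presigned_upload(bucket: str, filename: str, expires: int = 3600):
    """Return (key, URL) the browser can PUT the image to directly."""
    import boto3  # deferred import so the sketch runs without AWS credentials

    key = object_key(filename)
    url = boto3.client("s3").generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key,
                "ContentType": mimetypes.guess_type(filename)[0]},
        ExpiresIn=expires,
    )
    return key, url
```

Cloudinary or Firebase Storage work the same way conceptually; GridFS is usually only worth it if you can't add a second storage service at all.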
https://redd.it/1j04efw
@r_devops
Sonarqube token not working?
Hi - I recently found out about redcoffee, a tool that lets you generate SonarQube reports free of cost (here), but when I use it, it responds with a 401 Unauthorized error. I tried regenerating the token; it works for other things but not for redcoffee. I tried with a project token and a user token, and I'm an admin. I contacted the author of the tool, who's pretty active on Reddit, but they could not figure out why. Any ideas? Thanks!
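One way to rule out the token itself is to validate it against SonarQube directly. SonarQube's Web API accepts HTTP Basic auth with the token as the *username* and an empty password; a frequent cause of 401s in third-party clients is sending the token as the password instead. A stdlib sketch (endpoint per SonarQube's `api/authentication/validate`):

```python
import base64
import urllib.request

def sonar_auth_header(token: str) -> str:
    # Token goes in the username slot, password is empty; tools that encode
    # it the other way round get a 401 even with a perfectly valid token.
    creds = base64.b64encode(f"{token}:".encode()).decode()
    return f"Basic {creds}"

def check_token(base_url: str, token: str) -> bool:
    """True if SonarQube accepts the token (network call; sketch only)."""
    req = urllib.request.Request(
        f"{base_url}/api/authentication/validate",
        headers={"Authorization": sonar_auth_header(token)},
    )
    with urllib.request.urlopen(req) as resp:
        return b'"valid":true' in resp.read()
```

If `check_token` succeeds but the tool still gets a 401, the problem is in how the tool builds its auth header, not in your SonarQube setup.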
https://redd.it/1j03sxv
@r_devops
Some projects on Docker for Self-learning and Resume
I am learning about Docker for containerization. I have done sample projects like deploying 2-tier and 3-tier apps in containers. Please share some unique projects you built with Docker that also helped you get a better understanding of the topic.
It would be much appreciated if you shared a short explanatory summary for each project too :).
https://redd.it/1j06g71
@r_devops
Delete variables in many variable groups?
Hello everyone,
I'm new to DevOps and trying to learn the best way to approach this task.
I have 20 pipelines, and each pipeline has variable groups containing hundreds of variables. I want to delete a specific variable from any pipeline that is using it.
What is the easiest way to do this without manually checking each pipeline to see if the variable exists?
Azure DevOps
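This can be scripted against the Azure DevOps REST API: list all variable groups, drop the variable wherever it appears, and PUT each changed group back. A stdlib sketch using PAT auth (the `distributedtask/variablegroups` endpoints are from the Azure DevOps REST API; double-check the `api-version` against your server):

```python
import base64
import json
import urllib.request

def strip_variable(group: dict, name: str) -> bool:
    """Remove `name` from a variable group's variables; True if it was present."""
    variables = group.get("variables", {})
    if name in variables:
        del variables[name]
        return True
    return False

def delete_everywhere(org: str, project: str, pat: str, name: str) -> None:
    auth = base64.b64encode(f":{pat}".encode()).decode()
    base = f"https://dev.azure.com/{org}/{project}/_apis/distributedtask/variablegroups"
    req = urllib.request.Request(f"{base}?api-version=7.1-preview.2",
                                 headers={"Authorization": f"Basic {auth}"})
    groups = json.load(urllib.request.urlopen(req))["value"]
    for group in groups:
        if strip_variable(group, name):  # only PUT groups that actually changed
            update = urllib.request.Request(
                f"{base}/{group['id']}?api-version=7.1-preview.2",
                data=json.dumps(group).encode(), method="PUT",
                headers={"Authorization": f"Basic {auth}",
                         "Content-Type": "application/json"})
            urllib.request.urlopen(update)
            print(f"removed {name!r} from group {group['name']!r}")
```

Running the listing step alone (without the PUT) gives you a dry-run report of which groups reference the variable.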
https://redd.it/1j07sl1
@r_devops
Creating docker image for my Laravel application to deploy on AWS ECS. Do I still need nginx?
So I have a PHP Laravel application I am planning on containerizing and deploying on AWS ECS. I have only ever deployed on a single VPS before, where I configured nginx as a reverse proxy to my php-fpm process and used it to manage SSL certificates. Now that I am containerizing my application, my original thought was to simply containerize the PHP application, expose the php-fpm port out of the container, and use an AWS load balancer and Certificate Manager to essentially replace nginx. However, I keep reading that I should still put nginx between my PHP Laravel application container (or include it in the Docker image) and the AWS load balancer, but I don't exactly understand why?
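The usual reason given is protocol, not certificates: an ALB speaks HTTP, while php-fpm speaks FastCGI, a different wire protocol, so something inside the task still has to translate between the two. The ALB can replace nginx's TLS and load-balancing duties, but not that translation. A minimal sketch of the layer nginx keeps (paths and the php-fpm address are assumptions for a sidecar setup):

```nginx
# nginx container in the same ECS task: HTTP in from the ALB, FastCGI out
server {
    listen 80;
    root /var/www/html/public;   # Laravel's public dir (assumed path)

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;   # php-fpm in the same task
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
```

nginx also serves Laravel's static assets directly, which php-fpm alone cannot do. The alternative is to skip php-fpm and bake an HTTP server into the app container (e.g. Laravel Octane or Apache with mod_php), which removes the need for the sidecar.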
https://redd.it/1j0a8u0
@r_devops
Struggling to move Kibana dashboards between environments?
Rebuilding dashboards, searches, and visualizations from scratch can be a pain. But did you know there’s a simple way to export and import them effortlessly?
In our latest blog, we walk you through the easiest method to transfer Kibana dashboards, searches, and visualizations—saving you hours of manual work.
Check out the full guide
Have you tried exporting Kibana dashboards before? Share your experience in the comments!
#Kibana #Elasticsearch #DevOps #ITMonitoring #DataVisualization #Observability #Skedler
https://preview.redd.it/ixbdp0japwle1.png?width=1536&format=png&auto=webp&s=0c5bf0798deffea6f05b2cc3be18de55477a880b
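The export/import in question maps onto Kibana's saved-objects API: POST an export request (with the `kbn-xsrf` header Kibana requires), get back NDJSON, and feed that file to `/api/saved_objects/_import` in the target environment. A stdlib sketch (Kibana 7+ API; URL and types are placeholders):

```python
import json
import urllib.request

def export_payload(types: list[str]) -> bytes:
    # includeReferencesDeep also pulls in the index patterns and
    # visualizations a dashboard depends on, so the import doesn't
    # land with broken panels.
    return json.dumps({"type": types, "includeReferencesDeep": True}).encode()

def export_saved_objects(kibana_url: str, types: list[str]) -> bytes:
    """Return the NDJSON export of the given saved-object types."""
    req = urllib.request.Request(
        f"{kibana_url}/api/saved_objects/_export",
        data=export_payload(types),
        headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; sketch only
        return resp.read()  # pipe this into /api/saved_objects/_import
```

The same export can also be done by hand from Stack Management → Saved Objects; the API version just makes it repeatable between environments.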
https://redd.it/1j0boz7
@r_devops
AWS ECS - Single account vs multi AWS accounts
Hey everyone,
I’m building a platform to make ECS less of a mess and wanna hear from you.
Do you stick to a single AWS account or run multi-account (per environment)? What’s your setup like?
Thanks for chiming in!
https://redd.it/1j0a6g1
@r_devops
Resources for “real-world” linux / devops labs
I’m pretty new to DevOps and I was wondering if there are any good resources that give you an understanding of how complex distributed systems work and what the day-to-day problems are in this kind of work. I feel pretty comfortable in Linux and enjoy exploring this world (I am looking forward to switching from a Mac (I know, but hear me out, I bought it for learning ML, which I dropped, of course) to something like a Lenovo ThinkPad, running Arch as the main OS on it, and never quitting the terminal again, lol).
I am looking for labs/projects that give you something like: “hey, here’s your system { some configuration }. And here is the problem. Write a script / ansible role / any other tool to solve this issue”.
I rented a VPS that I use to learn Ansible / Docker / Prometheus, etc. Can I build my own lab with it and some VMs without wasting a fortune? And if so, how can I test its reliability?
https://redd.it/1j0dpew
@r_devops
I was able to sell a little more in my devops/cloud computing services company
Hello, 2 years ago I posted this on this channel: [https://www.reddit.com/r/devops/comments/169a9yy/i\_started\_a\_devops\_consulting\_company\_and\_havent/](https://www.reddit.com/r/devops/comments/169a9yy/i_started_a_devops_consulting_company_and_havent/) stating that I had a lot of difficulties selling in my devops/cloud computing consulting company, at that time I had a lot of fears because I was using a strategy that didn't work for me personally.
I'm writing this because the situation has now improved: I have 2 full-time DevOps engineers with full legal benefits, a part-time marketing person, and I outsource to an accounting firm for tax reasons. The idea of this post is to share what worked for me and what didn't, since many people asked for that in the previous post (2 years ago).
Things that worked (to sell more):
* Leaning on my previous contacts: don't go straight to offering your services; instead, occasionally ask what their projects are and show real interest. That way you can evaluate whether you can really help them, and if not, the contact simply remains on hold.
* Look for opportunities with contacts who work close to those who make the decisions, since they trust your contact, and therefore, you.
* Continue making contacts, it was important to increase my social skills, and have a nose for being everywhere, that is, recognizing potential business happening miles away.
* Be relevant on networks, have constant technical publications, I also have a podcast where I invite relevant people in the field, and occasionally I comment on LinkedIn publications where I can really contribute something of value.
* Opening up to other markets. Fortunately I have a development background, and I have been learning a lot about ML and AI engineering, so I was able to close some related contracts, offering developer services alongside my DevOps engineers, who work full-time on the deployment of my applications. Without that, I would not have been able to create the work for these people.
Things that didn't work:
* Publishing things generated by AI, don't do it.
* Contacting people you don't know on LinkedIn, cold emails, customer databases, etc.
* Being purely technical, it is really necessary to understand the business side to have empathy with your client, that way you create a closer relationship and build trust.
* Going to technology events, honestly, there are a lot, but a lot of people there to sell, and very few to buy, it's a pretty complicated environment.
Maybe I'm missing a lot of things, but these things helped me a lot to sell and to be able to have a stable business initially. If you have any questions, feel free to ask.
https://redd.it/1j0g2oz
@r_devops
Video resources to understand datadog traces?
I'm trying to implement Datadog in an AWS Lambda (Python). It's working so far, but the traces I'm getting are super low level (it seems more like a profiler than tracing). I don't fully grasp how to configure the traces by reading the docs.
Can you suggest any resources or youtube videos to learn?
https://redd.it/1j0hj6n
@r_devops
Window ARM
I am planning to buy a Microsoft Surface Laptop (Copilot+ PC, 13.8-inch touchscreen, Snapdragon X Elite, 12 cores) because it is the cheaper option. The main reason is for DevOps-related learning. Does anyone have experience with it, and is it a good choice?
https://redd.it/1j0hfvf
@r_devops
Microservice Integration Testing a Pain? Try Shadow Testing
We published an article yesterday on The New Stack about shadow testing for microservices, and I'm curious about your thoughts on this approach.
Shadow testing essentially takes the concept of canary testing (which most of us do in production) but repurposes it for Pull Request (PR) testing. The core idea is running a new version of your service alongside the current one and running tests on both to directly compare responses before merging.
Why we think this is interesting:
* Integration tests often become maintenance nightmares as services evolve
* Unlike traditional integration tests with mocks, shadow testing uses real dependencies
* You can catch subtle regressions and performance issues pre-merge
* It requires minimal ongoing maintenance compared to brittle integration tests
We took inspiration from tools like OpenDiffy (originally from Twitter/X) that pioneered automated response comparison for detecting discrepancies.
Have any of you implemented something similar in your microservices workflows? How does this approach compare with your current integration testing approach for PRs?
Article for reference: Microservice Integration Testing a Pain? Try Shadow Testing
https://redd.it/1j0h4je
@r_devops
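The comparison step described above can be sketched in a few lines of Python: feed the same request to the current and candidate handlers and diff the responses field by field. The handler names and diff format here are illustrative assumptions, not taken from the article; a real setup would compare live HTTP responses, as tools like OpenDiffy do.

```python
from typing import Callable

Handler = Callable[[dict], dict]

def shadow_compare(request: dict, current: Handler, candidate: Handler) -> list[str]:
    """Send the same request to both service versions and list field-level diffs."""
    baseline = current(request)
    shadow = candidate(request)
    diffs = []
    # Walk the union of keys so added or removed fields also show up.
    for key in sorted(set(baseline) | set(shadow)):
        if baseline.get(key) != shadow.get(key):
            diffs.append(f"{key}: {baseline.get(key)!r} -> {shadow.get(key)!r}")
    return diffs

# Toy handlers standing in for the deployed version and the PR's version.
def v1(req: dict) -> dict:
    return {"status": 200, "total": req["qty"] * 5}

def v2(req: dict) -> dict:
    return {"status": 200, "total": req["qty"] * 5, "currency": "USD"}

print(shadow_compare({"qty": 3}, v1, v2))
# A non-empty list means the candidate's response diverged from the baseline,
# which is the signal a shadow test would surface on the PR.
```

In practice you would also have to decide which fields to ignore (timestamps, request IDs) so that expected nondeterminism doesn't drown out real regressions.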
500 lines of code distributed file system ( Python )
This distributed file system was created for educational purposes. If you are interested in distributed systems and file systems and want to gain practical knowledge about them, check out this repository:
https://github.com/ARAldhafeeri/Monty-Python-McChunkin
Demo:
https://www.youtube.com/watch?v=cI11PNN8BQw
Fork and play; if you have any questions, message me here.
https://redd.it/1j0m6g9
@r_devops
It took me 20 years
I finally got a job building infrastructure as code: AWS CodePipeline + Terraform, with a promise to also get hands-on with Azure and their DevOps/pipeline products. I have a chronic health condition that really slowed me down. Miraculously, I found a way to manage it better and my health has started improving. My wife is a rock; she stayed by my side. Today was a good day, and for the first time in a very long time I can see a kind of light at the end of the tunnel, or at least some sunshine. Some good days ahead, decent health, a decent income, a future while I still have some life left in me to make good use of it.
Onwards
Edit: now that I think about it, I first picked up Red Hat Linux 4 (that's RHL, not RHEL); I paid for an actual CD. I think that was in the late 1990s, 1996-1998, so I guess I could say I really started down this path over a quarter of a century ago.
https://redd.it/1j0rgrs
@r_devops
Need help troubleshooting VirtualBox
Trying to add a VM to set up Jenkins.
Can anyone please help?
https://redd.it/1j0v9d8
@r_devops