Keep working as a developer or become a cloud specialist?
I worked at a digital agency for 2 years in a kind of full-stack position and built a lot of websites and mobile apps. I also got the chance to set up cloud infrastructure and do some DevOps work, just simple CI/CD and Docker.
Now I have two job offers: a backend developer role at a bigger agency, and a cloud engineer role at a large global shipping company.
I am already very experienced in frontend, so I want to learn more about backend and DevOps/infrastructure. That's why I am struggling to decide which offer to take.
While the agency offer pays a bit more and would let me further my backend skills, the cloud engineer offer would expose me to K8s, and I would still do some development work, such as internal frameworks for the company's application team.
Any advice, or anything else I should take into consideration?
Should I work as a backend developer and play with cloud in my free time, or take the cloud engineer offer and do side projects to keep my backend skills sharp? Which one sounds more doable?
https://redd.it/10focyg
@r_devops
Is it hard to migrate a MongoDB from one cloud to another?
Let's say I am on AWS and want to move my MongoDB to Azure. How difficult is that? Is it as simple as downloading all the data and re-uploading it to the other cloud?
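For a self-managed MongoDB it is often roughly that simple: a `mongodump` followed by a `mongorestore` against the new host. The URIs below are placeholders, and for large or live databases you'd want replication or a managed migration service rather than a one-shot dump:

```shell
# Dump from the source (AWS-hosted) instance; URIs/credentials are placeholders.
mongodump --uri="mongodb://user:pass@aws-host.example.com:27017/mydb" --out=./dump

# Restore into the target (Azure-hosted) instance.
mongorestore --uri="mongodb://user:pass@azure-host.example.com:27017" --dir=./dump
```

Note that downtime, data volume, and index rebuild time are the usual complications, not the copy itself.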
https://redd.it/10frkvy
Posting information INTO Postman from outside - how is it done?
I've used a variety of other tools to examine payloads from PUT or POST requests.
Pipedream is my current favourite for this - it provides me with an API endpoint that I can POST a JSON payload to, so I can examine the payload and test everything thoroughly before posting to the intended downstream system.
How do I do this with Postman? My IT team has set up a Postman account that we can all save our work into to make sharing easier, but they are not sure how to do this. The only documentation I can find from Postman covers the responses you receive when you POST to another system from Postman.
I feel like we are all missing the obvious here - can you do this with Postman, and if so, where is the documentation?
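One hedged possibility: I believe Postman's mock servers log the requests they receive, so you may be able to POST to a mock server URL and then inspect the payload in that mock server's call log inside Postman. The mock ID and path below are placeholders:

```shell
# POST a sample payload to a Postman mock server URL (ID and path are made up);
# the incoming request should then appear in the mock server's call log.
curl -X POST "https://<your-mock-id>.mock.pstmn.io/inspect" \
  -H "Content-Type: application/json" \
  -d '{"order_id": 123, "status": "shipped"}'
```

Whether the call log shows enough of the body for thorough inspection is worth verifying before committing to this approach.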
https://redd.it/10fr6u5
Reproducible builds locally and in the pipeline using docker?
Hey everyone, I have been working on our pipelines at work and have a question for the community about whether a similar implementation exists.

Using dotnet as an example, we currently have the following pipeline set up in Azure Pipelines:

Build --> Unit Test / Sonar analysis --> Docker build & publish

All of these stages (bar the Docker build) run in an Azure Pipelines container job. The build stage does a `dotnet publish` and uploads the produced artifact; the unit test stage runs the SonarQube analysis on the `dotnet test` build and publishes the coverage/test result files to the Azure DevOps server and SonarQube; and the Docker stage creates a production-ready image that copies in the artifact published in the first step and pushes it to our private Docker registry.

This all works fine, but after speaking to the devs, they asked us to make the process more repeatable so they can be sure that what they produce locally is the same as what the pipeline produces. I think this is a good idea and started to dive into ways we can achieve it. We agreed that we should take more advantage of Docker for reproducibility and use a multi-stage build to run the application build, the unit tests, and of course the final production-ready image. Then, in the pipeline, we can run a simple `docker build` and get the same result as if it were running on a dev machine.

My only issue with this process is the code analysis. SonarQube hooks into MSBuild to analyze the code, but that would now run inside a Docker build. So do we add Java and the SonarQube scanner to the first stage of the multi-stage image? Do we want devs to run this step locally and have their local code analyzed? Or do we have a completely separate step with another build inside the pipeline purely for code analysis?

I am struggling to find an elegant solution - everything seems very overkill - and I am wondering if anyone else has managed to achieve something similar.
Please feel free to ask any questions; I feel like I have not explained the situation well, but I will try to clarify where I can.
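As a sketch of the multi-stage idea (the image tags, project name, and stage layout are assumptions, not the actual setup described above), devs and the pipeline could share one Dockerfile, with the pipeline running tests via `docker build --target test .` and keeping SonarQube as a separate pipeline-only step outside the image:

```dockerfile
# Sketch only - image versions and the app name are placeholders.
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

# Optional stage: `docker build --target test .` runs unit tests identically
# on a dev machine and in the pipeline.
FROM build AS test
RUN dotnet test --logger trx --results-directory /testresults

# Final production image copies only the published artifact.
FROM mcr.microsoft.com/dotnet/aspnet:7.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

This keeps the Java/Sonar tooling out of the image entirely, at the cost of the analysis build not being byte-identical to the Docker one.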
https://redd.it/10fk7mp
You got hired as a DevOps engineer, but you are really a glorified sysadmin. What do you do to change this?
Curious how people would approach this if it happened at their company. The first thing that comes to mind is containerizing applications?
https://redd.it/10fx26f
Need to learn about certs (security)
Hi guys,
I have been working in DevOps for a while and used to be a developer. Certs have always scared me away, so I've never been involved in working with them. But recently, most of the issues in our environment have been caused by certs, whether in Kubernetes, OpenShift, or Kafka.
We are having different types of issues, and it's very difficult for me to follow when our team discusses them in meetings.
Can you guide me on where I should start learning about this, and also suggest any certification courses that would help? My main target is to be ready to solve security problems related to certs/keys.
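Since issues in Kubernetes, OpenShift, and Kafka usually boil down to TLS certificates, one practical starting point is inspecting certs by hand with `openssl`. The hostname, port, and file name below are placeholders:

```shell
# Show the certificate chain a server presents (host/port are placeholders;
# 9093 is a common Kafka TLS listener port).
openssl s_client -connect kafka.example.com:9093 -showcerts </dev/null

# Inspect a PEM certificate's subject, issuer, and validity window -
# expired certs and mismatched subjects cause most of these outages.
openssl x509 -in cert.pem -noout -subject -issuer -dates
```

Getting comfortable reading subject, issuer, SANs, and expiry dates covers a surprising share of real-world cert incidents.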
Thanks
https://redd.it/10fy6ki
CORS issue after attaching AWS WAF to load balancer
Guys,
I am seeing "Access to fetch at ' ' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled." after attaching AWS WAF to my load balancer. Without the WAF it works fine, so what might trigger the issue, and which rules are responsible for this scenario?
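A common culprit in this setup is a WAF rule blocking the browser's preflight `OPTIONS` request, which surfaces in the browser as a missing `Access-Control-Allow-Origin` header rather than an obvious 403. One way to narrow it down is to replay the preflight yourself; the hostnames below are placeholders:

```shell
# Reproduce the browser's CORS preflight against the load balancer.
curl -i -X OPTIONS "https://api.example.com/resource" \
  -H "Origin: https://app.example.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: content-type"
# A 403 here suggests a WAF rule is blocking the OPTIONS preflight;
# WAF sampled-request logs should show which rule matched.
```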
https://redd.it/10fw4wl
Beholder - Documentation search engine with K8S first approach
Hey everybody,
I just finalized the first version of my project: Beholder. When deployed to K8S it allows you to expose OpenAPI documentation for specifically labeled services.
It's the first version; I tested it as much as I could, but there could be some lingering bugs. I would be more than grateful for feedback.
https://github.com/gdulus/beholder
https://redd.it/10g1gxc
Should I continue my self-taught journey to become a remote worker?
Overthinker here...
There's something demotivating me from continuing my self-taught journey to become a DevOps engineer. I'm from a third-world country where there are barely any software jobs; it's just web dev with pretty bad salaries. I'm currently learning sysadmin skills and Golang, but it doesn't stop there - I know there's much more for sure. I have already made my roadmap and path, and I'm in my third year of a computer engineering degree.
The issue is that I hear many people say DevOps requires you to work as a sysadmin or software engineer first to get hands-on experience before moving into DevOps, and also that it's very hard to get a remote job outside the US. My plan is to gain as much knowledge as possible and build up my GitHub so I can land a junior remote role, even if it pays well below average (not necessarily a DevOps role - sysadmin or cloud specialist first is fine). Working any of those jobs first is fine with me, but remotely? Eh... even for DevOps.
What do you think, guys? Should I stay motivated and keep learning? I'm worried that all my studies will go to waste.
I do have a carefully considered roadmap/path - I spent months doing research and watching videos.
Edit: I can't travel outside my country. I'm here taking care of my family alone :q
https://redd.it/10g4hmw
designing guide | DevOps
I have a homogeneous infrastructure on the cloud, and I need to design the DevOps way of managing it.
My design should cover configuration management, security updates, scaling, and automation.
I have very good knowledge of Linux, storage, and operations, but I have no clue about DevOps ways of designing.
So is there any book or website you could refer me to? Please.
https://redd.it/10g4t22
Do you let devs deploy to production?
Just curious how others are doing this. Here, the devs need to open a Jira ticket requesting a specific build to be deployed to prod, and then our team does the deployment with the CI/CD pipeline.
https://redd.it/10g3bcb
Azure Key Vault for multi-cloud use (AWS, Rancher on-prem, and Azure)
Does anyone have experience using Azure Key Vault outside of Azure? I've been tasked with identifying a multi-cloud solution for secrets management. We have an existing HashiCorp Vault setup, as well as an existing Azure Key Vault setup.
Is it possible to use HashiCorp Vault as a secret store that pulls from Azure Key Vault? Alternatively, is it possible to use Azure Key Vault successfully in AWS Kubernetes clusters or VMs, or on-prem Kubernetes clusters/VMs?
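I can't speak to every combination, but Azure Key Vault is reachable from anywhere that has network access and Azure AD credentials, so pulling secrets from AWS or on-prem workloads via a service principal is at least mechanically possible. A minimal sketch with the Azure CLI (the vault and secret names are hypothetical):

```shell
# From an AWS instance or on-prem VM: authenticate as a service principal,
# then read a secret. Credentials are injected via environment variables here.
az login --service-principal \
  --username "$AZURE_CLIENT_ID" \
  --password "$AZURE_CLIENT_SECRET" \
  --tenant "$AZURE_TENANT_ID"

az keyvault secret show --vault-name my-vault --name db-password \
  --query value -o tsv
```

The trade-off is that you now have a bootstrap secret (the service principal credential) to manage on the non-Azure side, which is exactly the problem Vault's platform-native auth methods try to avoid.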
https://redd.it/10g9j9k
Hands-on examples of observability-driven development
https://tracetest.io/blog/observability-driven-development-with-go-and-tracetest
Based on one of my previous discussions about ODD, I wanted to go into more depth and explain how it works with a code demo using open-source tools like Go and Tracetest. The main point I think is that there are no mocks. Instead, you're running E2E and integration tests against real data. I think the biggest pain point in testing on the back end is the amount of coding you need to do to actually just make the test run. Mocking API responses, setting up credentials and env vars to access different services and databases. It's just a lot of hassle to run an integration test.
Disclosure: I am on the Tracetest team, so I'm passionately not disinterested in what you think about the whole ODD movement.
https://redd.it/10gab31
Moving from Puppet to Ansible - a few questions around structure and config drift
So we're on Puppet right now - it's old, out of date, but at the core of everything we do.
We'd like to move to Ansible, which a lot of us are familiar with, and which I think is the better path forward for us as we're moving a lot of things to the cloud.
Now I have a few thoughts/questions for which I don't have an exact answer for:
1: Configuration Drift
We can make a playbook, chuck it into gitlab, have a pipeline run it...but then what?
What if someone makes a config change on the box but not in git? (it WILL happen)
Puppet runs every 45 minutes or so, without using Ansible Tower, how are people doing this?
Something like Rundeck?
An "Ansible Master" server at each DC running cron jobs every hour?
2: Structure or hierarchy of our playbooks/roles, with multiple DCs
There will be quite a few common roles that ALL servers will need:
NTP, security/SSH settings, log rotation, log shipping, etc.
Do we just create a playbook for each server type/location, chuck in the "common" roles, and then add the app/location-specific role to that playbook?
It seems like #2 could get messy quickly with lots of servers doing the same thing across multiple DCs.
e.g. I might want to only affect the mail servers at DC1 today, then DC2 tomorrow, and DCs 3, 4, 5 & 6 later... but does that mean I now have 6 versions of the same role to maintain?
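On the drift question, one common Tower-less pattern is `ansible-pull` on a cron schedule; on the multi-DC question, inventory groups plus `--limit` let a single shared role roll out per-DC without copies. A sketch, where the repo URL, playbook, and group names are all placeholders:

```shell
# Drift control without Tower: each node pulls the repo and applies it hourly.
# Place in /etc/cron.d/ansible-pull (repo URL and playbook are placeholders).
0 * * * * root ansible-pull -U https://gitlab.example.com/infra/ansible.git site.yml >> /var/log/ansible-pull.log 2>&1

# Multi-DC rollout: one shared role, scoped per run via group intersection.
# 'mailservers' and 'dc1' are hypothetical inventory groups; ':&' intersects.
ansible-playbook -i inventory site.yml --limit 'mailservers:&dc1'
```

With that layout you maintain one role and one playbook; which hosts it touches on a given day is purely an inventory/`--limit` decision.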
https://redd.it/10gc90g
"I Know So Much Stuff I Learned Over The Years I Forgot Half Of That By Now?"
I feel like my brain has a limited capacity to remember stuff I don't repeat from time to time.
As a DevOps/SysOps/sysadmin, whatever, I've had to learn how so many tools work over the years that I've lost track of half of them.
For example, 10 years ago I was using Puppet. I could write configurations 1b1; it was super easy to understand, and now I would have to remind myself of most of it, because I'm mostly using GA these days.
Am I just a bad engineer, or do the tools change so often from company to company that it's just impossible to remember all of them? Maybe some people can - or most?
Just curious what other people's experience is in this regard.
https://redd.it/10gfegd
Monitoring stack demo using Grafana, Loki & Mimir
Wanted to share a demo/tutorial with everyone on how to get started with a monitoring stack using Grafana, Loki, and Mimir, with Prometheus metrics and the Promtail log sender:
[https://github.com/wick02/monitoring](https://github.com/wick02/monitoring)
I also created a [video demo](https://www.youtube.com/watch?v=KPqbA7ys24o) of it working on a Mac M1, and a few of my old colleagues cloned it with no issues reported. I have around 6-7 years of experience helping maintain log and metric backends, and this is my second video on Grafana - the other is available on [Grafana's youtube channel](https://www.youtube.com/watch?v=AgV5DoWcY6I&t=1544s) from a meetup in 2017.
**Goals of this repo:**
* To trim down to the very basics of each service, to isolate them from each other so you can pick and choose what you want to use from the demo.
* I've configured it in such a way where you can scale it in a cloud environment and to give something to the developers.
* It's not dependent on keeping volumes on the machine, so you can use something like Amazon ECS without managing the volumes and use spot servers to help cut costs.
* It's not a lot of code or configuration, it uses a lot of existing tutorials already but made in such a way that I think anyone with some operational experience can use and get started with.
* It's also built in a way where the metrics are pushed to an S3 like backend using minio so you can keep and persist all the logs and metrics.
* Lastly, it uses Tenant IDs, so you can isolate offenders if you need to use this as a massive shared service for the company by rate limiting them until they stop sending you too many metrics/logs as we all are accustomed to see when we manage these type of backends.
* Since it is simple to spin up a Mimir or Loki cluster with a design like this, you could make multiple clusters and isolate components away even further
I hope someone out there finds this useful. I hope to add Tempo in the future along with a terraform deployment process for this stack.
https://redd.it/10gfu0t
Feedback Request: TCO Calculation for Apache Kafka
I'm working on calculating the total cost of ownership (TCO) for tools like Apache Kafka to determine when to build vs. buy.
I'd love your feedback -- what am I missing? What did I underestimate/overestimate? How can I improve this?
First, the criteria to consider when calculating TCO:
Up-front costs
software cost & licensing, if applicable
learning & education
implementation & testing (including data migration costs)
documentation & knowledge sharing
customization
Ongoing costs
direct infrastructure costs (e.g., hosting & storage)
backup infrastructure costs (e.g., failover & additional AZs)
supporting infrastructure costs (e.g., monitoring & alerting)
maintenance, patches/upgrades, & support
feature additions
Team & opportunity costs
hiring to replace the engineers now working with the new software
time spent on infrastructure that could otherwise be spent on core product
Now, an example using the above criteria:
Desired specs for our example deployment (I picked one of the smaller Heroku plans):
Capacity: 300GB
Retention: 2 weeks
vCPU: 4
Ram: 16GB
Brokers: 3
Assuming an engineer has an all-in comp package of $200k/yr (this would obviously be different in every situation, for every geo), year one would look like:
||Building (on AWS)|Buying (Heroku)|
|:-|:-|:-|
|software cost & licensing|$0|$21,600|
|learning & education|$7,692 (2 eng * 1 week)|$3,846 (1 eng * 1 week)|
|implementation & testing|$15,384 (2 eng * 2 weeks)|$7,692 (1 eng * 2 weeks)|
|infrastructure costs (see above specs)|$12,117.60|$0 (included in software cost)|
|supporting infrastructure costs (monitoring, etc.)|$1,200/yr|$1,200/yr|
|maintenance, patches/upgrades|$15,384 (2 eng * 2 weeks spread throughout the year)|$7,692 (1 eng * 2 weeks spread throughout the year)|
|Year 1 TCO|$51,777.60|$42,030|
Directionally, this example seems correct.
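As a sanity check, the table's arithmetic can be reproduced in a few lines (using the table's rounded engineer-week of $3,846, i.e. $200k / 52):

```python
ENG_WEEK = 3_846  # $200k/yr / 52 weeks, rounded as in the table

# Year-one costs, build side (self-hosted on AWS).
build = {
    "software & licensing": 0,
    "learning & education": 2 * 1 * ENG_WEEK,      # 2 eng * 1 week
    "implementation & testing": 2 * 2 * ENG_WEEK,  # 2 eng * 2 weeks
    "infrastructure": 12_117.60,
    "supporting infrastructure": 1_200,
    "maintenance & upgrades": 2 * 2 * ENG_WEEK,    # 2 eng * 2 weeks
}

# Year-one costs, buy side (managed Heroku plan).
buy = {
    "software & licensing": 21_600,                # the table's yearly plan cost
    "learning & education": 1 * 1 * ENG_WEEK,      # 1 eng * 1 week
    "implementation & testing": 1 * 2 * ENG_WEEK,  # 1 eng * 2 weeks
    "infrastructure": 0,                           # bundled into the plan
    "supporting infrastructure": 1_200,
    "maintenance & upgrades": 1 * 2 * ENG_WEEK,    # 1 eng * 2 weeks
}

print(f"Build: ${sum(build.values()):,.2f}")  # Build: $51,777.60
print(f"Buy:   ${sum(buy.values()):,.2f}")    # Buy:   $42,030.00
```

Keeping the model as a script like this also makes it easy to vary the assumptions (comp, weeks, plan price) and see how quickly the comparison flips.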
What do you think? What am I missing? What did I underestimate/overestimate? How can I improve this?
Thanks!
https://redd.it/10g9bk2
@r_devops
Script or software that automatically populates a specific profile in ~/.aws/credentials
cat ~/.aws/credentials
[default]
aws_access_key_id = xxxx
aws_secret_access_key = yyyyy
[foo]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyyy
aws_session_token = zzzzz
Every time, I need to run `aws sts assume-role --role-arn arn:aws:iam::123456789012:role/xaccounts3access --role-session-name s3-access-example` and then manually edit the `foo` profile in ~/.aws/credentials. I was wondering if there is software or a script that does this automatically for me?
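A small stdlib-only script can do this: run the assume-role call, then upsert the `foo` section of the credentials file (the role ARN and profile name are just the examples from above):

```python
import configparser
import json
import subprocess
from pathlib import Path

CREDS_PATH = Path.home() / ".aws" / "credentials"

def write_profile(profile: str, creds: dict, path: Path = CREDS_PATH) -> None:
    """Upsert one profile in an AWS-style INI credentials file."""
    config = configparser.ConfigParser()
    config.read(path)  # silently skips a missing file
    config[profile] = {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }
    with open(path, "w") as f:
        config.write(f)

def refresh(role_arn: str, session_name: str, profile: str = "foo") -> None:
    """Assume the role via the AWS CLI and store the temporary credentials."""
    out = subprocess.run(
        ["aws", "sts", "assume-role",
         "--role-arn", role_arn,
         "--role-session-name", session_name,
         "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    write_profile(profile, json.loads(out.stdout)["Credentials"])

if __name__ == "__main__":
    refresh("arn:aws:iam::123456789012:role/xaccounts3access", "s3-access-example")
```

That said, the AWS CLI can already do this natively: add a profile to ~/.aws/config with `role_arn` and `source_profile` set, and `aws --profile foo ...` will assume the role and cache the temporary credentials for you. Tools like aws-vault cover the same ground.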
https://redd.it/10ggej1
@r_devops
Hands-On: Kubernetes Gateway API With APISIX Ingress
A tutorial on using the new Kubernetes Gateway API with Apache APISIX Ingress. This is a hands-on walkthrough that you can follow on your own.
Read: https://navendu.me/posts/kubernetes-gateway-with-apisix/
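For reference, a minimal Gateway plus HTTPRoute manifest of the kind such a tutorial walks through might look like this (the `apisix` gatewayClassName and the service name are assumptions; check the post for the exact resources):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: apisix   # class name is an assumption
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: example-service   # hypothetical backend Service
      port: 8080
```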
https://redd.it/10gnldc
@r_devops
Why do some SaaS have multiple subdomains for each business domain?
What is the logic behind this? I feel like it adds complexity. The only thing I can think of is a BFF (Backend for Frontends) architecture: essentially, each frontend app gets its own API gateway.
Examples:
Shopify
* Auth screens and anything to do with accounts is on accounts.example.com
* Admin Dashboard has admin.example.com
* The storefront is on an entirely different domain and subdomain: hello.myshopify.com (the customization makes sense, since it's public-facing)
I want to know the benefits and logic of an architecture like this. Security reasons? It increases complexity quite a bit, I feel: a JWT issued by accounts.example.com also has to be valid on admin.example.com.
I see Jira does this too: start.atlassian.com, id.atlassian.com, yourname.atlassian.net
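On the shared-JWT point: cross-subdomain auth usually works by having every service trust the same issuer and verification key, so a token minted on accounts.example.com verifies anywhere. A stdlib-only sketch of that idea (HS256 with a hypothetical shared secret; real deployments would use a JWT library and typically asymmetric keys, so only the issuer can sign):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # hypothetical key shared by *.example.com services

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(sub: str) -> str:
    # What accounts.example.com would do at login.
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "iss": "https://accounts.example.com",
        "sub": sub,
        "exp": int(time.time()) + 3600,
    }).encode())
    sig = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str) -> dict:
    # What admin.example.com would do on each request.
    header, payload, sig = token.split(".")
    expected = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["iss"] != "https://accounts.example.com":
        raise ValueError("unexpected issuer")
    if claims["exp"] < time.time():
        raise ValueError("expired")
    return claims
```

A related reason these apps sit on subdomains of one parent domain: a session cookie set with `Domain=.example.com` is sent to every subdomain, so a single sign-in can carry across accounts., admin., and the rest.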
https://redd.it/10gpiaw
@r_devops
Internal tooling ideas?
I am interested to hear the kinds of internal tooling people have created. Is there a tool you have made that had a significant impact on your team or organisation?
https://redd.it/10gsy86
@r_devops