Rightsizing tips and recommendations for getting your cloud costs down
Rightsizing is a largely automated process, regardless of which cloud platform you’re using, but it pays to know how it works in order to better understand the recommendations and act on them appropriately. The process consists of three main steps:
1. Analyze: Rightsizing involves continuously tracing metrics like memory, network, disk, and vCPU usage across your volumes, instances, and virtual machines.
2. Verify: Analytics data must be verified against a predefined performance benchmark to determine whether or not resources are being underutilized.
3. Optimize: The final step is to downgrade or terminate cloud resources based on these results and your performance and cost-efficiency targets.
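The "Verify" step above can be sketched in a few lines of Python. The thresholds and metric names here are illustrative, not taken from any specific cloud provider's API:

```python
# Sketch of the "Verify" step: compare sampled utilization metrics
# against a predefined benchmark to flag underutilized resources.
# Thresholds and metric names are illustrative placeholders.

UNDERUTILIZED_THRESHOLDS = {"cpu_pct": 20.0, "memory_pct": 30.0}

def is_underutilized(samples: dict[str, list[float]]) -> bool:
    """Return True only if every tracked metric averages below its threshold."""
    for metric, threshold in UNDERUTILIZED_THRESHOLDS.items():
        values = samples.get(metric)
        if not values:
            return False  # no data: don't recommend a downgrade
        if sum(values) / len(values) >= threshold:
            return False
    return True

# An instance idling at ~5% CPU and ~10% memory would be flagged:
print(is_underutilized({"cpu_pct": [4.0, 6.0, 5.0], "memory_pct": [10.0, 11.0]}))  # True
```

A real pipeline would feed this from the provider's monitoring API (step 1) and route flagged resources into the "Optimize" step.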
For more info, check here:
https://redd.it/z1s9qe
@r_devops
Starting new project (ideas)
Hi, I'm new here. I've worked part of my career in the DevOps space, but not long enough to be more than a junior. I was thinking about building open-source tooling, but I don't know how much of a problem these things really are. If they aren't, I'll find something else to do in my free time. Can you tell me if you've ever encountered similar situations and how you handled them?
1. Deploying on-demand resources, like a new VM for somebody new on a team. I was wondering if there is a solution where I can quickly write Terraform code for a VM, an S3 bucket, and networking, then create something like a Typeform interface so non-infra people can use my template and quickly deploy infra for testing/research. (I've spent a crazy amount of time configuring resources for people, and I hate it.)
2. Is there anything out there for easy resource policies? I would like to deploy a VM from Terraform and configure automatic Slack notifications. For example, if the VM has been running for 8 hours in a day, message the developer and ask if they still need it; if not, shut it down. I know there are EC2 policies available, but I was thinking about an entire easy-to-configure workflow. (At my startup we wasted a lot of money on unneeded resources.)
3. Is there a tool for Terraform code management? Like, disallow modifying a particular resource or a specific field, and notify me if it was modified. A lot of times somebody has screwed up my setup by mistake because Terraform allowed it. I know Spacelift allows configuring that, but is there an alternative? (Somebody tried to mess up the setup again and I had to explain why.)
Let me know if you've experienced problems in these categories and whether it's worth creating an OSS project there. Maybe these problems are imaginary and I don't have enough experience to know how to solve them. Thanks a lot for your feedback!
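For item 2, the core decision logic a scheduled cleanup job would run is small. A rough Python sketch — the 8-hour threshold is from the post, but the function name, actions, and times are illustrative; a real version would query the cloud API for launch times and post to a Slack webhook:

```python
from datetime import datetime, timedelta

RUNTIME_LIMIT = timedelta(hours=8)  # illustrative threshold

def action_for_vm(launched_at: datetime, now: datetime, owner_confirmed: bool) -> str:
    """Decide what a scheduled cleanup job should do with a running VM."""
    if now - launched_at < RUNTIME_LIMIT:
        return "keep"            # under the limit, leave it alone
    if owner_confirmed:
        return "keep"            # owner said they still need it
    return "notify_then_stop"    # ping the developer on Slack, then shut down

now = datetime(2022, 11, 23, 18, 0)
print(action_for_vm(datetime(2022, 11, 23, 8, 0), now, owner_confirmed=False))
# 10 hours of runtime with no confirmation -> notify_then_stop
```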
https://redd.it/z20753
@r_devops
Observability with Spring Boot 3
The Spring Observability Team has been working on adding observability support for Spring Applications for quite some time, and we are pleased to inform you that this feature will be generally available with Spring Framework 6 and Spring Boot 3!
What is observability? In our understanding, it is "how well you can understand the internals of your system by examining its outputs". We believe that the interconnection between metrics, logging, and distributed tracing gives you the ability to reason about the state of your system in order to debug exceptions and latency in your applications. You can watch more about what we think observability is in this episode of Enlightning with Jonatan Ivanov.
The upcoming Spring Boot 3.0.0-RC1 release will contain numerous autoconfigurations for improved metrics with Micrometer and new distributed tracing support with Micrometer Tracing (formerly Spring Cloud Sleuth). The most notable changes are that it will contain built-in support for log correlation, W3C context propagation will be the default propagation type, and we will support automatic propagation of metadata to be used by the tracing infrastructure (called "remote baggage") that helps to label the observations.
We have been changing the Micrometer API a lot over the course of this year. The most important change is that we have introduced a new API: the Observation API.
> The founding idea was that users should instrument their code once using a single API and get multiple benefits out of it (e.g. metrics, tracing, logging).
This blog post details what you need to know about that API and how you can use it to provide more insights into your application.
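The Observation API itself is Java, but the "instrument once, get several signals out of it" idea can be sketched in a few lines of Python. This decorator and its names are illustrative only, not the Micrometer API:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def observe(name):
    """Wrap a function so one instrumentation point yields several signals:
    a timing metric and a correlated log line (a real system would also
    open and close a trace span around the call)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                logging.info("observation=%s elapsed_ms=%.2f", name, elapsed_ms)
        return wrapper
    return decorator

@observe("greeting")
def greet(who):
    return f"hello {who}"

print(greet("world"))  # hello world (plus one log line per call)
```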
Read the post
https://redd.it/z1uiq0
@r_devops
Tips on learning Ansible as a Chef user?
I've been using Chef for a long time now, across multiple jobs (and Puppet before that). However, my company got acquired last year by a much larger company. We're still going to be maintaining our current stack as-is for the moment, but my team is going to be helping out with another product team, and they use Ansible/Terraform and GitHub actions.
From the research that I've done already, it seems like Ansible approaches things in a very different manner than Chef does: whereas Chef provides a DSL layer on top of Ruby and you still use Ruby syntax, Ansible playbooks seem to be based directly on YAML configuration rather than a DSL on top of Python. (I'm assuming that if you need to create a custom Ansible module, you'd need to do so in Python, but I haven't gotten that far yet.)
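To make the contrast concrete, a minimal Ansible playbook is plain declarative YAML — no Ruby DSL. The host group and package here are made up for illustration:

```yaml
# Minimal playbook: each play targets an inventory group and lists tasks.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Coming from Chef, the closest mental mapping is playbook ≈ run list, role ≈ cookbook, task ≈ resource.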
Anyways, I was wondering if anyone had some guides tailored to people familiar with Chef that are looking to learn Ansible. I've found some different StackOverflow/Google Groups threads, but not as much on an actual tutorial/guide basis.
TIA!
https://redd.it/z23gyh
@r_devops
Non-technical founder looking for advice on the YouTube Data API v3
Could someone advise me on the following situation:
For a new business concept we're testing, we need to know continuously when a new YouTube video has been added to a playlist on a client's YT channel, and we do that on behalf of the client via OAuth2. Now, I know it's possible to make API calls continuously, but that seems so paradoxical that I had to seek help from (hopefully) one of you here.
It's not only for one client; our business model depends on this engine. We would need to be able to scale this to eventually 1,000+ channels (clients), for which we would need to be notified whenever a new video is added to a playlist on their channel.
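Assuming a polling approach — the YouTube Data API's `playlistItems.list` endpoint returns the videos in a playlist — detecting "new" videos is just a diff against the IDs already seen. A sketch of that logic, with the actual authenticated API call omitted:

```python
def new_video_ids(fetched_ids, seen_ids):
    """Return IDs present in the latest playlist fetch but not seen before,
    preserving playlist order, plus the updated 'seen' set to persist."""
    seen = set(seen_ids)
    fresh = [vid for vid in fetched_ids if vid not in seen]
    return fresh, seen | set(fresh)

seen = {"abc123"}                          # IDs persisted from earlier polls
fetched = ["abc123", "def456", "ghi789"]   # latest playlistItems.list page
fresh, seen = new_video_ids(fetched, seen)
print(fresh)  # ['def456', 'ghi789']
```

At 1,000+ channels, quota becomes the real constraint, so the polling interval per channel would need tuning against the API's daily quota.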
Lastly, we would need the MP4 version downloaded of that newly added video to our environment for further modification.
Anyone that could help us out?
https://redd.it/z24tob
@r_devops
Daily DevOps Tools
Hey everyone - about a year ago I started posting an interesting DevOps tool every day on a small sub. Linking it here if folks find it interesting or helpful: https://www.reddit.com/r/devopspro/
https://redd.it/z27xy3
@r_devops
I tried to learn Python
TBH I hated every single piece of code I wrote. My background is C#, and I'm currently working in DevOps with Azure as my core skill. I'm wondering if I can skip Python and live with PowerShell and Bash; any opinions?
https://redd.it/z29jbx
@r_devops
ci/cd for new project infrastructure
Hi,
I'm looking at building a CI/CD pipeline for a new set of projects, and I'm not totally sure where to go or what to do for a solution that gets me up and running quickly.
I have pretty decent Docker experience. I have a Dockerfile written for my API project, but I'll have at least a web frontend, an admin frontend, and then the backend API. I've had Jenkins experience, but I don't really want to run Jenkins for something that isn't super complex.
What's the best way to automate this? I'd like to push to Git and deploy based on the branch, preferably to a Linode I have.
I think that if I build the Docker container through my CI/CD after unit tests etc. pass, I can then connect over SSH, jump to the proper directory, and call docker-compose to restart the container. Is this viable for deployment? This would be for dev, stage, and prod.
My secondary issue is being able to run tests against the container as part of my CI pipeline. These would be API tests, functional tests, or Selenium tests for the frontend and admin interface.
Any direction or ideas would be really helpful. I know that AWS has CI offerings, but they seem to want you to integrate with their version control system. I would even be open to paying for some kind of managed Jenkins instance if that exists.
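As one illustration (not a vendor recommendation), a hosted CI like GitHub Actions can express the push-to-git, deploy-by-branch, restart-over-SSH flow described above. Repo, image, host, and path names are placeholders, and loading the SSH key from a repo secret is omitted:

```yaml
name: deploy
on:
  push:
    branches: [main]   # map other branches to dev/stage with similar jobs
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build image and run unit tests
        run: |
          docker build -t app:${{ github.sha }} .
          docker run --rm app:${{ github.sha }} ./run-tests.sh
      - name: Restart on the Linode over SSH
        # assumes an SSH key was loaded from a repo secret in a prior step
        run: |
          ssh deploy@linode.example.com \
            "cd /srv/app && docker-compose pull && docker-compose up -d"
```

The SSH-and-restart approach in the post works fine at this scale; the main caveat is a brief downtime during `docker-compose up -d` unless you add a reverse proxy in front.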
Thanks,
https://redd.it/z275sh
@r_devops
Deploying TIG stack for global network
Hello, we have become more interested in getting rid of our current network monitoring system and replacing it with
Telegraf
Influx/Kapacitor
Grafana
We stood up some test instances and really like what we are able to do/customization and currently playing with kapacitor to alert us. This is entirely for monitoring our network devices via SNMP/Telemetry.
The next step is to properly set up the entire stack for production, but we are a little stuck on the direction to take. We have 10+ data centers globally (500+ network devices) and would like to distribute the instances by region (AMER/APAC). What approach have you seen or would you recommend for standing up TIG in a larger environment? Thanks.
I was thinking of deploying a pair of Telegraf/InfluxDB instances at each DC or general region (Midwest/East Coast) and feeding a single Grafana instance. With that, are there tools to help manage multiple Telegraf/Influx hosts? Is there any way to aggregate them into a single data source, or will they have to be separate?
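For the per-region collector idea, each Telegraf instance would carry roughly this kind of config: an SNMP input polling the local devices and an InfluxDB (v1) output. Addresses, community string, and database name are placeholders:

```toml
# One regional Telegraf collector: poll network devices over SNMP,
# write to the local InfluxDB.
[[inputs.snmp]]
  agents = ["udp://core-sw1.example.net:161", "udp://core-sw2.example.net:161"]
  version = 2
  community = "public"

  [[inputs.snmp.field]]
    name = "uptime"
    oid = "RFC1213-MIB::sysUpTime.0"

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "network"
```

On the Grafana side, each regional InfluxDB can be added as its own data source, so a single Grafana instance can still chart all regions even if the databases stay separate.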
https://redd.it/z28uls
@r_devops
Who defines secret management / certificate management in your company
Hi All,
Wanted to check who generally defines some of the DevOps-adjacent processes like secret management and certificate management in your organisations. In my experience, enterprise or dev architects mostly define this, not the DevOps team or a DevOps architect. For secret management, you also need to adopt a coding framework/coding methods to be able to read and use those secrets in code.
What do you think?
Thanks
https://redd.it/z2d7zy
@r_devops
CPO magazine: 3 Keys To Successful DevSecOps Implementations.
Found this interesting article. I mostly agree, but I want your thoughts: do you agree with their 3 key points?
Source: https://www.cpomagazine.com/tech/3-keys-to-successful-devsecops-implementations/
https://redd.it/z247mv
@r_devops
Alternative to InSpec: what do you use to "assert things have been correctly configured"?
I used InSpec in the past, running it once in a while to assert that some things had been correctly configured and report back if not.
Typically:
* Checking the content of some files or the status of some services after building a new AMI with Packer
* Checking that security groups are correctly configured according to our "compliance du jour"
I really wanted to love InSpec for that, but it's still a PITA to use:
* The docs are really not great, especially when bootstrapping a new project. Once you have resources configured and want to add more or tweak some, it's a bit better, but still...
* It's Ruby and all its dependency issues :)
* Plugin support is still... interesting. I'm still not sure what to use/install to assert a few basic Kubernetes resources. Trying to install `train-kubernetes` gives me a dependency conflict error (\o/), the last `inspec-k8s` is super old, and installing that gem installs... `inspec-k8s` 0.0.0
I just tried a few more things and used a k8s_deployment resource to assert a deployment. It fails with a terrible traceback because it doesn't know the resource.
Are any of you using something else that you would recommend?
At the moment, I'm mostly interested in testing GCP and Kubernetes resources, but that may change in the future.
https://redd.it/z2gb9d
@r_devops
DevOps At Home
Alright, so how many of you do some sort of DevOps-ish stuff at home? Here's what I do: https://youtu.be/baDpnjg9YTc
https://redd.it/z2g5a7
@r_devops
robocopy time estimate
How do I calculate the time it will take to copy from source to destination using robocopy?
I performed a "dry run" test with the robocopy command and the /L parameter. It listed all of the files it would copy as well as a summary.
It displays a Time column in the log file's summary. Is this an estimated time to copy? If not, how do I determine the estimated time to copy?
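As far as I know, the times in robocopy's summary reflect the run itself rather than a forward estimate, but I'm not certain of that. A common way to get an estimate is to take the total bytes from the /L dry-run summary and divide by a throughput you measure with a small real copy between the same endpoints:

```python
def estimate_copy_seconds(total_bytes: int, throughput_bytes_per_sec: float) -> float:
    """Rough wall-clock estimate: size from the /L summary divided by
    throughput measured with a small test copy over the same link."""
    return total_bytes / throughput_bytes_per_sec

total = 250 * 1024**3        # e.g. 250 GiB reported by the dry run
throughput = 100 * 1024**2   # e.g. ~100 MiB/s measured over the link
hours = estimate_copy_seconds(total, throughput) / 3600
print(f"{hours:.1f} hours")  # 0.7 hours
```

The estimate degrades for millions of tiny files, where per-file overhead dominates raw throughput, so measure the test copy on a representative sample.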
https://redd.it/z2gs9n
@r_devops
ECS deployed on EC2 not accessible via HTTP
I have deployed an ECS cluster on EC2; tasks are running fine, and I have even checked inside the EC2 instance that the container is running with the desired port mapping. But when I try to access it on that port, it's not accessible: it says connection refused. curl and ping to the EC2 IP get no reply. I have configured the security group rules accordingly, still no luck. The same Docker image runs successfully with the Fargate launch type; I'm only having the issue with the EC2 type. I can't figure out the issue.
https://redd.it/z2j8cn
@r_devops
Do you guys know where I could find stuff like this?
[https://imgur.com/a/Ay7Vdqw](https://imgur.com/a/Ay7Vdqw)
Basically, a website that showcases how to achieve the following for a service:
* Design a Resilient Architecture
* Design High-Performing Architecture
* Design Cost Optimized Architecture
https://redd.it/z2kkk2
@r_devops
BitBucket - non-consecutive manual triggers
Howdy all,
Bitbucket has manual triggers that let me choose whether to run a step; a good example is a deployment step. But I haven't found a way to run steps non-consecutively, because sometimes my pipeline doesn't need to execute a certain step.
Trivial code snippet:
- step:
    name: List files
    trigger: manual
    script:
      - ls -lah
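If the goal is to skip a step entirely when it isn't needed, Bitbucket Pipelines also supports a `condition` with `changesets`, which runs the step only when matching files changed in the push. A sketch with illustrative paths:

```yaml
- step:
    name: Deploy backend
    trigger: manual
    condition:
      changesets:
        includePaths:
          - "backend/**"
    script:
      - ./deploy.sh backend
```

Combined with the manual trigger, this keeps the step out of runs where no relevant files changed.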
Thanks.
https://redd.it/z2ljpi
@r_devops
DevOps and Localization: Improving an overlooked area
My experience working as a developer is that localization in many companies (teams) sticks out as the odd child that isn't properly integrated into the development flow. What I have noticed:
- the localization process often stalls development because copy/translations are required to proceed
- developers have to copy-paste translations they don't understand
- localization is spread across Google Sheets, Jira tickets, emails, and the code base (not to mention spontaneous changes)
- a cumbersome process for adding or updating copy and translations means it often gets neglected, resulting in a suboptimal end product (unprofessionalism, out-of-date content, unclear or buggy content)
This adds up to a lot of wasted time and effort. I am trying to find out whether others share some of my observations and what the best solutions are. I have been trying out a few different solutions in a small side project as well.
https://redd.it/z2p0ka
@r_devops
Ensure that an ansible secrets.yml is never committed unencrypted
I use GitLab for version control and have a lot of secret variables I need to keep under version control. However, I don't want these committed as plain YAML without being encrypted first. How do people typically manage this problem?
I'm wondering if there is some kind of pre-commit hook within GitLab that I could link to a script that checks/validates the contents before accepting the commit.
edit: I just found this https://aaron.cc/prevent-unencrypted-ansible-vaults-from-being-pushed-to-git/ so it seems server-side Git hooks in GitLab are the correct way to enforce this.
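For reference, a minimal client-side pre-commit check along those lines might look like the sketch below. It assumes the vault files are named `secrets.yml`/`secrets.yaml` (adjust the pattern to your layout) and relies on the fact that encrypted Ansible Vault files always start with a `$ANSIBLE_VAULT` header line:

```shell
#!/bin/sh
# Sketch of a .git/hooks/pre-commit check: reject staged secrets.yml
# files that are not Ansible Vault encrypted.

# Encrypted vaults begin with a "$ANSIBLE_VAULT" header line.
is_encrypted_vault() {
  head -n 1 "$1" | grep -q '^\$ANSIBLE_VAULT'
}

# Check every staged file whose name matches secrets.yml / secrets.yaml.
for f in $(git diff --cached --name-only --diff-filter=ACM 2>/dev/null | grep -E '(^|/)secrets\.ya?ml$'); do
  if ! is_encrypted_vault "$f"; then
    echo "ERROR: $f is not vault-encrypted. Run: ansible-vault encrypt $f" >&2
    exit 1
  fi
done
```

Note that client-side hooks can be bypassed with `git commit --no-verify`, which is why the linked article enforces the same check server-side (in GitLab, a pre-receive hook), where it cannot be skipped.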
https://redd.it/z2tucy
@r_devops
Kubernetes on DigitalOcean pricing for low usage and limitations
Hello,
I'm hosting a Rails application for a client of mine, and the total cost is about $30/month: three droplets, one for Postgres, one for background workers, and one for the web app ($6 + $6 + $18).
I've been constantly dealing with the annoyance of having to upgrade the servers, which deserve far more maintenance than I give them. After making a bunch of changes to the application lately, I realized I'd like to avoid this altogether: it makes deployment stressful, I feel uncomfortable about the security patches I need to keep up with at the server level (on top of the app!), and in general even two identical servers diverge somewhat over time.
I thought of just running Docker containers inside simple droplets, but managing Docker containers with systemd seems to come with some limitations (the process for stopping a container is not straightforward).
Out of curiosity, I'm exploring the idea of using Kubernetes and Terraform as an alternative. I like learning, so the studying comes as a plus in some way.
Note that this app has been up for over 10 years but still receives new features, so I'm expecting more requests and changes over time.
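For context, defining a DigitalOcean Kubernetes (DOKS) cluster in Terraform is fairly compact. This is only a sketch; the name, region, version slug, and node size below are placeholders to adapt:

```
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

resource "digitalocean_kubernetes_cluster" "rails" {
  name    = "rails-app"        # placeholder name
  region  = "ams3"             # placeholder region
  version = "1.28.2-do.0"      # placeholder; list valid slugs with: doctl kubernetes options versions

  node_pool {
    name       = "web-pool"
    size       = "s-1vcpu-2gb" # droplet slug; node cost tracks the underlying droplet size
    node_count = 1
  }
}
```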
Is it possible to come close in pricing on DO using Kubernetes? I'd like to know before I commit to even studying the stuff. The main questions are:
- Basic node pricing seems to be $12, but the docs also say the cost is billed on a per-droplet basis. Which is it? If I want to use a $6 droplet for one of the nodes, will it still cost $12?
- Is a load balancer required? What's the downside of not having one in this case?
- I need a safe place to store file uploads. The app doesn't support S3 yet, so it needs to be something mounted as a filesystem. I read that I can't mount NFS on multiple instances, which would be a serious limitation. Is this correct? Is there anything I could share between nodes that I can mount as a filesystem?
- Are my app logs kept anywhere if I need to debug?
There are quite a few things I can compromise on:
- Managed Postgres: I already discussed this additional expense, so the "postgres" node will be gone.
- I can pay for Spaces ($5/month), though it's a waste because my storage usage is around 10 GB, definitely not 100.
- One of the machines currently has 2 GB of RAM and 2 vCPUs (needed for the big Excel files generated occasionally). I could split this in two, but from my understanding I would then need a load balancer, increasing the price.
- Is there any way to scale the "background worker" node to zero without spending money on a node to orchestrate that (KEDA or Knative)? That node is used very infrequently.
Based on my calculations, I would end up needing:
- 2 nodes, one at $12 and the other at $18
- 1 managed database
- Spaces
The total cost is $50/month. I would love to have the worker node be small ($6/month), but I can't figure out if DO allows that.
On top of this, I won't be able to increase availability unless I pay $12 for the load balancer, sadly.
App requirements:
- a filesystem for file uploads (at some point I'll move this to something S3-like)
- Postgres
- 1 GB of RAM
- the worker process and web process need to be split, so workers can't use up all the resources needed to serve the web
- static file serving and a proxy in front of the main application; I usually use nginx (does it need to be another node?)
- ideally, the filesystem for file uploads can be shared in some way between the worker and web nodes; it needs to be accessible by nginx for serving the files
- backups for the DB and Spaces
- HTTPS, currently managed by Let's Encrypt with the nginx plugin
Sorry for this huge wall of text. Based on all this, I'm not confident it's straightforward to keep the price in the same range: $50/month would be acceptable, but if I had to add a load balancer and a separate node for filesystem sharing, that would put me at $74/month, almost three times the current price.
On top of that, I'm uncertain about nginx: unless it's provided, I will need to put it on the same node as the main application, or on a separate node and pay another $12. But then
how does it access the file upload filesystem?
Please forgive any misuse of terms; I'm realizing that "node" might be the wrong word. As I said, I know very little about Kubernetes at this point.
EDIT: Based on the reading I'm doing, it seems I was missing the concept of a Pod.
While I don't believe the background worker necessarily belongs in the same Pod as the web app, I could put them in the same Pod and limit the resources of the background worker process. I would use a $12 droplet, set autoscaling to a max of 2 or even 3, and include a load balancer. This would still bring me close to a $24 + $20 "base price", but the app would be able to tolerate bursts.
The filesystem seems to be shareable within the Pod, so this could solve that problem.
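As a sketch of that idea (image names, commands, and resource numbers here are all placeholders), a single Pod can run the web and worker containers side by side, cap the worker's resources, and share an `emptyDir` volume for uploads:

```
# Hypothetical Pod spec: web + background worker in one Pod,
# sharing an ephemeral volume for file uploads.
apiVersion: v1
kind: Pod
metadata:
  name: rails-app
spec:
  volumes:
    - name: uploads
      emptyDir: {}            # shared between containers, but lost if the Pod is rescheduled
  containers:
    - name: web
      image: registry.example.com/rails-app:latest    # placeholder image
      volumeMounts:
        - name: uploads
          mountPath: /app/uploads
    - name: worker
      image: registry.example.com/rails-app:latest
      command: ["bundle", "exec", "sidekiq"]          # placeholder worker command
      resources:
        limits:               # cap the worker so it can't starve the web process
          cpu: "500m"
          memory: 512Mi
      volumeMounts:
        - name: uploads
          mountPath: /app/uploads
```

Two caveats: an `emptyDir` is ephemeral, so uploads that must survive rescheduling need a persistent volume (or the planned S3-like storage), and in practice you would wrap this Pod template in a Deployment rather than running a bare Pod.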
https://redd.it/z2q4cp
@r_devops