Terraform question. Do I need to worry about state management for a small Lab?
I am currently deploying a single VM through GitHub Actions, created by Terraform code.
I don't fully understand the problem of state management, at least not for my own small lab environment.
- Should I use Terraform Cloud for state management?
- Can I just store state in my GitHub repo? (Not ideal, I know, but it's a small lab.)
- What if I just don't do state management? (State is lost on each run if I don't save it somewhere.)
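One low-friction answer (not the only one) is a remote backend, which is a small change to the Terraform code. A minimal sketch using an S3 bucket, with hypothetical bucket and table names, looks like this:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-lab-tfstate"        # hypothetical bucket; must already exist
    key            = "lab/terraform.tfstate" # path of the state object within the bucket
    region         = "eu-west-1"
    dynamodb_table = "tf-locks"              # optional: enables state locking
    encrypt        = true
  }
}
```

After adding a block like this, `terraform init` offers to migrate any existing local state into the bucket. Other backends (Terraform Cloud, `azurerm`, `gcs`) work the same way; the point is simply that state survives between CI runs.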
https://redd.it/13jymsk
@r_devops
How did you handle burnout?
I'd like to read about experiences with burnout. I had two weeks where I couldn't focus, and I feel that my performance is lower than it was one or two months ago. I think that this is temporary, so I'm not worrying too much about it. However, like most developers before experiencing burnout, I was working more hours than usual due to anxiety about growth. Now, I'm trying to track my work hours to be more efficient. I prefer to work for 5 or 6 hours without social media or anything that can distract me. So, my questions are:
- What did burnout feel like for you?
- How did you manage the situation?
- What was your strategy for getting back to performing well?
https://redd.it/13kiqcm
@r_devops
Have you ever reused your company's code outside the company?
DevOps work daily produces code that is not part of the company's product, for example, a script to install Kubernetes or some automation on AWS. Have you ever used this code in a personal project or at another company?
https://redd.it/13k1ne8
@r_devops
Introducing Digger v4.0 - An Open Source GitOps tool for Terraform that runs within your existing CI/CD tool. (+ A brief history of our journey so far)
We have been building [Digger](https://github.com/diggerhq/digger) for over 2 years with multiple iterations in between. Today we are launching Digger v4.0 - An Open Source GitOps tool for Terraform.
A brief history of our journey:
🚜 [Digger Classic](https://app.digger.dev) (v1.0)
The initial focus was to build a “Heroku experience in your AWS”.
We wanted to handle everything: infrastructure, CI, monitoring, logs, domain support, etc. There were several design issues in this version:
* The split from services to environments confused users a lot.
* The several types of deployments (infrastructure vs. software) confused customers; they didn't know when an infrastructure deployment was needed versus a software one.
* The concept of an “environment target” for the whole infrastructure had its limitations, especially for customisation of existing infrastructure.
This led to the birth of AXE.
🪓 [AXE](https://dashboard.digger.dev) (v2.0)
With the AXE project we wanted to improve some UX points by focusing more on “apps”, which are individual pieces that a developer would want to deploy.
The main problem was that the ability to capture a whole environment was missing in this model; it was something users appreciated in Classic (albeit confusing).
While infrastructure generation was more flexible in this model, there were still pieces which didn't fit, such as the creation of VPCs and other common cross-app resources. This could have been solved with more thought and a notion of app connectivity.
The biggest problem was reliability. Since we were taking on the responsibility of creating infrastructure and building and deploying successfully, our success rate for users was not high. This affected our ability to attract more users and grow the product.
This subsequently led to the birth of v3.0, Trowel.
🧑🌾 [Trowel](https://dashboard.digger.dev/create) (v3.0)
In this version we limited our scope further to generating and provisioning infrastructure-as-code. The idea was to introduce a “build step” for Terraform - the user describes the infrastructure they want in a high-level config file, that is then compiled into Terraform. Or perhaps a “framework” to abstract away the implementation details, similar to Ruby on Rails.
We no longer touched application deployment, meaning that we could focus on the core proposition: infrastructure generation and customizability. This, however, did not seem to interest the end users we were speaking to. The challenging part was not so much writing the Terraform code but rather making sure it's provisioned correctly. The framework idea still looks promising, and we haven't fully explored it yet; but even with a perfect framework in place that produces Terraform, you'd still need something to take the output and make sure the changes are reflected in the target cloud account. This was the one missing piece in the toolchain we decided to further “zoom into”.
🧑🌾 [Digger](https://digger.dev) (v4.0)
Digger is an open-source alternative to Terraform Cloud. It makes it easy to run `terraform plan` and `apply` in the CI/CD platform you already have, such as GitHub Actions.
A class of CI/CD products for Terraform exists (Spacelift, Terraform Cloud, Atlantis), but they are more like separate full-stack CI systems. We think that having two CI systems for that doesn't make sense. The infrastructure of asynchronous jobs, logs, etc. can and should be reused. Stretching the "assembly language" parallel, this is a bit like the CPU for a yet-to-be-created "cloud PC".
So it boils down to making it possible to run Terraform in existing CI systems. This is what Digger does.
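For context, even without Digger, a bare-bones `terraform plan` job in GitHub Actions can be sketched as below; the secret names are illustrative, not prescribed by Digger:

```yaml
name: terraform
on: [pull_request]

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Init and plan
        env:
          # Illustrative secret names; use whatever your repo defines
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          terraform init
          terraform plan -no-color
```

What a plain workflow like this lacks is exactly what Digger layers on top: PR-level locks, plan persistence, and plan/apply previews in comments.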
Some of the features include:
* Any cloud: AWS, GCP, Azure
* Any CI: GitHub Actions, GitLab, Azure DevOps
* PR-level locks
* Plan / apply preview in comments
* Plan Persistence
* Workspaces support
* Terragrunt support
* PRO (Beta): Open Policy Agent & Conftest
* PRO (Beta): Drift detection (via Driftctl)
* PRO (Beta): Cost Estimates (via Infracost)
Do give it a try and let us know what you think. [Here](https://github.com/diggerhq/digger/blob/main/CONTRIBUTING.md) is a link to the contribution guide, if you are interested.
https://redd.it/13jw53s
@r_devops
How to renegotiate salary as a DevOps engineer?
I started my first IT job last June as an observability/DevOps/systems admin (our team is kinda weird). I only have an associate's degree in network security and some side projects that actually correlated really well with the job tasks.
I quickly became the SME for most of our tools on the team; within 6 months I was being urged to apply for the engineer role on the same team. I failed the interview the first time, but I interviewed again recently and just got the offer, took it, and started the role as of today.
I started at the company at a simple desk job, and my company has this 15% rule (that I'm pretty sure a lot of big corporate companies have) where you can't get more than a 15% compensation increase per promotion. Everyone I've talked to, no matter how big the move or how far up the chain they were, told me they weren't able to negotiate any more money, but they value the title on their resume. I took it for this same reason without even attempting to negotiate.
I got the 15% increase but also lost a 10% late-shift premium and any chance at overtime by switching to salaried. So I pretty much got no extra salary, or maybe even a slight pay cut. But I love the job, and I've wanted the official recognition for what I do for a long time.
This has led to me having a comically low salary as an observability engineer (I would say about half as much as I should be making, and 20% less than they hire brand new Admin I's at.)
I am getting by fine and I've been underpaid for more than a year so I'm kinda used to it but now that I'm an engineer I feel like I should be able to get my salary rightsized?
I just don't know how to go about getting what I want. The best argument I can think of is that my salary is comically low compared to admins and engineers on the same exact team, in the same role, at the same company as me. I don't know if that is something I should mention in negotiation, because it doesn't market me; it just compares me to my team. I feel really weird boasting about myself, and I don't have much experience, certs, or education to back myself up, just my work. If I have to negotiate with the business side and they don't really understand or know what I do, I'll definitely flop.
I need some advice on how to go about renegotiating my salary.
Thank you,
https://redd.it/13itlyf
@r_devops
Looking for a DevOps engineer with 4-6 years' experience in Bangalore, India
My company recently posted a job opening for a DevOps engineer with 4-6 years' experience. Please DM me if you need further details.
https://redd.it/13kodx9
@r_devops
Analyzing AWS EC2 Cloud Security Issues with Selefra GPT
### **Introduction:**
In today's digital landscape, cloud security is a paramount concern for organizations leveraging cloud computing services. With the increasing complexity of cloud environments, it becomes crucial to have effective tools and strategies in place to identify and address potential security vulnerabilities. In this article, we will explore how Selefra GPT, an advanced policy-as-code tool, can be utilized to analyze and mitigate AWS EC2 cloud security issues.
1. **Understanding Selefra GPT:**
Selefra GPT is an open-source policy-as-code software that combines the power of machine learning and infrastructure analysis. It leverages the capabilities of GPT models to provide comprehensive analytics for multi-cloud and SaaS environments, including AWS EC2. By utilizing Selefra GPT, organizations can gain valuable insights into their cloud infrastructure's security posture and make informed decisions to enhance their overall security.
2. **Identifying AWS EC2 Security Risks:**
Selefra GPT enables security teams to analyze AWS EC2 instances and identify potential security risks. It utilizes its policy-as-code approach to define policies using SQL and YAML syntax, making it easier for security practitioners to express complex security rules. With Selefra GPT, security teams can perform comprehensive security assessments, including checking for open ports, insecure configurations, outdated software versions, and more.
3. **Customizing Security Policies:**
One of the key advantages of Selefra GPT is its flexibility in customizing security policies. Organizations can tailor their security policies according to their specific requirements and compliance standards. Whether it's enforcing encryption protocols, implementing access controls, or monitoring resource configurations, Selefra GPT allows security teams to define and manage policies that align with their unique security objectives.
4. **Continuous Security Monitoring:**
AWS EC2 environments are dynamic, with instances being provisioned, modified, and terminated frequently. Selefra GPT enables continuous security monitoring by regularly analyzing the AWS EC2 environment and detecting any changes or deviations from defined security policies. This proactive approach ensures that security issues are promptly identified and addressed, reducing the window of vulnerability.
5. **Remediation and Compliance:**
Once security issues are identified, Selefra GPT provides actionable insights and recommendations to remediate the vulnerabilities. Security teams can prioritize their efforts based on the severity of the issues and follow the recommended steps to mitigate the risks. Furthermore, Selefra GPT helps organizations maintain compliance with industry standards and regulations by continuously evaluating the AWS EC2 environment against the defined security policies.
### Install
First, installing Selefra is very simple. You just need to execute the following commands:
```bash
brew tap selefra/tap
brew install selefra/tap/selefra
mkdir selefra-demo && cd selefra-demo && selefra init
```
### Choose provider
Then, you need to choose the provider you need in the shell, such as AWS:
```bash
[Use arrows to move, Space to select, and Enter to complete the selection]
[✔] AWS # We choose AWS installation
[ ] azure
[ ] GCP
[ ] k8s
```
### Configuration
**Configure AWS:**
We have written a detailed configuration [document](https://www.selefra.io/docs/providers-connector/aws) in advance; you can set up your AWS credentials there.
**Configure Selefra:**
After initialization, you will get a `selefra.yaml` file. Next, configure this file to use the GPT functionality:
```yaml
selefra:
  name: selefra-demo
  cli_version: latest
  openai_api_key: <Your OpenAI API Key>
  openai_mode: gpt-3.5
  openai_limit: 10

providers:
  - name: aws
    source: aws
    version: latest
```
### Running
You can use environment variables to store the openai_api_key, openai_mode, and openai_limit parameters. Then, you can start the GPT analysis by executing the following command:
```bash
selefra gpt "Please help me analyze the vulnerabilities in AWS S3?"
```
Finally, you will get results similar to the animated image below:

### **Conclusion:**
Securing AWS EC2 instances is critical for organizations to protect their sensitive data and maintain the integrity of their cloud infrastructure. Selefra GPT empowers security teams with advanced analytics and policy-as-code capabilities to analyze, identify, and remediate security issues in AWS EC2 environments. By leveraging the power of machine learning and policy automation, Selefra GPT enables organizations to enhance their cloud security posture and build a robust defense against potential threats.
https://redd.it/13kos1v
@r_devops
A detailed article on Datadog's $5M outage
There’s lots of food for thought in this outage...!
https://newsletter.pragmaticengineer.com/p/inside-the-datadog-outage
https://redd.it/13kq12p
@r_devops
What skills do I need to acquire to be devops engineer?
Hi, I was hoping to get a list of tools and tech that I need to learn to become a DevOps engineer. I have learned Docker so far (networking, making containers, adding volumes).
Also, if you could tell me how I can learn things that require a credit card, like AWS, for free, that would help, as I am a bit short on money and don't have a credit card.
https://redd.it/13kph6k
@r_devops
Pod Disruption Budgets (PDB) in Kubernetes
PDBs: what they are, why they're important, and how to use them effectively.
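As a concrete taste of the topic, a minimal PDB that keeps at least two replicas of a hypothetical `app: web` workload available during voluntary disruptions (node drains, cluster upgrades) looks like:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2        # alternatively, specify maxUnavailable
  selector:
    matchLabels:
      app: web           # hypothetical label; must match the target pods
```

With this in place, `kubectl drain` refuses to evict pods beyond the budget, which is what makes PDBs matter during node maintenance.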
https://medium.com/geekculture/kubernetes-pod-disruption-budgets-pdb-b74f3dade6c1
https://redd.it/13ko55u
@r_devops
Automating the pain away: Solving common issues to improve team workflow
https://www.offerzen.com/blog/automating-to-improve-team-workflow
Thought this was interesting as they dig into some tools they use to better automate local dev workflows.
I hadn't heard of Plop or zx before. Has anyone used them/alternatives?
https://redd.it/13ktqqy
@r_devops
Welcome to our Enterprise Developer Survey!
We have a new, short survey in order to understand the technologies and tools that Enterprise Developers use. Are you a software developer, a database administrator, a data scientist, an engineer, an architect or involved in DevOps and SRE? Help us and make an impact on the developer ecosystem. Start here
https://redd.it/13ku7dx
@r_devops
Programming without a stack trace: When abstractions become illusions
This [insightful article](https://architectelevator.com/architecture/stacktrace-abstraction/) by [Gregor Hohpe](https://linkedin.com/in/ghohpe) covers:
* Evolution of programming abstractions.
* Challenges of cloud abstractions.
* Importance of tools like stack traces for debugging, especially in distributed systems.
Gregor emphasizes that effective cloud abstractions are crucial but tricky to get right. He points out that debugging at the abstraction level can be complex and underscores the value of good error messages and observability.
The part about the "unhappy path" particularly resonated with me:
>The unhappy path is where many abstractions struggle. Software that makes building small systems easy but struggles with real-world development scenarios like debugging or automated testing is an unwelcome version of “demoware” - it demos well, but doesn’t actually work in the real world. And there’s no unlock code. ... I propose the following test for vendors demoing higher-level development systems:
>
>1. Ask them to enter a typo into one of the fields where the developer is expected to enter some logic.
>
>2. Ask them to leave the room for two minutes while we change a few random elements of their demo configuration. Upon return, they would have to debug and figure out what was changed.
>
>Needless to say, no vendor ever picked the challenge.
# Why it interests me
I'm one of the creators of [Winglang](https://github.com/winglang/wing), an open-source programming language for the cloud that allows developers to work at a higher level of abstraction.
We set a goal for ourselves to provide good debugging experience that will allow developers to debug cloud applications in the context of the logical structure of the apps.
After reading this article I think we can rephrase the goal as being able to easily pass Gregor's vendor test from above :)
https://redd.it/13kz8y5
@r_devops
The Architect Elevator
Programming without a stack trace: When abstractions become illusions
As the complexity of our platforms increases, we keep looking for better abstractions. Cloud compilers might help, but only if they include one key feature: the stack trace
DataDog: Where does it hurt?
As we all know, DataDog is expensive:
"The DataDog pricing model is actually pretty easy. For 500 hosts or less, you just sign over your company and all its assets to them. If >500 hosts, you need to additionally raise VC money." - wingerd33
But there are a number of different dimensions to their model and I'd like to better understand whether everyone is getting hit on the same axis.
For instance, APM charges for indexed spans at $2.55/M, but you get 1M spans included per APM host. Are the big bills primarily due to ingestion costs at scale, or to scaling out the number of hosts? Is there one particular gotcha, or is it evenly spread? If I limit how many APM spans / log lines I let out of my system, is that an effective way to reduce my spend?
I made a poll with some of the main things I've heard, but the answer can be "it's complicated" and maybe better as a comment. https://forcerank.it/invite/6b853c3bd8472ace
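For a back-of-the-envelope feel for the indexed-span dimension, here is a tiny calculator using only the figures quoted above ($2.55 per million indexed spans, 1M spans included per APM host). The function name is mine, and the prices are from the post, not necessarily Datadog's current list pricing:

```python
def apm_span_overage(hosts: int, indexed_spans_millions: float,
                     price_per_million: float = 2.55,
                     included_millions_per_host: float = 1.0) -> float:
    """Estimate the indexed-span overage bill: spans beyond the
    per-host allowance are billed at price_per_million."""
    included = hosts * included_millions_per_host
    overage_millions = max(0.0, indexed_spans_millions - included)
    return overage_millions * price_per_million

# 20 hosts emitting 100M indexed spans: 80M billable spans, roughly $204/month
print(apm_span_overage(hosts=20, indexed_spans_millions=100))
```

The takeaway from running a few scenarios: the included allowance scales linearly with hosts, so the span dimension only bites when span volume grows much faster than the fleet.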
https://redd.it/13ky2iq
@r_devops
Do you actually need to "Know" Linux to work in DevOPS?
I've gotten plenty of DevOps interviews, and I even work as a DevOps Engineer right now - and I would say I only use Linux about 10% of the time, when I'm writing pipelines for GitHub Actions. But that's literally just writing some CLI commands, nothing more. It's incredibly easy, and if I didn't know anything about Linux, I could learn what I need for my job within 2-3 days.
Yet on the internet everybody and their grandmothers are saying that you need to know a ton of Linux to make it in DevOps, that you need to read The Linux Programming Interface, that you need to know everything inside out.
So question 1:
Are people just lying, or does it depend from job to job, or...? My experience is that you can get by with knowing very little.
Question 2:
I've done a bunch of random tasks using Linux (for Kodekloud, just to get more adept).
Just to list 6-7 random ones:
1) Installed & configured PostgreSQL databases, users, and their permissions
2) Created Linux users with non-interactive shells, and Linux users with expiration dates
3) Managed incoming & outgoing connections for Apache & Nginx using iptables
4) Used sed & awk to manipulate strings in bash scripts
5) Limited access to webservers by securing URLs with PAM authentication - requiring OS users to authenticate their SSL connection before connecting
6) Implemented passwordless SSH authentication for scripts
7) Configured Apache servers - controlling ports, changing headers, hiding version numbers, and redirecting URLs
Just a bunch of random tasks like this - quite a few more than this. I google my way through like any good engineer. If you ask me anything about the kernel, or how Linux actually works, or what the differences between the distros are, I wouldn't have a clue. What is /etc vs /home vs all those other random folders? No idea.
So do I "know" Linux? How much do I need to know to be able to say I "know" Linux? And why do all these subreddits say you need to "know" Linux when the only time I ever use Linux in my job is when I'm writing very basic CLI commands, e.g. for a pipeline in GitHub Actions - which is less than 10% of my job?
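For what it's worth, item 4 in the list above (sed & awk string manipulation) has a close analogue in any scripting language; here's an illustrative Python sketch of the kind of field extraction those one-liners are usually written for. The log format and function name are invented for the example:

```python
import re

def extract_failed_logins(lines):
    """Pull (user, ip) pairs out of sshd-style log lines -
    roughly what a sed/awk one-liner is often used for.
    The log format here is made up for illustration."""
    pattern = re.compile(r"Failed password for (\w+) from ([\d.]+)")
    return [m.groups() for line in lines if (m := pattern.search(line))]

logs = [
    "May 18 10:00:01 host sshd[123]: Failed password for root from 10.0.0.5",
    "May 18 10:00:02 host sshd[124]: Accepted password for alice from 10.0.0.9",
]
print(extract_failed_logins(logs))  # [('root', '10.0.0.5')]
```

Which arguably supports the poster's point: the day-to-day tasks are mostly text wrangling and service configuration, not kernel internals.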
https://redd.it/13l1cxm
@r_devops
Posted by u/waste2muchtime
Trigger Jenkins pipelines via Ansible
Continuing on the same topic from: https://www.reddit.com/r/devops/comments/13k9wy7/infrastructure_as_code_trying_to_setup_an/
Is it possible to trigger Jenkins pipelines via Ansible for deploying Istio & NGINX? I know it works well the other way around, running Ansible scripts via Jenkins.
Let me know your thoughts, people.
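One common pattern (not the only one) is to have Ansible call Jenkins' remote-trigger endpoint (`/job/<name>/build?token=...`, available when "Trigger builds remotely" is enabled on the job), e.g. via Ansible's `uri` module. A minimal Python sketch of building that call - the host, job name, and token below are placeholders:

```python
from urllib.parse import quote

def jenkins_trigger_url(base_url: str, job: str, token: str) -> str:
    """Build the remote-trigger URL for a Jenkins job.
    Requires 'Trigger builds remotely' enabled on the job config."""
    return f"{base_url.rstrip('/')}/job/{quote(job)}/build?token={quote(token)}"

url = jenkins_trigger_url("https://jenkins.example.com", "deploy-istio", "s3cret")
print(url)  # https://jenkins.example.com/job/deploy-istio/build?token=s3cret

# From Ansible, the equivalent is roughly:
#   - ansible.builtin.uri:
#       url: "{{ jenkins_url }}/job/deploy-istio/build?token={{ token }}"
#       method: POST
```

An actual POST would also need authentication (e.g. a user/API-token pair) depending on how the Jenkins instance is secured.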
https://redd.it/13l0nyh
@r_devops
Posted by u/Mountain_Ad_1548
Best tips for reducing cloud costs?
Over time I've learned many tricks from other engineers on where and how to reduce costs by using niche parts of cloud vendors. I'm mainly focused on AWS, but some of the tips are cloud-agnostic. Some of them might be basic, but they're nonetheless important. I'd love for you to share yours so we could all learn from each other. Here are mine:
General:
- Cache external dependencies locally to reduce network transfer costs. For example - pull-through docker image registries.
- Prefer spot instances over on-demand wherever workloads are stateless and interruption-tolerant.
- Use automated scaling solutions to power off dev workloads during weekends if possible.
- Filter your logs, metrics and traces before they reach your monitoring solution. In almost all solutions, SaaS or not, you're being charged for their storage or ingestion.
AWS:
- Use Reserved Instances and Savings Plans. Consider using "smart" automated RI SaaS solutions which are based on your existing workloads.
- Prefer newer-generation EC2 instances; they are generally cheaper for the same performance. The same goes for storage: use gp3 instead of gp2.
- Use S3 storage classes to significantly reduce costs on less frequently accessed buckets.
- When private subnets that access the internet span multiple availability zones, give each AZ its own NAT gateway; routing all traffic through a single one adds cross-AZ data transfer costs.
- Move away from Classic load balancers as they are deprecated and cost more, use Network or Application load balancers instead.
- Move away from VPC peering to Transit gateways (or Network Manager). Peering is costlier when there are many VPCs.
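To put a number on the gp2-to-gp3 tip, here is a quick savings estimate. The per-GB prices are my assumptions based on typical us-east-1 list pricing (gp2 around $0.10/GB-month, gp3 around $0.08/GB-month) and should be checked against the current AWS price sheet:

```python
def gp3_monthly_savings(total_gb: float,
                        gp2_price: float = 0.10,
                        gp3_price: float = 0.08) -> float:
    """Estimated monthly saving from migrating gp2 volumes to gp3.
    Baseline storage only; gp3 IOPS/throughput add-ons not modeled."""
    return total_gb * (gp2_price - gp3_price)

# 5 TB of gp2 volumes: roughly $102.40/month saved at the assumed prices
print(round(gp3_monthly_savings(5 * 1024), 2))
```

At fleet scale the migration is usually a near-free win, since gp3 also decouples IOPS and throughput from volume size.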
Kubernetes:
- Consolidate your pods onto fewer nodes. Leave only as much headroom on your nodes as you actually intend to use.
- Don't overcommit resources. Pod requests must be tuned over time to avoid over-provisioning.
- If possible, prefer keeping nodes in a single availability zone to avoid cross-AZ network transfer costs between them - preferably only outside production, where the resilience trade-off is acceptable.
https://redd.it/13l6rde
@r_devops
Posted by u/Jatalocks2
GitHub Actions vs Cloud Build
We had to build a CI pipeline and figured Cloud Build would be easy since we're on GCP. However, to me it is a pain in the ass. Installing dependencies in particular seems impossible. I gave GitHub Actions a try, and setting up the same pipeline there was ten times faster. Is it just me, or is Cloud Build just shitty for some use cases?
https://redd.it/13l8dr6
@r_devops
Posted by u/themouthoftruth