Reddit DevOps
270 subscribers
5 photos
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Open-source IAM Access Visualizer

Hey folks!

Recently created an IAM access visualizer that displays access relationships between AWS identities and resources.
It’s part of an open source cloud security platform that we maintain.

Some potential use cases we wanted to address:

Which IAM roles can become effective admin?
Which IAM roles can read data on your sensitive S3 bucket?
What's the blast radius of an EC2 instance compromise?
What IAM privilege escalations exist in your environment?


Would love your feedback on whether something like this is helpful for your cloud IAM workflows!


Click around the Sandbox Environment
Check out our Loom Demo
Check out the Github Repo

https://redd.it/13k8qao
@r_devops
Creating ServiceNow requests via Ansible: is it possible?

I am currently working on updating our configuration management system and want to explore the possibility of creating ServiceNow requests via Ansible.

Are there APIs available from ServiceNow that we can use to automate request creation?
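For context, ServiceNow exposes a REST Table API that Ansible can call directly. Below is a minimal sketch using the core `ansible.builtin.uri` module; the instance hostname, credentials, table, and field values are all placeholders, not taken from any real environment:

```yaml
# Hedged sketch: create a ServiceNow record via the REST Table API.
# Instance URL, credentials, table, and fields are placeholders.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a ServiceNow incident via the Table API
      ansible.builtin.uri:
        url: "https://YOUR_INSTANCE.service-now.com/api/now/table/incident"
        method: POST
        user: "{{ snow_user }}"
        password: "{{ snow_password }}"
        force_basic_auth: true
        body_format: json
        body:
          short_description: "Created from Ansible"
          urgency: "3"
        status_code: 201
      register: snow_result

    - name: Show the sys_id of the new record
      ansible.builtin.debug:
        msg: "{{ snow_result.json.result.sys_id }}"
```

There is also a certified `servicenow.itsm` Ansible collection with purpose-built modules, which may be a better fit than hand-rolled REST calls.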


Cheers!!!

https://redd.it/13kd8xk
@r_devops
Vagrant alternatives?

I really like Vagrant, but it has a severe flaw: it's painfully slow on Windows, which makes it basically unusable for me. Is there a good alternative, or a way to make it faster? I know there's Docker, but since Docker Desktop isn't free anymore I'd rather not use it.

https://redd.it/13kckev
@r_devops
Terraform question. Do I need to worry about state management for a small Lab?

I am currently deploying a single VM through GitHub Actions; the VM gets created by Terraform code.

I don't fully understand the problem of state management, at least not for my own small lab environment.

- Should I use Terraform Cloud for state management?

- Can I just store state in my GitHub repo (not ideal, I know, but for a small lab)?

- What if I just don't do state management? (State gets lost on each run if I don't save it somewhere.)
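One low-effort middle ground for a lab is a remote backend in an S3 bucket you already control. A sketch (the bucket name and region are placeholders; the bucket must already exist, and enabling versioning on it gives you state history):

```bash
# Write a minimal S3 backend block (placeholder names), then re-initialize.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-lab-tf-state"
    key    = "lab/terraform.tfstate"
    region = "us-east-1"
  }
}
EOF
# Then run `terraform init` once; Terraform offers to migrate any local state.
```

Without any backend, state written inside an ephemeral GitHub Actions runner disappears after each run, so Terraform loses track of the VM and tries to create it again. Committing tfstate to the repo does work for a small lab, but state can contain secrets in plain text, which is the usual reason it's discouraged.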

https://redd.it/13jymsk
@r_devops
How did you handle burnout?

I'd like to read about experiences with burnout. I had two weeks where I couldn't focus, and I feel that my performance is lower than it was one or two months ago. I think that this is temporary, so I'm not worrying too much about it. However, like most developers before experiencing burnout, I was working more hours than usual due to anxiety about growth. Now, I'm trying to track my work hours to be more efficient. I prefer to work for 5 or 6 hours without social media or anything that can distract me. So, my questions are:
- How did you feel with burnout?
- How did you manage this situation?
- What was your strategy for getting back to performing well?

https://redd.it/13kiqcm
@r_devops
Have you ever reused your company's code outside the company?

DevOps work produces code every day that is not part of the company's product, for example a script to install Kubernetes or some automation on AWS. Have you ever used this code in a personal project or at another company?

https://redd.it/13k1ne8
@r_devops
Introducing Digger v4.0 - An Open Source GitOps tool for Terraform that runs within your existing CI/CD tool. (+ A brief history of our journey so far)

We have been building [Digger](https://github.com/diggerhq/digger) for over 2 years with multiple iterations in between. Today we are launching Digger v4.0 - An Open Source GitOps tool for Terraform.

A brief history of our journey:

🚜 [Digger Classic](https://app.digger.dev) (v1.0)

The initial focus was to build a "Heroku experience in your AWS".

We wanted to handle everything: infrastructure, CI, monitoring, logs, domain support, etc. There were several design issues in this version:

The split between services and environments confused users a lot.

Several types of deployments (infrastructure, software) confused customers; they didn't know when an infrastructure deployment was needed versus a software deployment.

The concept of an "environment target" for the whole infrastructure had its limitations, especially for customisation of existing infrastructure.

This led to the birth of Axe,

🪓 [AXE](https://dashboard.digger.dev) (v2.0)

With the AXE project we wanted to improve some UX points by focusing more on "apps", the individual pieces that a developer would want to deploy.

The main drawback was that the ability to capture a whole environment was missing in this model; it was something that was appreciated in Classic (albeit confusing).

While infrastructure generation was more flexible in this model, there were still pieces which didn't fit, such as the creation of VPCs and other common cross-app resources. This could have been solved with more thought and a notion of app connectivity.

The biggest problem was reliability. Since we were taking on the responsibility of creating infrastructure and building and deploying successfully, our success rate for users was not high. This affected our ability to attract more users and grow the product.

This subsequently led to the birth of v3.0, Trowel,

🧑‍🌾 [Trowel](https://dashboard.digger.dev/create) (v3.0)

In this version we limited our scope further to generating and provisioning infrastructure-as-code. The idea was to introduce a “build step” for Terraform - the user describes the infrastructure they want in a high-level config file, that is then compiled into Terraform. Or perhaps a “framework” to abstract away the implementation details, similar to Ruby on Rails.

We no longer touched application deployment, meaning that we could focus on the core proposition: infrastructure generation and customizability. This, however, did not seem to interest the end users we were speaking to. The challenging part was not so much writing the Terraform code but rather making sure it's provisioned correctly. The framework idea still looks promising and we haven't fully explored it yet; but even with a perfect framework in place that produces Terraform, you'd still need something to take the output and make sure the changes are reflected in the target cloud account. This was the missing piece in the toolchain we decided to further "zoom into".

🧑‍🌾 [Digger](https://digger.dev) (v4.0)

Digger is an open-source alternative to Terraform Cloud. It makes it easy to run terraform plan and apply in the CI/CD platform you already have, such as GitHub Actions.

A class of CI/CD products for Terraform exists (Spacelift, Terraform Cloud, Atlantis), but they are more like separate full-stack CI systems. We think that having two CI systems for this doesn't make sense: the infrastructure for asynchronous jobs, logs, etc. can and should be reused. Stretching the "assembly language" parallel, this is a bit like the CPU for a yet-to-be-created "cloud PC".

So it boils down to making it possible to run Terraform in existing CI systems. This is what Digger does.
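To illustrate the shape of that idea (this workflow is a hypothetical sketch: the action reference and its inputs are assumptions for illustration, not taken from the Digger docs), Digger would live inside an ordinary GitHub Actions workflow instead of a separate CI system:

```yaml
# Hypothetical sketch only: the `diggerhq/digger@vX` reference and its
# inputs are placeholders; see the Digger repo for the real workflow.
name: terraform-ci
on:
  pull_request:
  issue_comment:
    types: [created]   # so "digger apply"-style PR comments can trigger runs

jobs:
  digger:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # post plan output back as PR comments
    steps:
      - uses: actions/checkout@v3
      - name: Run Digger
        uses: diggerhq/digger@vX   # placeholder ref
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

The point of this shape is that plan/apply runs as a plain job in your existing runner pool, with logs and history in the CI you already pay for.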

Some of the features include:

* Any cloud: AWS, GCP, Azure
* Any CI: GitHub Actions, GitLab, Azure DevOps
* PR-level locks
* Plan / apply preview in comments
* Plan Persistence
* Workspaces support
* Terragrunt support
* PRO (Beta): Open Policy Agent & Conftest
* PRO (Beta): Drift detection (via Driftctl)
* PRO (Beta): Cost Estimates (via

How to renegotiate salary as a DevOps engineer?

I started my first IT job last June, as an observability/DevOps/systems admin (our team is kinda weird). I only have an associate's degree in network security and some side projects that actually correlated really well with the job tasks.

I quickly became the SME with most of our tools on the team, within 6 months I was being urged to apply for the engineer role on the same team. I failed the interview the first time but I interviewed again recently and just got the offer, took it and started the role as of today.

I started at the company in a simple desk job, and my company has this 15% rule (that I'm pretty sure a lot of big corporate companies have) where you can't get more than a 15% compensation increase per promotion. Everyone I've talked to, no matter how big the move and no matter how far up the chain they were, told me they weren't able to negotiate any more money, but valued the title on their resume. I took it for the same reason without even attempting to negotiate.

I got the 15% increase but also lost a 10% late shift premium and any chance at overtime switching to salaried. So pretty much got no extra salary, or maybe even a slight pay cut. But I love the job, and I've wanted the official recognition for what I do for a long time.

This has led to me having a comically low salary as an observability engineer (I would say about half as much as I should be making, and 20% less than they hire brand-new Admin I's at).

I am getting by fine and I've been underpaid for more than a year so I'm kinda used to it but now that I'm an engineer I feel like I should be able to get my salary rightsized?


I just don't know how to go about getting what I want. The best argument I can think of is how comically low my salary is compared to admins and engineers in the same exact team, role, and company as me. I don't know if that is something I should mention in negotiation or not, because it's not something that markets me; it just compares me to my team. I feel really weird boasting about myself, and I don't have much experience, certs, or education to back myself up, just my work. If I have to negotiate with the business side and they don't really understand or know what I do, I'll definitely flop.

I need some advice on how to go about renegotiating my salary.

Thank you,

https://redd.it/13itlyf
@r_devops
Looking for a DevOps engineer with 4-6 years' experience in Bangalore, India

My company recently posted a job opening for a DevOps engineer with 4-6 years' experience. Please DM me if you need further details.

https://redd.it/13kodx9
@r_devops
Analyzing AWS EC2 Cloud Security Issues with Selefra GPT

### Introduction
In today's digital landscape, cloud security is a paramount concern for organizations leveraging cloud computing services. With the increasing complexity of cloud environments, it becomes crucial to have effective tools and strategies in place to identify and address potential security vulnerabilities. In this article, we will explore how Selefra GPT, an advanced policy-as-code tool, can be utilized to analyze and mitigate AWS EC2 cloud security issues.
1. **Understanding Selefra GPT:**
Selefra GPT is an open-source policy-as-code software that combines the power of machine learning and infrastructure analysis. It leverages the capabilities of GPT models to provide comprehensive analytics for multi-cloud and SaaS environments, including AWS EC2. By utilizing Selefra GPT, organizations can gain valuable insights into their cloud infrastructure's security posture and make informed decisions to enhance their overall security.
2. **Identifying AWS EC2 Security Risks:**
Selefra GPT enables security teams to analyze AWS EC2 instances and identify potential security risks. It utilizes its policy-as-code approach to define policies using SQL and YAML syntax, making it easier for security practitioners to express complex security rules. With Selefra GPT, security teams can perform comprehensive security assessments, including checking for open ports, insecure configurations, outdated software versions, and more.
3. **Customizing Security Policies:**
One of the key advantages of Selefra GPT is its flexibility in customizing security policies. Organizations can tailor their security policies according to their specific requirements and compliance standards. Whether it's enforcing encryption protocols, implementing access controls, or monitoring resource configurations, Selefra GPT allows security teams to define and manage policies that align with their unique security objectives.
4. **Continuous Security Monitoring:**
AWS EC2 environments are dynamic, with instances being provisioned, modified, and terminated frequently. Selefra GPT enables continuous security monitoring by regularly analyzing the AWS EC2 environment and detecting any changes or deviations from defined security policies. This proactive approach ensures that security issues are promptly identified and addressed, reducing the window of vulnerability.
5. **Remediation and Compliance:**
Once security issues are identified, Selefra GPT provides actionable insights and recommendations to remediate the vulnerabilities. Security teams can prioritize their efforts based on the severity of the issues and follow the recommended steps to mitigate the risks. Furthermore, Selefra GPT helps organizations maintain compliance with industry standards and regulations by continuously evaluating the AWS EC2 environment against the defined security policies.
### Install
First, installing Selefra is very simple. You just need to execute the following command:
```bash
brew tap selefra/tap
brew install selefra/tap/selefra
mkdir selefra-demo && cd selefra-demo && selefra init
```
### Choose provider
Then, you need to choose the provider you need in the shell, such as AWS:
```bash
[Use arrows to move, Space to select, and Enter to complete the selection]
[x] AWS # We choose AWS installation
[ ] azure
[ ] GCP
[ ] k8s
```
### Configuration
**Configure AWS:**
We have written a detailed configuration [document](https://www.selefra.io/docs/providers-connector/aws) in advance; you can configure your AWS credentials through it.
**Configure Selefra:**
After initialization, you will get a selefra.yaml file. Next, you need to configure this file to use the GPT functionality:
```yaml
selefra:
  name: selefra-demo
  cli_version: latest
  openai_api_key: <Your OpenAI API Key>
  openai_mode: gpt-3.5
  openai_limit: 10

providers:
  - name: aws
    source: aws
    version: latest
```
### Running
You can use environment variables to store the openai_api_key, openai_mode, and openai_limit parameters. Then, you can start the GPT analysis by executing the following command:
```bash
selefra gpt "Please help me analyze the vulnerabilities in AWS S3?"
```
Finally, you will get results similar to the animated image below:
![Untitled](https://s3-us-west-2.amazonaws.com/secure.notion-static.com/68e1f6f3-88a4-4744-94c5-9755b84a8205/Untitled.gif)
### Conclusion
Securing AWS EC2 instances is critical for organizations to protect their sensitive data and maintain the integrity of their cloud infrastructure. Selefra GPT empowers security teams with advanced analytics and policy-as-code capabilities to analyze, identify, and remediate security issues in AWS EC2 environments. By leveraging the power of machine learning and policy automation, Selefra GPT enables organizations to enhance their cloud security posture and build a robust defense against potential threats.

https://redd.it/13kos1v
@r_devops
What skills do I need to acquire to become a DevOps engineer?

Hi, I was hoping to get a list of tools and technologies that I need to learn to become a DevOps engineer. I have learned Docker so far (Docker networking, making containers, adding volumes).

Also, if you could tell me how to learn things that require a credit card, like AWS, for free, that would help, as I am a bit short on money and don't have a credit card.

https://redd.it/13kph6k
@r_devops
Automating the pain away: Solving common issues to improve team workflow

https://www.offerzen.com/blog/automating-to-improve-team-workflow


Thought this was interesting as they dig into some tools they use to better automate local dev workflows.

I hadn't heard of Plop or zx before. Has anyone used them/alternatives?

https://redd.it/13ktqqy
@r_devops
Welcome to our Enterprise Developer Survey!

We have a new, short survey to understand the technologies and tools that enterprise developers use. Are you a software developer, a database administrator, a data scientist, an engineer, an architect, or involved in DevOps and SRE? Help us make an impact on the developer ecosystem. Start here

https://redd.it/13ku7dx
@r_devops
Programming without a stack trace: When abstractions become illusions

This [insightful article](https://architectelevator.com/architecture/stacktrace-abstraction/) by [Gregor Hohpe](https://linkedin.com/in/ghohpe) covers:

* Evolution of programming abstractions.
* Challenges of cloud abstractions.
* Importance of tools like stack traces for debugging, especially in distributed systems.

Gregor emphasizes that effective cloud abstractions are crucial but tricky to get right. He points out that debugging at the abstraction level can be complex and underscores the value of good error messages and observability.

The part about the "unhappy path" particularly resonated with me:

>The unhappy path is where many abstractions struggle. Software that makes building small systems easy but struggles with real-world development scenarios like debugging or automated testing is an unwelcome version of “demoware” - it demos well, but doesn’t actually work in the real world. And there’s no unlock code. ... I propose the following test for vendors demoing higher-level development systems:
>
>1. Ask them to enter a typo into one of the fields where the developer is expected to enter some logic.
>
>2. Ask them to leave the room for two minutes while we change a few random elements of their demo configuration. Upon return, they would have to debug and figure out what was changed.
>
>Needless to say, no vendor ever picked the challenge.

# Why it interests me

I'm one of the creators of [Winglang](https://github.com/winglang/wing), an open-source programming language for the cloud that allows developers to work at a higher level of abstraction.

We set a goal for ourselves to provide a good debugging experience that allows developers to debug cloud applications in the context of the logical structure of their apps.

After reading this article I think we can rephrase the goal as being able to easily pass Gregor's vendor test from above :)

https://redd.it/13kz8y5
@r_devops
DataDog: Where does it hurt?

As we all know, DataDog is expensive:
"The DataDog pricing model is actually pretty easy. For 500 hosts or less, you just sign over your company and all its assets to them. If >500 hosts, you need to additionally raise VC money." - wingerd33
But there are a number of different dimensions to their model, and I'd like to better understand whether everyone is getting hit on the same axis.
For instance, APM charges for indexed spans at $2.55/M, but you get 1M spans included per APM host. Are the big bills primarily due to ingestion costs at scale, or because of scale-out in the number of hosts? Is there one particular gotcha, or is it evenly spread? If I limit how many APM spans / log lines I let out of my system, is that going to be an effective way to reduce my spend?
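To make the span dimension concrete, here's a back-of-envelope calculation using the numbers above ($2.55 per million indexed spans, 1M included per APM host); the host count and span volume are made up:

```bash
# Back-of-envelope APM overage: $2.55 per million indexed spans,
# 1M spans included per APM host (host/span numbers are hypothetical).
hosts=20
spans_m=100                 # total indexed spans, in millions
included_m=$hosts           # 1M included per host
overage_m=$(( spans_m > included_m ? spans_m - included_m : 0 ))
cost=$(awk -v o="$overage_m" 'BEGIN { printf "%.2f", o * 2.55 }')
echo "overage: ${overage_m}M spans, cost: \$${cost}"
```

Whether the host line or the span line dominates depends on which one grows faster, which is exactly the axis this post is trying to tease apart.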

I made a poll with some of the main things I've heard, but the answer can be "it's complicated" and maybe better as a comment: https://forcerank.it/invite/6b853c3bd8472ace

https://redd.it/13ky2iq
@r_devops
Do you actually need to "know" Linux to work in DevOps?

I've gotten plenty of DevOps interviews, and I even work as a DevOps engineer right now, and I would say I only use Linux about 10% of the time, when I'm writing pipelines for GitHub Actions. But that's literally just writing some CLI commands, nothing more. It's incredibly easy, and if I didn't know anything about Linux, I could learn what I need for my job within 2-3 days.

Yet on the internet, everybody and their grandmother says you need to know a ton of Linux to make it in DevOps: you need to read The Linux Programming Interface, you need to know everything inside out.

So question 1:
Are people just lying, or does it depend from job to job, or...? My experience is that you can get by with knowing very little.

Question 2:

I've done a bunch of random tasks using Linux (for Kodekloud, just to get more adept).

Just to list 6-7 random ones:

1) Installed & configured PostgreSQL databases, users, and their permissions

2) Created Linux users with non-interactive shells, and linux users with expiration dates

3) Managed incoming & outgoing connections for Apache & Nginx using IPTables

4) Used sed & awk to manipulate strings through bash scripts.

5) Limited access to webservers through securing URLs with PAM Authentication - requiring OS users to authenticate their SSL connection before connecting

6) Implemented passwordless SSH authentication for scripts

7) Configured Apache servers, controlling ports, changing headers, hiding version numbers, and redirecting URLs
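For anyone curious what item 4 looks like in practice, here is a tiny generic example (mine, not the poster's) of sed & awk over a passwd-style record:

```bash
# Generic sed/awk string manipulation over a passwd-style record.
line='deploy:x:1001:1001:Deploy Bot:/home/deploy:/bin/bash'
# awk: pull the user and shell fields (colon-delimited, fields 1 and 7)
echo "$line" | awk -F: '{ print $1, $7 }'
# sed: swap the login shell for a non-interactive one
echo "$line" | sed 's#/bin/bash$#/usr/sbin/nologin#'
```

The first command prints `deploy /bin/bash`; the second rewrites the trailing shell field to `/usr/sbin/nologin`.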

Just a bunch of random tasks like this, quite a few more, actually. I google my way through like any good engineer. If you asked me anything about the kernel, how Linux actually works, or the differences between the distros, I wouldn't have a clue. What is /etc vs /home vs all those other random folders? No idea.

So do I "know" Linux? How much do I need to know before I can say I "know" Linux? And why do all these subreddits say you need to "know" Linux when the only time I ever use Linux in my job is writing very basic CLI commands, e.g. for a pipeline in GitHub Actions, which is less than 10% of my job?

https://redd.it/13l1cxm
@r_devops