Reddit DevOps
270 subscribers
5 photos
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Repo for small scripts - what's the best practice?

We use Azure DevOps as our source control repository. The primary language we use is C#. Occasionally, some DevOps engineers write small automation scripts in Python or Bash, e.g., generating a list of stale branches or deleting a large number of files from an S3 bucket.


What is the best practice for storing such scripts? I am thinking of creating one dedicated repo just to store them, which would provide all the usual source control benefits.


A couple of downsides I can think of are as follows, but I don't think they are major issues.
1) This repo will grow over time and engineers will need to pull all of it before contributing their own scripts
2) Engineers will need to be more careful about not pushing any secrets in those scripts
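As a concrete example of the kind of script in question, a stale-branch lister might look like the sketch below; the 90-day threshold and the `git for-each-ref` format string are just one way to do it, not a prescribed approach:

```python
"""List local branches whose last commit is older than a cutoff (sketch)."""
import subprocess
from datetime import datetime, timedelta, timezone

def branch_timestamps(repo="."):
    """Ask git for one '<ref> <committer unix timestamp>' line per local branch."""
    out = subprocess.run(
        ["git", "-C", repo, "for-each-ref",
         "--format=%(refname) %(committerdate:unix)", "refs/heads/"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def stale_branches(lines, now, max_age_days=90):
    """Return branch names whose last commit predates now - max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for line in lines:
        ref, _, ts = line.rpartition(" ")
        last = datetime.fromtimestamp(int(ts), tz=timezone.utc)
        if last < cutoff:
            stale.append(ref.removeprefix("refs/heads/"))
    return stale

# Usage: print(stale_branches(branch_timestamps("."), datetime.now(timezone.utc)))
```

Splitting the git call from the filtering keeps the filtering logic trivially testable, which matters more than usual when scripts live in a shared grab-bag repo.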

https://redd.it/13jyvue
@r_devops
Is Backstage a good solution for the needs of our project?

I'm not sure if this is the right subreddit for this question, but here it goes:

Our clients usually demand a minimum viable product (MVP) or proof of concept before committing to a project.

Right now we build each MVP from scratch. That means that for each client proposal we set up dev, test, and prod environments, set up the right CI/CD workflows, create the necessary documentation, and so on.

A lot of the time the programming languages, frameworks, and tools are different, so it has been difficult to reuse basically anything.
On top of that, these MVPs are created by different teams inside the organization, making each MVP an isolated project that no one has access to besides the development team.
And since the developers' experience ranges from seniors to fresh graduates, the whole thing becomes a mess (or in other words, it doesn't have the consistency or quality that we are aiming for).

Our department has been given the task to create a solution that automates this process.

The requirements are:

1- With "minimum work" we need to be able to deliver a blank project (for example, a webpage that displays "hello world" after calling a backend API) plus tests.

2- This blank project must have all the documentation needed for the developers to start working efficiently and it has to be easily accessible.

3- It needs to have the whole CI/CD workflow working from the start. It would be a plus if it were customizable (i.e., could use different tools depending on the project's needs or the client's request).

4- Optional: have terraform code ready to be used.

Another member of the team suggested that we build our own solution, and I'm mostly against it (mainly because of time and money constraints, as well as the lack of seniors in our team).
I have been researching Backstage and it looks like a good solution for our needs, but if I'm being completely honest, I'm having a hard time understanding it.

I want to ask if someone who has used or knows Backstage can tell me whether it's the right tool.
Also, I'm open to any other suggestions since I'm a little bit lost with all the tools there are.

Sorry for the long post and thank you in advance!

https://redd.it/13k0h4n
@r_devops
Didn’t get hired because the interview was too good

Been studying my ass off, and I used GPT to generate interview questions and answers I might be asked so I could practice. Unfortunately, I practiced a bit too much, and they gave the offer to their second choice because my interview was perfect. Any advice on what I should do to avoid this outcome again?

https://redd.it/13k6pvf
@r_devops
Infrastructure As Code - Trying to setup an automation around a very messy tech stack

As the title states, our tech stack is unique and rough around the edges, and I want to see how I can make the best of it. We currently have:

1. Setting up requests in ServiceNow (for hardware: Kubernetes clusters)
2. Triggering pipelines (via Jenkins) to create namespaces and deploy Istio & Nginx
3. Requesting certificates (internal & third-party vendor cert requests) and uploading them
4. Deploying OpenTelemetry agents (ELK, Splunk, etc.)
5. Configuring Istio secrets and Config-Gateways


I know I can't leverage a single IaC tool (like Terraform or Ansible) to set all of these up, so I want to get different perspectives from the group and more ideas on the topic.
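Even with a stack this fragmented, the glue between the steps is mostly plain HTTP: ServiceNow has a REST API, and Jenkins jobs can be fired remotely via its standard `/buildWithParameters` endpoint. As one illustration, a sketch of triggering a parameterized Jenkins pipeline (step 2 above) from a script; the host, job name, parameters, and credentials are all placeholders:

```python
"""Sketch: fire a parameterized Jenkins build over its remote-trigger API."""
import base64
import urllib.parse
import urllib.request

def jenkins_trigger(base_url, job, params, user, api_token):
    """Build a POST against Jenkins's /buildWithParameters endpoint."""
    url = f"{base_url}/job/{urllib.parse.quote(job)}/buildWithParameters"
    # Jenkins accepts HTTP basic auth with a per-user API token
    token = base64.b64encode(f"{user}:{api_token}".encode()).decode()
    return urllib.request.Request(
        url,
        data=urllib.parse.urlencode(params).encode(),
        method="POST",
        headers={"Authorization": f"Basic {token}"},
    )

# Hypothetical job and parameters for the namespace-creation pipeline
req = jenkins_trigger(
    "https://jenkins.example.com", "create-namespace",
    {"NAMESPACE": "team-a", "INSTALL_ISTIO": "true"},
    "svc-account", "api-token-here",
)
# urllib.request.urlopen(req)  # uncomment to actually enqueue the build
```

A thin orchestration layer of calls like this (kicked off by a ServiceNow workflow) is one common way to stitch together tools that no single IaC product covers.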

https://redd.it/13k9wy7
@r_devops
Open-source IAM Access Visualizer

Hey folks!

Recently created an IAM access visualizer that displays access relationships between AWS identities and resources.
It’s part of an open source cloud security platform that we maintain.

Some potential use cases we wanted to address:

- Which IAM roles can become effective admins?
- Which IAM roles can read data in your sensitive S3 buckets?
- What's the blast radius of an EC2 instance compromise?
- What IAM privilege escalations exist in your environment?
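The blast-radius question in particular reduces to graph reachability over "can access" edges; a toy sketch of the idea, where the identities, edges, and names are all invented for illustration:

```python
"""Toy blast-radius computation: BFS over a directed access graph."""
from collections import deque

# Invented example graph: node -> nodes it can access
EDGES = {
    "ec2:web-server":   ["role:web-role"],   # instance profile attaches this role
    "role:web-role":    ["s3:logs", "s3:customer-data"],
    "s3:logs":          [],
    "s3:customer-data": [],
}

def blast_radius(start, edges):
    """Return every node reachable from `start` by following access edges (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(blast_radius("ec2:web-server", EDGES)))
# -> ['role:web-role', 's3:customer-data', 's3:logs']
```

A real visualizer layers policy evaluation (conditions, deny statements, permission boundaries) on top of this, but reachability is the core primitive.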


Would love your feedback on whether something like this would be helpful for your cloud IAM workflows!


- Click around the Sandbox Environment
- Check out our Loom Demo
- Check out the GitHub Repo

https://redd.it/13k8qao
@r_devops
Create ServiceNow requests via Ansible - is it possible?

I am currently working on updating our configuration management system and want to explore the possibility of creating ServiceNow requests via Ansible.

Are there APIs available from ServiceNow for us to automate request creation?


Cheers!!!
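To the question itself: yes, ServiceNow exposes a REST Table API (`POST /api/now/table/<table>`) that can create records, and on the Ansible side the built-in `uri` module or the `servicenow.itsm` collection can drive it. Below is a minimal sketch of the underlying HTTP call in Python; the instance name, table, credentials, and field values are placeholders for illustration:

```python
"""Sketch: create one record via ServiceNow's REST Table API."""
import base64
import json
import urllib.request

def servicenow_create(instance, table, fields, user, password):
    """Build an authenticated POST that creates one record in a ServiceNow table."""
    url = f"https://{instance}.service-now.com/api/now/table/{table}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(fields).encode(),
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Basic {token}",
        },
    )

# Hypothetical example: open a request record from an automation run
req = servicenow_create(
    "dev12345", "sc_request",
    {"short_description": "Provision namespace team-a"},
    "svc-ansible", "s3cret",
)
# urllib.request.urlopen(req)  # uncomment to actually send it
```

In a playbook, the same call collapses to a single `uri` task (or a module from `servicenow.itsm`), with the credentials kept in a vault rather than inline.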

https://redd.it/13kd8xk
@r_devops
Vagrant alternatives?

I really like Vagrant, but it has a severe flaw: it's painfully slow on Windows, which makes it basically unusable for me. Is there a good alternative, or a way to make it faster? I know there's Docker, but since it isn't free anymore I'd rather not use it.

https://redd.it/13kckev
@r_devops
Terraform question. Do I need to worry about state management for a small lab?

I am currently deploying, through GitHub Actions, a single VM which gets created by Terraform code.

I don't fully understand the problem of state management, at least not for my own small lab environment.

- Should I use Terraform Cloud for state management?

- Can I just store state files in my GitHub repo (not ideal, I know, but it's a small lab)?

- What if I just don't do state management at all? (The state gets lost on each run if I don't save it somewhere.)
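For a small lab the stakes are low, but two things are worth knowing: Terraform state stores resource attributes (sometimes secrets) in plain text, so committing it to a GitHub repo is risky; and with no persisted state at all, Terraform forgets the VM exists and will try to create it again on every run. The usual lightweight fix is a remote backend; a minimal sketch using S3, where the bucket name, key, and region are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket = "my-lab-tf-state"      # placeholder: an existing bucket you own
    key    = "lab/terraform.tfstate"
    region = "eu-west-1"
    # dynamodb_table = "tf-lock"    # optional state locking; overkill for one user
  }
}
```

Terraform Cloud's free tier works just as well if you'd rather not manage a bucket yourself.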

https://redd.it/13jymsk
@r_devops
How did you handle burnout?

I'd like to read about experiences with burnout. I had two weeks where I couldn't focus, and I feel that my performance is lower than it was one or two months ago. I think that this is temporary, so I'm not worrying too much about it. However, like most developers before experiencing burnout, I was working more hours than usual due to anxiety about growth. Now, I'm trying to track my work hours to be more efficient. I prefer to work for 5 or 6 hours without social media or anything that can distract me. So, my questions are:
- How did you feel with burnout?
- How did you manage the situation?
- What was your strategy for getting back to performing well?

https://redd.it/13kiqcm
@r_devops
Have you ever reused your company's code outside the company?

DevOps work produces code every day that is not part of the company's product, for example a script to install Kubernetes or some automation on AWS. Have you ever used this code in a personal project or at another company?

https://redd.it/13k1ne8
@r_devops
Introducing Digger v4.0 - An Open Source GitOps tool for Terraform that runs within your existing CI/CD tool. (+ A brief history of our journey so far)

We have been building [Digger](https://github.com/diggerhq/digger) for over 2 years with multiple iterations in between. Today we are launching Digger v4.0 - An Open Source GitOps tool for Terraform.

A brief history of our journey:

🚜 [Digger Classic](https://app.digger.dev) (v1.0)

Initial focus was to build a “heroku experience in your AWS”.

We wanted to handle everything from infrastructure, CI, monitoring, and logs to domain support. There were several design issues in this version:

The split from services to environments confused users a lot

Several types of deployments (infrastructure, software) confused customers; they didn't know when an infrastructure deployment was needed versus a software deployment

The concept of an "environment target" for the whole infrastructure had its limitations, especially for customisation of existing infrastructure.

This led to the birth of AXE.

🪓 [AXE](https://dashboard.digger.dev) (v2.0)

With the AXE project we wanted to improve some UX points by focusing more on "apps", the individual pieces that a developer would want to deploy.

The main problem was that the ability to capture a whole environment was missing in this model; it was something that was appreciated in Classic (albeit confusing).

While infrastructure generation was more flexible in this model, there were still pieces which didn't fit, such as the creation of VPCs and other common cross-app resources. This could have been solved with more thought and a notion of app connectivity.

The biggest problem was reliability. Since we were taking on the responsibility of creating infrastructure and building and deploying successfully, our success rate for users was not high. This affected our ability to attract more users and grow the product.

This subsequently led to the birth of v3.0, Trowel.

🧑‍🌾 [Trowel](https://dashboard.digger.dev/create) (v3.0)

In this version we limited our scope further to generating and provisioning infrastructure-as-code. The idea was to introduce a “build step” for Terraform - the user describes the infrastructure they want in a high-level config file, that is then compiled into Terraform. Or perhaps a “framework” to abstract away the implementation details, similar to Ruby on Rails.

We no longer touched application deployment, meaning that we could focus on the core proposition: infrastructure generation and customizability. This, however, did not seem to interest the end users we were speaking to. The challenging part was not so much writing the Terraform code but rather making sure it's provisioned correctly. The framework idea still looks promising and we haven't fully explored it yet; but even with a perfect framework in place that produces Terraform, you'd still need something to take the output and make sure the changes are reflected in the target cloud account. This was the one missing piece in the toolchain, so we decided to "zoom into" it further.

🧑‍🌾 [Digger](https://digger.dev) (v4.0)

Digger is an open-source alternative to Terraform Cloud. It makes it easy to run `terraform plan` and `apply` in the CI/CD platform you already have, such as GitHub Actions.

A class of CI/CD products for Terraform exists (Spacelift, Terraform Cloud, Atlantis), but they are more like separate full-stack CI systems. We think that having two CI systems doesn't make sense: the infrastructure for asynchronous jobs, logs, etc. can and should be reused. Stretching the "assembly language" parallel, this is a bit like the CPU for a yet-to-be-created "cloud PC".

So it boils down to making it possible to run Terraform in existing CI systems. This is what Digger does.

Some of the features include:

* Any cloud: AWS, GCP, Azure
* Any CI: GitHub Actions, GitLab, Azure DevOps
* PR-level Locks
* Plan / apply preview in comments
* Plan Persistence
* Workspaces support
* Terragrunt support
* PRO (Beta): Open Policy Agent & Conftest
* PRO (Beta): Drift detection (via Driftctl)
* PRO (Beta): Cost Estimates (via
How to renegotiate salary as a DevOps engineer?

I started my first IT job last June as an observability/devops/systems admin (our team is kinda weird). I only have an associate's in network security and some side projects that actually correlated really well with the job tasks.

I quickly became the SME for most of our tools on the team; within 6 months I was being urged to apply for the engineer role on the same team. I failed the interview the first time, but I interviewed again recently and just got the offer, took it, and started the role as of today.

I started at the company at a simple desk job, and my company has this 15% rule (which I'm pretty sure a lot of big corporate companies have) where you can't get more than a 15% compensation increase per promotion. Everyone I've talked to, no matter how big the move and no matter how far up the chain they were, has told me they weren't able to negotiate any more money, but they value the title on their resume. I took it for the same reason without even attempting to negotiate.

I got the 15% increase but also lost a 10% late-shift premium and any chance at overtime by switching to salaried. So I pretty much got no extra salary, or maybe even a slight pay cut. But I love the job, and I've wanted the official recognition for what I do for a long time.

This has led to me having a comically low salary as an observability engineer (I would say about half as much as I should be making, and 20% less than what they hire brand-new Admin I's at).

I am getting by fine, and I've been underpaid for more than a year so I'm kinda used to it, but now that I'm an engineer I feel like I should be able to get my salary right-sized.


I just don't know how to go about getting what I want. The strongest argument I can think of is that my salary is comically low compared to admins and engineers in the exact same team, role, and company as me. I don't know if that is something I should mention in negotiation, because it doesn't market me; it just compares me to my team. I feel really weird boasting about myself, and I don't have much experience, certs, or education to back myself up, just my work. If I have to negotiate with the business side and they don't really understand or know what I do, I'll definitely flop.

I need some advice on how to go about renegotiating my salary.

Thank you,

https://redd.it/13itlyf
@r_devops
Looking for a DevOps engineer with 4-6 yrs exp in Bangalore, India

My company recently posted a job opening for a DevOps engineer with 4-6 years of experience. Please DM me if you need further details.

https://redd.it/13kodx9
@r_devops
Analyzing AWS EC2 Cloud Security Issues with Selefra GPT

### Introduction
In today's digital landscape, cloud security is a paramount concern for organizations leveraging cloud computing services. With the increasing complexity of cloud environments, it becomes crucial to have effective tools and strategies in place to identify and address potential security vulnerabilities. In this article, we will explore how Selefra GPT, an advanced policy-as-code tool, can be utilized to analyze and mitigate AWS EC2 cloud security issues.
1. **Understanding Selefra GPT:**
Selefra GPT is an open-source policy-as-code software that combines the power of machine learning and infrastructure analysis. It leverages the capabilities of GPT models to provide comprehensive analytics for multi-cloud and SaaS environments, including AWS EC2. By utilizing Selefra GPT, organizations can gain valuable insights into their cloud infrastructure's security posture and make informed decisions to enhance their overall security.
2. **Identifying AWS EC2 Security Risks:**
Selefra GPT enables security teams to analyze AWS EC2 instances and identify potential security risks. It utilizes its policy-as-code approach to define policies using SQL and YAML syntax, making it easier for security practitioners to express complex security rules. With Selefra GPT, security teams can perform comprehensive security assessments, including checking for open ports, insecure configurations, outdated software versions, and more.
3. **Customizing Security Policies:**
One of the key advantages of Selefra GPT is its flexibility in customizing security policies. Organizations can tailor their security policies according to their specific requirements and compliance standards. Whether it's enforcing encryption protocols, implementing access controls, or monitoring resource configurations, Selefra GPT allows security teams to define and manage policies that align with their unique security objectives.
4. **Continuous Security Monitoring:**
AWS EC2 environments are dynamic, with instances being provisioned, modified, and terminated frequently. Selefra GPT enables continuous security monitoring by regularly analyzing the AWS EC2 environment and detecting any changes or deviations from defined security policies. This proactive approach ensures that security issues are promptly identified and addressed, reducing the window of vulnerability.
5. **Remediation and Compliance:**
Once security issues are identified, Selefra GPT provides actionable insights and recommendations to remediate the vulnerabilities. Security teams can prioritize their efforts based on the severity of the issues and follow the recommended steps to mitigate the risks. Furthermore, Selefra GPT helps organizations maintain compliance with industry standards and regulations by continuously evaluating the AWS EC2 environment against the defined security policies.
### Install
First, installing Selefra is very simple. You just need to execute the following commands:
```bash
brew tap selefra/tap
brew install selefra/tap/selefra
mkdir selefra-demo && cd selefra-demo && selefra init
```
### Choose provider
Then, you need to choose the provider you need in the shell, such as AWS:
```bash
[Use arrows to move, Space to select, and Enter to complete the selection]
[x] AWS  # We choose AWS installation
[ ] azure
[ ] GCP
[ ] k8s
```
### Configuration
**Configure AWS:**
We have written a detailed configuration [document](https://www.selefra.io/docs/providers-connector/aws) in advance; you can configure your AWS information through it.
**Configure Selefra:**
After initialization, you will get a `selefra.yaml` file. Next, you need to configure this file to use the GPT functionality:
```yaml
selefra:
  name: selefra-demo
  cli_version: latest
  openai_api_key: <Your OpenAI API Key>
  openai_mode: gpt-3.5
  openai_limit: 10

providers:
  - name: aws
    source: aws
    version: latest
```
### Running
You can use environment variables to store the `openai_api_key`, `openai_mode`, and `openai_limit` parameters. Then you can start the GPT analysis by executing the following command:
```bash
selefra gpt "Please help me analyze the vulnerabilities in AWS S3?"
```
Finally, you will get results similar to the animated image below:
![Untitled](https://s3-us-west-2.amazonaws.com/secure.notion-static.com/68e1f6f3-88a4-4744-94c5-9755b84a8205/Untitled.gif)
### Conclusion
Securing AWS EC2 instances is critical for organizations to protect their sensitive data and maintain the integrity of their cloud infrastructure. Selefra GPT empowers security teams with advanced analytics and policy-as-code capabilities to analyze, identify, and remediate security issues in AWS EC2 environments. By leveraging the power of machine learning and policy automation, Selefra GPT enables organizations to enhance their cloud security posture and build a robust defense against potential threats.

https://redd.it/13kos1v
@r_devops
What skills do I need to acquire to be a DevOps engineer?

Hi, I was hoping to get a list of the tools and tech that I need to learn to become a DevOps engineer. I have learned Docker so far (Docker networking, making containers, adding volumes).

Also, if you could tell me how I can learn things that require a credit card, like AWS, for free, that would help, as I am a bit short on money and don't have a credit card.

https://redd.it/13kph6k
@r_devops