Didn’t get hired because interview was too good
Been studying my ass off, and I used GPT to generate interview questions and answers I might be asked during the interview to practice. Unfortunately, I practiced a bit too much, and they gave the offer to their second choice because my interview was perfect. Any advice on what I should do to avoid this outcome again?
https://redd.it/13k6pvf
@r_devops
Looks like GitHub is responding to the chronic downtime they have been having
https://github.blog/2023-05-16-addressing-githubs-recent-availability-issues/
https://redd.it/13k7v8i
@r_devops
Advanced End-to-End DevOps Pipeline for a Java Web Application: A Step-by-Step Guide
Hi Everyone,
I've created this project, which simulates a real-world CI/CD pipeline for deploying a Java web application on a Kubernetes cluster on AWS.
https://mandeepsingh10.hashnode.dev/advanced-end-to-end-cicd-pipeline-for-a-java-web-application-a-step-by-step-guide#heading-references
https://redd.it/13k6nsn
@r_devops
Infrastructure As Code - Trying to setup an automation around a very messy tech stack
As the title states, our tech stack is unique and rough around the edges. I want to see how I can make the best of it. We currently have:
1. Setting up requests in ServiceNow (for hardware - Kubernetes clusters)
2. Triggering pipelines (via Jenkins) for creating namespaces and deploying Istio & Nginx
3. Requesting certificates (internal & third-party vendor cert requests) & uploading them
4. Deploying OpenTelemetry agents (ELK, Splunk, etc.)
5. Configuring Istio secrets and Config-Gateways
I know I can't leverage a single IaC tool (like Terraform or Ansible) to set these up. I want to get different perspectives here in the group to get more ideas on the topic.
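Even without one IaC tool spanning all five steps, most of them can be driven over an API and glued together by whatever orchestrator you pick. For example, the Jenkins pipelines in step 2 can be triggered remotely; a minimal sketch, where the Jenkins URL, job name, parameters, and credentials are all illustrative placeholders:

```bash
# Trigger a parameterised Jenkins job via its remote-build API.
# JENKINS_URL, the job name, and the API token are placeholders.
curl -fsS -X POST \
  --user "automation-user:${JENKINS_API_TOKEN}" \
  "${JENKINS_URL}/job/create-namespace/buildWithParameters" \
  --data-urlencode "NAMESPACE=team-a" \
  --data-urlencode "CLUSTER=prod-eks-1"
```

ServiceNow exposes REST APIs as well, so a wrapper layer (Ansible, scripts, or a small service) can chain the request, the pipeline trigger, and the certificate steps even if each uses a different tool underneath.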
https://redd.it/13k9wy7
@r_devops
Open-source IAM Access Visualizer
Hey folks!
Recently created an IAM access visualizer that displays access relationships between AWS identities and resources.
It’s part of an open source cloud security platform that we maintain.
Some potential use cases we wanted to address:
Which IAM roles can become effective admin?
Which IAM roles can read data on your sensitive S3 bucket?
What's the blast radius of an EC2 instance compromise?
What IAM privilege escalations exist in your environment?
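For comparison, a single edge of that access graph can be spot-checked with the AWS CLI's policy simulator; the role and bucket ARNs below are illustrative placeholders:

```bash
# Simulate whether a given role could read objects in a sensitive bucket.
# Substitute your own account ID, role name, and bucket.
aws iam simulate-principal-policy \
  --policy-source-arn "arn:aws:iam::123456789012:role/app-role" \
  --action-names "s3:GetObject" \
  --resource-arns "arn:aws:s3:::sensitive-bucket/*"
```

A visualizer like this presumably answers the same question across every identity at once, rather than one principal and one action at a time.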
Would love your feedback on if something like this is helpful for your cloud IAM workflows!
Click around the Sandbox Environment
Check out our Loom Demo
Check out the Github Repo
https://redd.it/13k8qao
@r_devops
Create Service Now requests via Ansible - Possibility
I am currently working on updating our configuration management system and want to explore the possibility of creating ServiceNow requests via Ansible.
Are there APIs available from ServiceNow for us to automate request creation?
Cheers!!!
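For what it's worth, ServiceNow exposes a REST Table API, so even without a dedicated collection you can create records from Ansible with the built-in `uri` module. A minimal sketch, where the instance hostname, table, credentials, and field values are all illustrative:

```yaml
# Sketch: create a ServiceNow record via the REST Table API.
# Instance hostname, credentials, and field values are placeholders.
- name: Create a ServiceNow request
  hosts: localhost
  gather_facts: false
  tasks:
    - name: POST to the Table API
      ansible.builtin.uri:
        url: "https://your-instance.service-now.com/api/now/table/sc_request"
        method: POST
        user: "{{ snow_user }}"
        password: "{{ snow_password }}"
        force_basic_auth: true
        body_format: json
        body:
          short_description: "Provision new Kubernetes namespace"
        status_code: 201
```

There is also a certified `servicenow.itsm` Ansible collection with purpose-built modules for incidents and change requests, which may be a better fit than raw REST calls depending on what kind of requests you need to create.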
https://redd.it/13kd8xk
@r_devops
Vagrant alternatives?
I really like Vagrant, but it has a severe flaw: it's painfully slow on Windows, which makes it basically unusable for me. Is there a good alternative, or a way to make it faster? I know there's Docker, but since Docker Desktop isn't free anymore I'd rather not use it.
https://redd.it/13kckev
@r_devops
Terraform question. Do I need to worry about state management for a small Lab?
I am currently deploying, through GitHub Actions, a single VM which gets created by Terraform code.
I don't fully understand the problem of state management, at least not for my own small lab environment.
- Should I use Terraform Cloud for state management?
- Can I just store state in my GitHub repo (not ideal, I know, but for a small lab)?
- What if I just don't do state management? (State gets lost on each run if I don't save it somewhere.)
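For context: without saved state, a fresh CI runner knows nothing about the VM it created last time, so Terraform will try to create it again (or fail on name collisions) on every run. Any remote backend solves this for a lab. A sketch of a GitHub Actions step pointing Terraform at an S3 backend - the bucket, key, and region are illustrative, and it assumes an empty `backend "s3" {}` block exists in your Terraform config:

```yaml
# Illustrative workflow step: initialise Terraform against a remote
# S3 backend so state survives between CI runs.
- name: Terraform init (remote state)
  run: |
    terraform init -input=false \
      -backend-config="bucket=my-lab-tf-state" \
      -backend-config="key=lab/terraform.tfstate" \
      -backend-config="region=eu-west-1"
```

Terraform Cloud's free tier works the same way with less setup; committing state to the repo is discouraged mainly because state can contain secrets in plain text.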
https://redd.it/13jymsk
@r_devops
How did you handle burnout?
I'd like to read about experiences with burnout. I had two weeks where I couldn't focus, and I feel that my performance is lower than it was one or two months ago. I think that this is temporary, so I'm not worrying too much about it. However, like most developers before experiencing burnout, I was working more hours than usual due to anxiety about growth. Now, I'm trying to track my work hours to be more efficient. I prefer to work for 5 or 6 hours without social media or anything that can distract me. So, my questions are:
- How did you feel with burnout?
- How did you manage this situation?
- What was your strategy for getting back to performing well?
https://redd.it/13kiqcm
@r_devops
Have you ever reused your company's code outside the company?
DevOps work produces code daily that is not part of the company's product - for example, a script to install Kubernetes or some automation on AWS. Have you ever used this code in a personal project or at another company?
https://redd.it/13k1ne8
@r_devops
Introducing Digger v4.0 - An Open Source GitOps tool for Terraform that runs within your existing CI/CD tool. (+ A brief history of our journey so far)
We have been building [Digger](https://github.com/diggerhq/digger) for over 2 years with multiple iterations in between. Today we are launching Digger v4.0 - An Open Source GitOps tool for Terraform.
A brief history of our journey:
🚜 [Digger Classic](https://app.digger.dev) (v1.0)
Initial focus was to build a “heroku experience in your AWS”.
We wanted to handle everything from infrastructure, CI, monitoring, logs, domains support etc. There were several design issues in this version:
The split from services to environments confused users a lot
Several types of deployments (infrastructure, software) confused customers, they didn’t know when infrastructure is needed versus a software deployment
The concept of “environment target” for the whole infrastructure had its limitations especially for customisation of existing infrastructure.
This led to the birth of Axe,
🪓 [AXE](https://dashboard.digger.dev) (v2.0)
With the AXE project we wanted to improve some UX points by focusing more on “apps”, the individual pieces that a developer would want to deploy.
The ability to capture a whole environment was missing in this model; it was something that was appreciated in Classic (albeit confusing).
While infrastructure generation was more flexible in this model, there were still pieces which didn’t fit, such as creation of VPCs and other common cross-app resources. This could have been solved with more thought and a notion of app connectivity.
The biggest problem was reliability. Since we were taking on the responsibility of creating infrastructure and building and deploying successfully, our success rate for users was not high. This affected our ability to attract more users and grow the product.
This subsequently led to the birth of v3.0, Trowel,
🧑🌾 [Trowel](https://dashboard.digger.dev/create) (v3.0)
In this version we limited our scope further to generating and provisioning infrastructure-as-code. The idea was to introduce a “build step” for Terraform - the user describes the infrastructure they want in a high-level config file, that is then compiled into Terraform. Or perhaps a “framework” to abstract away the implementation details, similar to Ruby on Rails.
We no longer touched application deployment, meaning that we could focus on the core proposition: infrastructure generation and customizability. This, however, did not seem to interest the end users we were speaking to. The challenging part was not so much writing the Terraform code but rather making sure it’s provisioned correctly. The framework idea still looks promising and we haven't fully explored it yet; but even with a perfect framework in place that produces Terraform, you'd still need something to take the output and make sure the changes are reflected in the target cloud account. This was the one missing piece in the toolchain we decided to further “zoom into”.
🧑🌾 [Digger](https://digger.dev) (v4.0)
Digger is an open-source alternative to Terraform Cloud. It makes it easy to run terraform plan and apply in the CI / CD platform you already have, such as Github Actions.
A class of CI/CD products for Terraform exists (Spacelift, Terraform Cloud, Atlantis) but they are more like separate full-stack CI systems. We think that having 2 CI systems for that doesn't make sense. The infrastructure of asynchronous jobs, logs etc can and should be reused. Stretching the "assembly language" parallel, this is a bit like the CPU for a yet-to-be-created "cloud PC".
So it boils down to making it possible to run Terraform in existing CI systems. This is what Digger does.
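For a sense of what “running Terraform in your existing CI” means in practice, here is a generic plan-on-pull-request workflow for GitHub Actions. This is a plain sketch of the underlying loop, not Digger's own configuration - see the Digger repo for its actual setup:

```yaml
# Generic plan-on-PR workflow; not Digger-specific.
name: terraform-plan
on: pull_request
jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init -input=false
      - run: terraform plan -input=false -no-color
```

Tools in this space then layer locking, PR comment previews, and policy checks on top of that basic loop, reusing the CI system's own runners and logs.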
Some of the features include:
* Any cloud: AWS, GCP, Azure
* Any CI: GitHub Actions, Gitlab, Azure DevOps
* PR-level Locks
* Plan / apply preview in comments
* Plan Persistence
* Workspaces support
* Terragrunt support
* PRO (Beta): Open Policy Agent & Conftest
* PRO (Beta): Drift detection (via Driftctl)
* PRO (Beta): Cost Estimates (via Infracost)
Do give it a try and let us know what you think. [Here](https://github.com/diggerhq/digger/blob/main/CONTRIBUTING.md) is a link to the contribution guide, if you are interested.
https://redd.it/13jw53s
@r_devops
How to renegotiate salary as a DevOps engineer?
I started my first IT job last June, as an observability/devops/systems admin (our team is kinda weird). I only have an associate's in network security and some side projects that actually correlated really well with the job tasks.
I quickly became the SME with most of our tools on the team, within 6 months I was being urged to apply for the engineer role on the same team. I failed the interview the first time but I interviewed again recently and just got the offer, took it and started the role as of today.
I started at the company at a simple desk job and my company has this 15% rule (that I'm pretty sure a lot of big corporate companies have) where you can't get more than 15% compensation increase per promotion. Everyone I've talked to no matter how big the move and no matter how far up the chain they were, they've told me they weren't able to negotiate any more money, but value the title on their resume. I took it for this same reason without even attempting to negotiate.
I got the 15% increase but also lost a 10% late shift premium and any chance at overtime switching to salaried. So pretty much got no extra salary, or maybe even a slight pay cut. But I love the job, and I've wanted the official recognition for what I do for a long time.
This has led to me having a comically low salary as an observability engineer (I would say about half as much as I should be making, and 20% less than they hire brand new Admin I's at.)
I am getting by fine and I've been underpaid for more than a year so I'm kinda used to it but now that I'm an engineer I feel like I should be able to get my salary rightsized?
I just don't know how to go about getting what I want. The best thing I can think of is how my salary is comically low compared to admins and engineers in the same exact team, role, and company as me. I don't know if that is something I should mention in negotiation or not, because it's not something that markets me, it just compares me to my team. I feel really weird boasting about myself, and I don't have much experience, certs or education to back myself up, just my work. If I have to negotiate with the business side and they don't really understand or know what I do, I'll definitely flop.
I need some advice on how to go about renegotiating my salary.
Thank you,
https://redd.it/13itlyf
@r_devops
Looking for DevOps engineer with 4-6 yrs exp in Bangalore, India
My company recently posted a job opening for a DevOps engineer with 4-6 years of experience. Please DM me if you need further details.
https://redd.it/13kodx9
@r_devops
Analyzing AWS EC2 Cloud Security Issues with Selefra GPT
### Introduction
In today's digital landscape, cloud security is a paramount concern for organizations leveraging cloud computing services. With the increasing complexity of cloud environments, it becomes crucial to have effective tools and strategies in place to identify and address potential security vulnerabilities. In this article, we will explore how Selefra GPT, an advanced policy-as-code tool, can be utilized to analyze and mitigate AWS EC2 cloud security issues.
1. **Understanding Selefra GPT:**
Selefra GPT is an open-source policy-as-code software that combines the power of machine learning and infrastructure analysis. It leverages the capabilities of GPT models to provide comprehensive analytics for multi-cloud and SaaS environments, including AWS EC2. By utilizing Selefra GPT, organizations can gain valuable insights into their cloud infrastructure's security posture and make informed decisions to enhance their overall security.
2. **Identifying AWS EC2 Security Risks:**
Selefra GPT enables security teams to analyze AWS EC2 instances and identify potential security risks. It utilizes its policy-as-code approach to define policies using SQL and YAML syntax, making it easier for security practitioners to express complex security rules. With Selefra GPT, security teams can perform comprehensive security assessments, including checking for open ports, insecure configurations, outdated software versions, and more.
3. **Customizing Security Policies:**
One of the key advantages of Selefra GPT is its flexibility in customizing security policies. Organizations can tailor their security policies according to their specific requirements and compliance standards. Whether it's enforcing encryption protocols, implementing access controls, or monitoring resource configurations, Selefra GPT allows security teams to define and manage policies that align with their unique security objectives.
4. **Continuous Security Monitoring:**
AWS EC2 environments are dynamic, with instances being provisioned, modified, and terminated frequently. Selefra GPT enables continuous security monitoring by regularly analyzing the AWS EC2 environment and detecting any changes or deviations from defined security policies. This proactive approach ensures that security issues are promptly identified and addressed, reducing the window of vulnerability.
5. **Remediation and Compliance:**
Once security issues are identified, Selefra GPT provides actionable insights and recommendations to remediate the vulnerabilities. Security teams can prioritize their efforts based on the severity of the issues and follow the recommended steps to mitigate the risks. Furthermore, Selefra GPT helps organizations maintain compliance with industry standards and regulations by continuously evaluating the AWS EC2 environment against the defined security policies.
### Install
First, installing Selefra is very simple. You just need to execute the following commands:
```bash
brew tap selefra/tap
brew install selefra/tap/selefra
mkdir selefra-demo && cd selefra-demo && selefra init
```
### Choose provider
Then, you need to choose the provider you need in the shell, such as AWS:
```bash
[Use arrows to move, Space to select, and Enter to complete the selection]
[✔] AWS # We choose AWS installation
[ ] azure
[ ] GCP
[ ] k8s
```
### Configuration
**Configure AWS:**
We have written a detailed configuration [document](https://www.selefra.io/docs/providers-connector/aws) in advance; you can configure your AWS credentials by following it.
**Configure Selefra:**
After initialization, you will get a selefra.yaml file. Next, you need to configure this file to use the GPT functionality:
```yaml
selefra:
name: selefra-demo
cli_version: latest
openai_api_key: <Your Openai Api Key>
openai_mode: gpt-3.5
openai_limit: 10
providers:
\- name: aws
source:
\### **Introduction:**
In today's digital landscape, cloud security is a paramount concern for organizations leveraging cloud computing services. With the increasing complexity of cloud environments, it becomes crucial to have effective tools and strategies in place to identify and address potential security vulnerabilities. In this article, we will explore how Selefra GPT, an advanced policy-as-code tool, can be utilized to analyze and mitigate AWS EC2 cloud security issues.
1. **Understanding Selefra GPT:**
Selefra GPT is an open-source policy-as-code software that combines the power of machine learning and infrastructure analysis. It leverages the capabilities of GPT models to provide comprehensive analytics for multi-cloud and SaaS environments, including AWS EC2. By utilizing Selefra GPT, organizations can gain valuable insights into their cloud infrastructure's security posture and make informed decisions to enhance their overall security.
1. **Identifying AWS EC2 Security Risks:**
Selefra GPT enables security teams to analyze AWS EC2 instances and identify potential security risks. It utilizes its policy-as-code approach to define policies using SQL and YAML syntax, making it easier for security practitioners to express complex security rules. With Selefra GPT, security teams can perform comprehensive security assessments, including checking for open ports, insecure configurations, outdated software versions, and more.
1. **Customizing Security Policies:**
One of the key advantages of Selefra GPT is its flexibility in customizing security policies. Organizations can tailor their security policies according to their specific requirements and compliance standards. Whether it's enforcing encryption protocols, implementing access controls, or monitoring resource configurations, Selefra GPT allows security teams to define and manage policies that align with their unique security objectives.
1. **Continuous Security Monitoring:**
AWS EC2 environments are dynamic, with instances being provisioned, modified, and terminated frequently. Selefra GPT enables continuous security monitoring by regularly analyzing the AWS EC2 environment and detecting any changes or deviations from defined security policies. This proactive approach ensures that security issues are promptly identified and addressed, reducing the window of vulnerability.
1. **Remediation and Compliance:**
Once security issues are identified, Selefra GPT provides actionable insights and recommendations to remediate the vulnerabilities. Security teams can prioritize their efforts based on the severity of the issues and follow the recommended steps to mitigate the risks. Furthermore, Selefra GPT helps organizations maintain compliance with industry standards and regulations by continuously evaluating the AWS EC2 environment against the defined security policies.
\### Install
First, installing Selefra is very simple. You just need to execute the following command:
```bash
brew tap selefra/tap
brew install selefra/tap/selefra
mkdir selefra-demo && cd selefra-demo && selefra init
```
### Choose provider
Then, you need to choose the provider you need in the shell, such as AWS:
```bash
[Use arrows to move, Space to select, and Enter to complete the selection]
[✔] AWS # We choose AWS installation
[ ] azure
[ ] GCP
[ ] k8s
```
### Configuration
**Configure AWS:**
We have written a detailed configuration [document](https://www.selefra.io/docs/providers-connector/aws) in advance; you can set up your AWS credentials by following it.
**Configure Selefra:**
After initialization, you will get a selefra.yaml file. Next, you need to configure this file to use the GPT functionality:
```yaml
selefra:
  name: selefra-demo
  cli_version: latest
  openai_api_key: <Your OpenAI API Key>
  openai_mode: gpt-3.5
  openai_limit: 10
providers:
  - name: aws
    source: aws
    version: latest
```
### Running
You can use environment variables to store the openai_api_key, openai_mode, and openai_limit parameters. Then, you can start the GPT analysis by executing the following command:
```bash
selefra gpt "Please help me analyze the vulnerabilities in AWS S3?"
```
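If you prefer the environment-variable route mentioned above, a minimal sketch follows. Note that the variable names here are assumptions for illustration; check Selefra's documentation for the names the CLI actually reads.

```shell
# Hypothetical variable names -- verify them against the Selefra docs.
export SELEFRA_OPENAI_API_KEY="sk-your-key-here"
export SELEFRA_OPENAI_MODE="gpt-3.5"
export SELEFRA_OPENAI_LIMIT="10"

# With these exported, the openai_* keys can be dropped from selefra.yaml
# and the analysis started as before:
#   selefra gpt "Please help me analyze the vulnerabilities in AWS S3?"
```

Keeping the API key out of selefra.yaml also keeps it out of version control.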
Finally, you will get results similar to the animated demo in the original post.
### Conclusion
Securing AWS EC2 instances is critical for organizations to protect their sensitive data and maintain the integrity of their cloud infrastructure. Selefra GPT empowers security teams with advanced analytics and policy-as-code capabilities to analyze, identify, and remediate security issues in AWS EC2 environments. By leveraging the power of machine learning and policy automation, Selefra GPT enables organizations to enhance their cloud security posture and build a robust defense against potential threats.
https://redd.it/13kos1v
@r_devops
A detailed article on Datadog's $5M outage
There’s lots of food for thought in this outage...!
https://newsletter.pragmaticengineer.com/p/inside-the-datadog-outage
https://redd.it/13kq12p
@r_devops
Pragmaticengineer
Inside DataDog’s $5M Outage (Real-World Engineering Challenges #8)
The observability provider was down for more than a day in March. What went wrong, how did the engineering team respond, and what can businesses learn from the incident? Exclusive.
What skills do I need to acquire to be devops engineer?
Hi, I was hoping to get a list of tools and tech that I need to learn to become a DevOps engineer. I have learned Docker so far (Docker networking, building containers, adding volumes).
Also, if you could tell me how to learn things that require a credit card, like AWS, for free, that would help, as I am a bit short on money and don't have a credit card.
https://redd.it/13kph6k
@r_devops
Reddit
r/devops on Reddit: What skills do I need to acquire to be devops engineer?
Posted by u/fromMultiverse - No votes and 1 comment
Pod Disruption Budgets (PDB) in Kubernetes
PDB - What they are, why they’re important, and how to use them effectively.
https://medium.com/geekculture/kubernetes-pod-disruption-budgets-pdb-b74f3dade6c1
https://redd.it/13ko55u
@r_devops
Medium
Kubernetes | Pod Disruption Budgets (PDB)
How They Affect Scheduling and Availability During Node Maintenance or Failures
Automating the pain away: Solving common issues to improve team workflow
https://www.offerzen.com/blog/automating-to-improve-team-workflow
Thought this was interesting as they dig into some tools they use to better automate local dev workflows.
I hadn't heard of Plop or zx before. Has anyone used them/alternatives?
https://redd.it/13ktqqy
@r_devops
The OfferZen Community Blog
Automating the pain away: Solving common issues to improve team workflow
Here is how we at Stitch took the top 10 common issues from new joiners and automated their detection and solutions - saving us time and money.
Welcome to our Enterprise Developer Survey!
We have a new, short survey in order to understand the technologies and tools that Enterprise Developers use. Are you a software developer, a database administrator, a data scientist, an engineer, an architect or involved in DevOps and SRE? Help us and make an impact on the developer ecosystem. Start here
https://redd.it/13ku7dx
@r_devops
Reddit
r/devops on Reddit: Welcome to our Enterprise Developer Survey!
Posted by u/vjmde - No votes and no comments