Need advice regarding my next phase: transitioning from Software Engineering to DevOps
I have a Pearson BTEC Level 5 Higher National Diploma in Computing (Software Engineering) and 3 years of experience as a Software Engineer (a very intensive position at my previous company, where I worked with React, Node.js, and GraphQL, and even had to manage some AWS and Firebase stuff too). I'm currently in a DevOps Engineer position, I'm willing to pursue the DevOps path, and I'd like to progress my academic qualifications as well. What are your suggestions for my next step in qualifications? (AWS certifications or a top-up degree? My current position deals with AWS and I have some knowledge of it, and my current employer doesn't demand any more certifications or academic qualifications.)
https://redd.it/1cfhen4
@r_devops
Reddit
From the devops community on Reddit
Explore this post and more from the devops community
How to prepare for DevSecOps interview as someone with no experience?
I'm currently a software engineer, but my position was eliminated and they're letting me go. However, I was given an opportunity for redeployment into a DevSecOps role. The phone screening I had Thursday went well, and they said they'd follow up with me early this week to schedule a virtual interview with the team. The problem is I have absolutely no experience with DevSecOps; the most experience I have is knowing how to use GitHub and Microsoft Azure. What can I do to prepare? The job is to develop and maintain CI/CD pipelines (I've at least studied that so far) for embedded software in Azure DevOps.
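Since the role centers on Azure DevOps pipelines, it may help to get comfortable with the shape of an azure-pipelines.yml before the interview. A minimal sketch (the build and test commands are placeholders for whatever the embedded project actually uses):

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: make firmware   # placeholder: cross-compile the embedded target
    displayName: Build
  - script: make test       # placeholder: run host-side unit tests
    displayName: Test
```

Being able to talk through triggers, pools, stages, and how you'd add a security-scanning step tends to matter more than memorizing syntax.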
https://redd.it/1cfh1ky
@r_devops
Building Blog Using MERN Stack | Free Udemy course for limited time
https://www.webhelperapp.com/building-blog-using-mern-stack/
https://redd.it/1cfgmgi
@r_devops
Looking to solve problems
I'm looking to start a software project and fix a niche DevOps problem. I was thinking about creating a very easy-to-use but powerful CI/CD system that could be used in classified environments. I've noticed that everywhere I work, they are always using Jenkins. Jenkins is a beast in unclassified environments, but even harder to maintain on the classified side. Ideally, I would build this on top of Kubernetes. I'm open to other ideas. Like I said, I'm looking for a problem to solve.
https://redd.it/1cfm2an
@r_devops
How to access a nested value in Kusto when there is a dynamic number
Hello Guys,
I am trying to access the values of security rules in Azure for change analysis. Below is the KQL query:
arg("").resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
    targetResourceId = tostring(properties.targetResourceId),
    changeType = tostring(properties.changeType),
    correlationId = properties.changeAttributes.correlationId,
    changedProperties = properties.changes,
    changeCount = properties.changeAttributes.changesCount,
    clientType = properties.changeAttributes.clientType,
    name = tostring(properties.changes["properties.securityRules[18].name"].newValue)
| where targetResourceId contains "providers/Microsoft.Network/networkSecurityGroups/" and clientType !contains "Windows Azure Security Resource Provider"
| where changeTime > ago(5d)
| order by tostring(changeTime) desc
| project changeTime, targetResourceId, changeType, correlationId, changeCount, tostring(changedProperties), clientType, name
I would like to access the value of securityRules, but the index (18) is random. How do I write a query that doesn't care about the number 18, so I can still access the value via .newValue as shown above?
Kindly help me out. I have tried to use regex but I am not able to figure out how to do this.
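One approach (a sketch, not tested against your environment) is to expand the keys of the changes property bag with bag_keys() and match them with a regex, so the query no longer cares which rule index appears:

```kusto
arg("").resourcechanges
| where properties.targetResourceId contains "providers/Microsoft.Network/networkSecurityGroups/"
| extend changedProperties = properties.changes
| mv-expand changeKey = bag_keys(changedProperties)
| extend changeKey = tostring(changeKey)
| where changeKey matches regex @"^properties\.securityRules\[\d+\]\.name$"
| extend name = tostring(changedProperties[changeKey].newValue)
| project changeKey, name
```

Note that mv-expand produces one row per matching key, so a single change record that touches several rules yields one row per rule.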
https://redd.it/1cftk1l
@r_devops
How to properly use persistent storage in a dev container
Hello,
I'm learning about using dev containers and I'm quite new to all this, so I'm very sorry if my question is too basic.
How do you use persistent storage in a dev container on Windows with WSL2? From what I read, the best option in terms of performance is to use a Docker volume. If that's the case, how do you manage to quickly open a project, since on Windows the WSL backend stores volumes in a deep location? From my understanding, the actual code will be stored in the Linux file system (Ubuntu in my case); is this correct?
Thank you very much for your help.
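For what it's worth, a minimal sketch of a devcontainer.json that keeps the workspace in a named Docker volume (the names my-project and my-project-src are made up):

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "workspaceMount": "source=my-project-src,target=/workspaces/my-project,type=volume",
  "workspaceFolder": "/workspaces/my-project"
}
```

VS Code's "Clone Repository in Container Volume" command sets up something similar automatically, which also sidesteps the slow Windows-to-WSL file translation, since the code then never lives on the Windows side at all.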
https://redd.it/1cftzwh
@r_devops
Becoming a DevOps Contractor - how did you do it?
Hi all,
I've considered becoming a contractor in DevOps for a long time. It appeals to me: the ability to make more money and to have a bit more control over my work.
But for those of you who have gotten into contracting... how did you do it? At what point did you know you were smart enough and knew enough, and how did you network, etc?
https://redd.it/1cfv33x
@r_devops
Are you encouraging your team to switch to open standards?
I feel like every day we're still hearing about vendor lock-in and teams adopting tools and standards that make it impossible to switch vendors.
My personal hobby horse is OpenTelemetry: Even if we're going to use a vendor's monitoring tool and another vendor's metric storage/dashboards I still want it to use OTLP and the OpenTelemetry Collector. That way if we want to switch away there's at least a path to not be locked in.
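As an illustration of that escape path, here is a minimal Collector config sketch (the vendor endpoint is a placeholder) where migrating vendors means touching only the exporter section:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  # Placeholder vendor endpoint; switching vendors means changing only this block.
  otlphttp:
    endpoint: https://otlp.example-vendor.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```

The instrumented services only ever speak OTLP to the Collector, so they never need to know which vendor sits behind it.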
Observability is just one example: there's open vs. closed datastores, internal services like queueing, and of course the (possible) death of Terraform.
As part of your work defining the technical roadmap, do you make it a point to encourage open standards?
Do you feel like managers and execs are receptive to adopting open standards? Do they see the value?
https://redd.it/1cfw4a2
@r_devops
How do you setup OpsGenie?
What do you think a good OpsGenie configuration would look like in order for the tool to represent a real benefit?
I've already looked through the documentation, and there are some pretty cool ideas, but I'd be curious to get some feedback on what's realistic.
How do you define the priority of an alert? What are some good escalation rules to have?
On notification methods (SMS, email, etc.): since it's up to the operator to choose the method, I'd imagine it's important to recommend that teams always be reachable at the OpsGenie level, and to take care that every notification sent to a person is justified.
Unless there's a way of forcing a call in the event of a P1?
Also, small side-question, in the event of a major breakdown, to avoid being notified non-stop, is it possible to pause all alerts?
https://redd.it/1cfwqdk
@r_devops
Terraform, pull docker image from ECR
Hello everyone, I'm a software engineer transitioning into DevOps and recently began working with Terraform. I must say, I'm loving it! Terraform is an amazing tool. Currently, I'm working on small projects involving Lambdas, S3, and more.
My latest task was to deploy a Node.js container on an EC2 instance. I've managed to set up almost everything successfully. Here's a snippet of my EC2 instance configuration:
resource "aws_instance" "ec2_instance" {
  depends_on = [aws_iam_role.ec2_role, aws_ecr_repository.ecr_repo]
  ami = var.instance
  instance_type = var.instance_type
  subnet_id = aws_subnet.main.id
  key_name = aws_key_pair.ec2_key_pair.key_name
  vpc_security_group_ids = [aws_security_group.instance_sg.id]
  associate_public_ip_address = var.allow_public_ip
  iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name
  user_data = <<-EOF
    #!/bin/bash
    sudo apt update -y
    sudo apt install -y docker.io awscli
    sudo service docker start
    aws ecr get-login-password --region ${var.region} | sudo docker login --username AWS --password-stdin ${aws_ecr_repository.ecr_repo.repository_url}
    sudo docker pull ${aws_ecr_repository.ecr_repo.repository_url}
    sudo docker run -d -p 80:3000 ${aws_ecr_repository.ecr_repo.repository_url}
  EOF
  tags = {
    Name = var.instance_name,
    "source" = "terraform",
    "environment" = terraform.workspace
  }
}
The challenge I'm facing is with running the "user_data" section. The steps seem correct, and when I print the variables and execute the steps individually over SSH, everything works fine. I can even access my application. So, it appears that the steps are correct.
I've confirmed that Docker and the AWS CLI are installed, and I can successfully log in. When I SSH into the instance, I can pull the Docker image without any issues and run it.
What could I be missing? Any insights would be greatly appreciated!
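One guess, given that the commands work when run by hand: user_data is executed by cloud-init only on the instance's first boot, and changing it later does not by itself rerun the script on an existing instance. Checking /var/log/cloud-init-output.log on the instance shows what actually ran at boot and any errors. If stale user_data is the issue, the AWS provider has a flag to force replacement (a sketch, assuming AWS provider 4.x or later):

```hcl
resource "aws_instance" "ec2_instance" {
  # ...existing arguments...

  # Recreate the instance whenever user_data changes, so the script runs again.
  user_data_replace_on_change = true
}
```

Another common culprit at first boot is a race: the script runs before the ECR repository's image has been pushed, or before the instance profile's credentials are available, so the pull fails silently in the log.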
https://redd.it/1cfv9ia
@r_devops
Should I use AWS for my (temporarily) local startup or manage my own server?
So I've started planning to develop an app and deploy it in my home city first, with a young audience. Keep in mind, I'm also in college, so I'm thinking of keeping costs as low as possible, and given that only one region needs to be served, I thought that would be simple enough. Let's say (if we're lucky) the platform reaches a total of 10 thousand daily active users, which is roughly 7% of the young population of the city. The idea is that we first launch in this metropolis we live in, and if we reach a decent number of users and MRR, we expand.
Anyways:
I was looking into how I would deploy the app once I actually develop it, so I took a look at AWS. I was surprised to see that an EC2 setup with 4 vCPUs and 8 GiB of memory (with 2 baseline and 4 peak instances) costs around 200 dollars a month. RDS for Postgres was also unexpectedly expensive. I didn't even bother to check the rest (S3, monitoring, CI/CD pipeline, ...). I then remembered that the little VPS I had started renting from a more local provider (for silly personal projects and some websites) costs me only 6 euros a month and is advertised as 4 vCores (x86), 4 GB RAM, 80 GB SSD (RAID10), 80 TB traffic. Increase the budget to 100 euros a month, and you've got yourself a dedicated server with 24 cores and 128 GB of RAM. Sure, managing my own VPS was a bit of a headache, and I understand Amazon perhaps performs to a much higher standard, but this price difference is just mind-boggling to me. If I'm misunderstanding something about these prices, please point it out.
My question is, given these price differences, what (other than the hundreds of additional services, that are totally unnecessary for my business, and the easy scaling) would be reasons to still choose AWS?
https://redd.it/1cfzzwn
@r_devops
KusionStack step inside Platform Tooling Landscape
It's true: KusionStack has recently been included in the Platform Tooling Landscape, in the Platform Orchestrator category. It is also the first open-source project focused on Platform Engineering to be included in both the CNCF Landscape and the Platform Tooling Landscape.
https://redd.it/1cg06j3
@r_devops
Should each component of a project live on its own server/instance?
A basic question regarding architectural best practices!
I'm setting up an open access data portal where people can visit a website and find visualised datasets related to my industry.
I've identified a few open-source components to help do this well (it's a non-profit idea so that has swayed a lot of the decision-making so far):
-> Metabase for the frontend / data visualisation
-> Airbyte for data pipeline management
-> PostgreSQL (offsite) as a standalone cluster for the data itself
-> WordPress/Ghost/Drupal for a blog
Approaches I've considered:
-> One large VPS running all the moving pieces, with DNS routing to the various components (say, pipeline.myproject.com routes to Airbyte, etc.)
-> Each component lives on its own VPS, with a VPN in the background for efficient communication between the "nodes"
And my question:
Is there a right option and a wrong option here? Is instance-per-component more ideal?
TIA!
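For the single-VPS option, a rough Docker Compose sketch of the shape (images and ports are illustrative, and Airbyte actually ships its own Compose bundle); a reverse proxy in front would map each subdomain to the matching port:

```yaml
services:
  metabase:
    image: metabase/metabase
    ports: ["3000:3000"]
  blog:
    image: ghost:latest
    ports: ["2368:2368"]
  # Airbyte omitted here: it is normally deployed from its own compose bundle.
```

Components on one Compose network can also reach each other by service name, which removes the need for any VPN between "nodes" in this layout.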
https://redd.it/1cg30r2
@r_devops
url redirect to internal app system design
Is there any open-source app with URL redirection similar to team1.slack.com / team2.slack.com?
I was looking for any public open-source app on GitHub (app plus db) with built-in URL redirection to internal microservices.
I tried searching:
https://slack.engineering/?s=system%20design
https://redd.it/1cg1kzo
@r_devops
Terminal Interface for Hashicorp Vault
Hi everyone!
Just wanted to share a TUI app I've been working on for a while.
It's inspired by other popular ones like k9s, wander, etc., to give you a nice interface while never leaving your terminal for your work with Vault.
Repo: https://github.com/dkyanakiev/vaul7y
Hopefully this brings more people to it so they can give it a spin and share any feedback and/or bugs!
https://redd.it/1cg5tgc
@r_devops
What would you say if your boss suddenly said you were responsible for some legacy Python 2 app that is the core of the platform and had to move it to Python 3, together with multiple other improvements to the code base (as a DevOps engineer)?
Just curious if something like this has happened to you and how you reacted/proceeded.
Also curious how many DevOps engineers suddenly become responsible for parts of the core products (maintenance/development).
Edit: This is not a small app; it's hundreds of thousands of lines, written over a few years, in frameworks that have not been maintained in over 5 years.
https://redd.it/1cg9jrb
@r_devops
SIEM OVERAGES
👉I’m looking for IT and #DevOps pros who are struggling with their Splunk spend and interested in reducing those overages!(with or without replacing their SIEM) 💭Let’s connect for 30 minutes! You’ll learn about the latest in observability tooling and use cases!!! In exchange for 30 minutes of your time you’ll also be fed🔥🔥🔥 Let’s do this!!! Your sharing of this post will be equally appreciated 😇
https://redd.it/1cggiaa
@r_devops
User monitoring
What tools or software do y’all use to monitor user behavior. Things like what buttons a user clicks on, how long they stay on pages, and other data used to analyze user experience.
https://redd.it/1cghqfm
@r_devops
An "all-in-one" cli tool vs separate tools
We currently have a repository with a hundred different scripts and binaries all written in different languages that people took on as side projects to automate parts of our job.
There is an incentive to move all of these tools into a single Golang CLI application, but there are some drawbacks. We currently don't have any quality control or ownership of these tools whatsoever, so we need to figure out some sort of process for this. Besides, none of the team members have a development background, so it's been pretty wild what you see in the code of the current tools.
Would love to hear some thoughts on whether this would be a good/bad idea and what to look out for.
https://redd.it/1cghg2e
@r_devops
Will anyone be at RSA in May?
Would love to meet other individuals in the same space
https://redd.it/1cgghmd
@r_devops
An article citing important S3 bucket pricing "vulnerability": How an empty S3 bucket can make your AWS bill explode
Thought it was important to disseminate the lessons in this blog post:
S3 charges you for unauthorized incoming requests
Anyone who knows the name of any of your S3 buckets can ramp up your AWS bill as they like.
Adding a random suffix to your bucket names can enhance security.
When executing a lot of requests to S3, make sure to explicitly specify the AWS region.
Read more here: https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1
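On the random-suffix recommendation, a tiny sketch (the base name example-logs is made up) of generating an unguessable bucket name:

```python
import secrets

def suffixed_bucket_name(base: str) -> str:
    """Append an 8-hex-character random suffix so the bucket name cannot be guessed."""
    return f"{base}-{secrets.token_hex(4)}"

print(suffixed_bucket_name("example-logs"))
```

Since bucket names are global and requests to a guessed name are billed to the owner, an unguessable name keeps strangers' misdirected traffic off your bill.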
https://redd.it/1cgklx9
@r_devops