how do you keep your monitoring scripts in sync between servers?
I work at a company with a lot of Linux servers monitored by Nagios/Icinga, and we often need to modify our monitoring scripts. The issue is that we then need to spread these modifications to all the servers we are monitoring.
I have been thinking of creating a repository on GitHub for the monitoring scripts, setting up the git repository on all the servers, and then running a periodic git pull on each of them, so that any modification pushed to the centralized GitHub repository is automatically synchronized to all the monitored servers.
The reason I wanted a git repository is to keep the history of all the modifications done to each monitoring script.
Is that a good idea? Any other suggestions? How do you handle this if you are in the same situation as me?
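The periodic-pull idea can be sketched like this (the path, schedule, and script name are assumptions, not a standard; a config-management tool such as Ansible or Puppet would cover the same ground):

```shell
#!/bin/sh
# Hypothetical sketch: keep the local checkout of the monitoring-scripts
# repo up to date. Assumes each server already has a one-time clone of the
# repo in PLUGIN_DIR (e.g. via a read-only deploy key).
# Run from cron on every monitored server, e.g.:
#   */15 * * * * /usr/local/sbin/sync-plugins.sh
PLUGIN_DIR="${PLUGIN_DIR:-/usr/lib/nagios/plugins-custom}"

sync_plugins() {
    # --ff-only: if someone edited a script directly on a server, the pull
    # fails loudly instead of silently merging the local drift.
    git -C "$PLUGIN_DIR" pull --ff-only --quiet
}
```

The --ff-only flag is the important design choice here: it turns local drift on a server into a visible failure rather than an accidental merge commit.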
https://redd.it/t08g7n
@r_devops
Is there any way to create a free NAT gateway on AWS?
Hey everyone!
I'm currently deploying my first little project on AWS and was a bit sad to see the bill yesterday, which would probably not allow me to run this long term. The app maintains a project website for a university chair I work at. I have a Lambda function, packaged as a Docker container image on ECR, that connects to S3 and RDS and sends requests to different websites.
When reading up on Lambda, my understanding was that a function can either (1) only connect to the internet (and not access resources within my VPC), (2) only access resources within the VPC but not connect to the outside, or (3) do both if you set up a NAT gateway. I believe that I need both; please correct me if I'm wrong. So I set up a NAT gateway, which works fine. However, it charges $0.045 per hour, which amounts to around $30 per month, a little much for a project that does not generate any profit (I cannot ask my uni to pay, because they insist we should use their 2017 Debian server that hasn't been updated since).
I have tried to find a way to decrease this cost. This article (CloudForecast's AWS NAT Gateway pricing guide) suggests running a NAT instance on a t3.micro, but it seems like AWS does not want to support this in the future. I assume it would also be possible to have other Lambda functions create and destroy the gateway every time it is needed, but that sounds very complicated, and I would like to keep this as simple as possible.
Do you have any advice on what I could do here?
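One angle worth checking: S3 (and DynamoDB) offer Gateway VPC endpoints at no hourly charge, and RDS is reached inside the VPC anyway, so the NAT path may only be needed for the calls to external websites. A sketch with the AWS CLI (all IDs are placeholders):

```shell
# Placeholders: substitute your real VPC and route table IDs.
# A Gateway endpoint lets the in-VPC Lambda reach S3 without a NAT gateway;
# only the outbound calls to external websites still need NAT (or a small
# self-managed NAT instance).
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
```

This doesn't remove the NAT cost entirely, but it shrinks what the NAT (gateway or instance) has to carry to just the external-website traffic.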
https://redd.it/t07330
@r_devops
Packer experts, I need your help
I am trying to create an image out of a base image using Packer.
I am using this:
shared_image_gallery {
  subscription   = "00000000-0000-0000-0000-00000000000"
  resource_group = "ResourceGroup"
  gallery_name   = "GalleryName"
  image_name     = "ImageName"
  image_version  = "1.0.0"
}
managed_image_name                = "TargetImageName"
managed_image_resource_group_name = "TargetResourceGroup"
The problem is that Packer throws an error saying I need to provide plan info. However, this is a custom image and shouldn't need those details.
Can anyone please help me with this? I've been stuck on this issue for a long time.
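One common cause (an assumption here, since the full error text isn't shown): if the base image ultimately derives from an Azure Marketplace image that carries a purchase plan, Azure propagates that plan to derived images, and Packer's azure-arm builder then requires a matching plan_info block. The values below are placeholders and must match the original Marketplace offer:

```hcl
plan_info {
  plan_name      = "PlanName"
  plan_product   = "PlanProduct"
  plan_publisher = "PlanPublisher"
}
```

If the base image genuinely has no plan attached, checking the gallery image's definition in the Azure portal for an inherited plan is a reasonable first step.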
https://redd.it/t09ygi
@r_devops
Looking for DevOps/Cloud Engineers in Europe
Hi! I'm not part of HR at my company, just an employee looking for new team members, as it's really difficult to hire new people in tech.
Do you have experience in topics related to DevOps, Cloud, or Systems, and do you live in Europe? Are you looking for a new experience? Please PM me and I will forward your application to the right person, who will then meet with you via video call to see if we have found a match.
Thank you for your interest!
https://redd.it/t0d80d
@r_devops
Orchestrating Vulnerability Scanning with Kubernetes - Watch here - https://youtu.be/btEVJQooL9s
https://youtu.be/btEVJQooL9s
https://redd.it/t0ag6r
@r_devops
Welcome to AppSecEngineer’s first livestream in 2022! In this session, your favourite instructor Abhay Bhargav is demonstrating how to orchestrate vulnerability scans with Kubernetes.
A little background on this: performing vulnerability scans manually isn’t…
Azure experts, I need your help! Application Gateway: is it possible to preserve the original Application Gateway URL but have the gateway redirect or send to another URL?
I have https://user.mysite.net. This is pointed at the public IP of the Application Gateway (WAF_v2). When a user hits this URL, I want them to be taken to https://test.mysite.com/user1 .
However, at the same time, I want the user to still see user.mysite.net in the browser; they shouldn't see test.mysite.com/user1. I think this has to do with rewrite rules, but I am struggling with the order of operations here... I'm also not entirely sure this is possible.
test.mysite.com/user1 is an application in the same tenant but a different subscription, running on a VM.
https://redd.it/t0cn68
@r_devops
Why were DDoS attacks successful against Ukrainian banks? Are they using outdated technology, or were they not architected well?
Would the results have been different if they were using public cloud providers ? (Assuming if they were not)
https://redd.it/t0dmd6
@r_devops
Automating Jenkins and Artifactory using Python
Hey everyone.
I'm trying to check the build success/failure results in Jenkins and the currently available versions in Artifactory using Python scripts, in order to automate some tasks.
So basically, from Python, I'll have to log in to each of them and call some APIs to get the information, like /checkLastBuildsFormUser X in the case of Jenkins and /getLastVersionsForProject Y in the case of Artifactory.
Before doing this I should log in and get a token or session or something in order to keep calling the APIs, and that's where I'm blocked, right at the beginning...
At the moment I'm struggling with the login; every time I try I get a 403 Forbidden response (currently trying Artifactory).
When calling the login API, other than the body with the JSON containing the username and password, what else must I include?
And what part of the response should I use in the subsequent requests?
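For what it's worth, neither tool requires a login endpoint or a session: both Jenkins and Artifactory accept HTTP Basic auth on every request (Jenkins with a user API token, Artifactory with a password, API key, or access token), which sidesteps the login call that may be producing the 403. A sketch with curl (all URLs, names, and credentials are placeholders):

```shell
# Jenkins: basic auth with a user API token on each request; no session needed.
curl -s -u "user:jenkins-api-token" \
    "https://jenkins.example.com/job/my-job/lastBuild/api/json"

# Artifactory: same pattern; list repositories as a connectivity check.
curl -s -u "user:artifactory-token" \
    "https://artifactory.example.com/artifactory/api/repositories"
```

In Python the equivalent is passing the same credentials as basic auth on every request, so there is no token from a login response to carry forward.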
https://redd.it/t0i6im
@r_devops
Beware of GitLab billing issues
TL;DR - GitLab makes an egregious billing mistake, refuses to fix it, and tells a GitLab evangelist to go pound salt. If you purchase it, examine the order closely.
So, a little background on me: I started at a software company years ago in an IT position. Our traditional software development toolchain was overly complicated for my liking, so I set up GitLab.
I did so well with it that I became my company's first DevOps Engineer, and I got dev teams to make the switch. Not only did I present on GitLab at work, I took my GitLab evangelism on the road to enthusiasts in the area, e.g. the local Linux User Group.
Not long ago, I ordered some GitLab licenses since more people wanted to use it. I asked to go from 57 to 75 licenses. Instead, GitLab put the order in wrong and added 75 licenses, bringing us to 132 total.
About this time, I was pulled to a critically-important project that was way behind schedule and told not to work on anything else. When I got enough breathing room to switch back, our account manager acted like she couldn't care less. The most I ever got was "I'll be sure to look into it" or "I'm still looking into it".
The process dragged on for weeks. I had to nag her over and over again for updates until she finally told me that GitLab's billing department had decided... not to give me a refund because it had been too long. How convenient, especially after dragging out the process for so long.
I complained about this, asked for a new account manager, and got what I requested. Our new account manager took my concerns to the GitLab crew again... and got told once again that not only would we not receive a refund, GitLab wasn't going to offer us any sort of compensation or credit whatsoever.
We're a software company as well, and we would never treat loyal customers this way - especially not our power users. I've built my DevOps career around GitLab and encouraged others to do the same. That GitLab could be so tone-deaf over a problem that was clearly their fault speaks volumes to how the company has changed.
I'm grateful for what GitLab has provided. It's still a good product, even if I'm gravely concerned about its future. But I'm hanging up my GitLab evangelist hat. A few of my company's senior developers are interested in GitLab alternatives, and I've given the thumbs-up to do a proof-of-concept with one of them later this year.
If you choose to use GitLab in your organization, check your bills carefully.
https://redd.it/t0qizc
@r_devops
mineOps Part 5 Released!
Following up from my original post, https://www.reddit.com/r/devops/comments/rvkh6w/a_new_blog_series_mineops/.
At long last I've finally finished and released part 5 of the mineOps series, Making Containers Highly Available.
https://blog.kywa.io/mineops-part-5/
https://redd.it/t0uz62
@r_devops
GitHub SSH access to multiple repos
I'm trying to add my SSH key to GitHub so I can clone multiple repos in my organization. I was able to add my SSH key, but it only lets me clone one repo. The remaining 80 repos give an error message that says:
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Is there a way that I can automatically log in and clone the repos daily without being prompted for my credentials?
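One likely cause (an assumption, since the post doesn't say how the key was added): a key added as a per-repository deploy key only grants access to that single repo. Attaching the same key to a user account (or a dedicated machine user) with read access to the organization covers all repos, after which SSH cloning needs no password prompt. A sketch (org and repo names are placeholders):

```shell
# Assumes the SSH key is attached to a GitHub account that can read the
# org's repos (not a single-repo deploy key) and is loaded in ssh-agent.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Clone (or re-clone) each repo over SSH; no credential prompt appears.
for repo in repo-one repo-two; do
    git clone "git@github.com:my-org/${repo}.git"
done
```

For the daily refresh, the same loop can run git pull from cron once the clones exist.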
https://redd.it/t0mge5
@r_devops
aws lambda invoke
Hi, let's say I make a simple Flask web app where a user can generate an image. Is it possible to invoke a Lambda function when the user clicks on "generate image"? Mostly I've used Lambda for S3 put-object events, so I'm not sure about this.
Thanks
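Yes: besides fronting the function with API Gateway or a Lambda function URL, any caller with AWS credentials can invoke a function directly, which is what the Flask backend could do on each click (via an AWS SDK call). The CLI equivalent, as a sketch (function name and payload are placeholders; AWS CLI v2 needs the base64 flag shown for a raw JSON payload):

```shell
# Direct synchronous invoke; the response body lands in response.json.
aws lambda invoke \
    --function-name generate-image \
    --cli-binary-format raw-in-base64-out \
    --payload '{"prompt": "a cat"}' \
    response.json
```

For anything slow (image generation often is), an asynchronous invoke or a queue in front of the function is worth considering so the web request doesn't block.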
https://redd.it/t13ejp
@r_devops
Docker build in GH Actions. Check if image digest is the same as previous before pushing
Hello, I'm trying to build a CI pipeline with GitHub Actions:
on:
  push:
    branches:
      - cicd
permissions:
  id-token: write
  contents: read  # This is required for actions/checkout@v2
name: Build images to ECR and deploy them to ECS
jobs:
  deploy:
    name: deploy
    runs-on: ubuntu-20.04
    steps:
      # Increments the version for the image tag
      - name: gh auth login
        env:
          pattoken: ${{ secrets.REPOACCESSTOKEN }}
        shell: bash
        run: gh auth login --with-token <<< "${{ env.pattoken }}"
      - name: gh secret set env
        env:
          secretname: 'MINOR'
          secretrepo: Nasini-Trading/ArqLogger-Server
        shell: bash
        run: gh secret set "${{ env.secretname }}" --body $((${{ secrets.MINOR }} + 1)) --repo "${{ env.secretrepo }}"
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::XXXX:role/GithubActionsRole
          role-session-name: GithubActionsSession
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push backend image to Amazon ECR
        id: build-backend
        env:
          ECRREGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECRREPOSITORY: arqlogger-server-backend
          IMAGETAG: ${{ secrets.MAJOR }}.${{ secrets.MINOR }}
        working-directory: ./backend
        run: |
          docker build -f Dockerfile -t $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG .
          docker push $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG
          echo "::set-output name=image::$ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG"
      - name: Build, tag, and push frontend image to Amazon ECR
        id: build-frontend
        env:
          ECRREGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECRREPOSITORY: arqlogger-server-frontend
          IMAGETAG: ${{ secrets.MAJOR }}.${{ secrets.MINOR }}
        working-directory: ./frontend
        run: |
          docker build -f Dockerfile -t $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG .
          docker push $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG
          echo "::set-output name=image::$ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG"
where I'm:
- logging into ECR
- using a GH secret to increment the tag (so each new docker image has a 0.1, 0.2, 0.3... version tag)
- building the images (one from the Dockerfile in the frontend folder, one from the backend)
- tagging with the MAJOR.MINOR version secrets
My problem is that sometimes I don't change anything in the image build, but it nonetheless triggers a new version.
I want to use the image digest/checksum of the just-built image to compare with the digest of the previous ECR image. If they are the same (no changes in the content of any layer of the image), the push should not be triggered.
Any ideas?
EDIT: For some strange reason Reddit doesn't allow me to write ${{enb}} with the "v". It gives error 403. Odd...
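One caveat before comparing digests: a registry digest is a hash of the pushed manifest, so it doesn't exist for a local build that hasn't been pushed yet, and a rebuild only reproduces it when every layer comes from cache. A workable sketch is to compare local image IDs instead, assuming you also push a moving tag such as latest (otherwise you'd query ECR for the newest tag first; all variable names match the workflow above):

```shell
# Compare the image ID (a content hash of the image config) of the freshly
# built image against the previously pushed one, and skip the push if equal.
PREV="$ECRREGISTRY/$ECRREPOSITORY:latest"
docker pull "$PREV" || true   # previous version, if any exists yet

new_id=$(docker inspect --format '{{.Id}}' "$ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG")
old_id=$(docker inspect --format '{{.Id}}' "$PREV" 2>/dev/null || echo none)

if [ "$new_id" = "$old_id" ]; then
    echo "image unchanged; skipping push"
else
    docker push "$ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG"
fi
```

Note that if the build isn't fully cache-hit, the image ID changes even for identical sources, so pinned base images and build caching in CI are what make this comparison meaningful.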
https://redd.it/t15mbu
@r_devops
Just a day of "Why am I bothering with this lunacy?"
Rant I guess. Sorry.
I've had this ticket for a week or so, it's really simple - make sure EBS volumes are encrypted.
I am relatively new in a large org, but I figured - that should be pretty easy, I can do that.
Packer, Terraform, AWS... build a new instance make sure the attached volumes are encrypted.
So I made a branch, put in the relevant...
"No no, on this system we don't use THAT gitlab instance we use THIS gitlab instance..."
Okay, I can't get access to that...
"Ah you need an LDAP account to use the VPN it's behind"
3 days later...
"Okay you have that LDAP account... You should able to use that VPN now..."
Yep, that VPN works but I can't see any projects in that Gitlab...
"Oh yeah, so now you need to get your LDAP account added to that Gitlab instance..."
Okay, who do I ask about that?
"Don't know, ask in standup"
Okay, so now I have my LDAP account in that GitLab instance, but I can't push any code because it requires a GPG key for signing... I have a GPG public key, but it's not accepting it because it's associated with an identity for the GitLab SaaS instance...
I create a new GPG pair, try and upload it.
"No we don't recognise this because the machine you're using is not associated with the LDAP identity provided..."
FML
Project manager: "Nezbla we are waiting on this Jira ticket about EBS volumes to be closed so we can include it on the next release!! What have you been doing all this time!! Can you work harder please?!?"
It's 3 lines of code in the packer template and a minor Terraform tweak with IAM policies regarding KMS....
I'm not happy it's taken me a week to get this nonsense merged... In fact I'm verging on furious.
But it's Friday after 5pm so I've a cold pint of beer in hand and considering how much better life could be if I'd decided to be a lumberjack...
https://redd.it/t193fk
@r_devops
Russia traffic
Seems like all leaders are busy figuring out what ‘sanctions’ to announce next against Russia. DevOps should not miss out; what about configuring traffic drops?
https://redd.it/t1dnql
@r_devops
Any recommendation for some must-know DevOps skills or fundamentals?
What are some of the fundamentals or skills that you think an individual must know in the DevOps field?
https://redd.it/t1qztw
@r_devops
How to host HTML / JS / CSS?
Working on a small project to teach myself DevOps. I built a simple "notes" web app with a file each for HTML, JS, and CSS. What is a good web server platform to use for hosting this? I've used Apache a bit in the past, but that's about it. I want to run it on an AWS EC2 instance and branch out from there using DevOps tools to learn the ropes.
Note: I know there are simple ways to host a static website, like in S3, but I explicitly want to overengineer it a bit so I can work with more DevOps tools.
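Nginx (or Apache, which you already know) is a common choice for this. A minimal sketch for a Debian/Ubuntu-based EC2 instance (package manager, paths, and file names are assumptions depending on the AMI and project layout):

```shell
# On the EC2 instance (Debian/Ubuntu AMI assumed; use yum/dnf on Amazon Linux).
sudo apt-get update && sudo apt-get install -y nginx

# nginx's default site serves /var/www/html on port 80.
sudo cp index.html app.js style.css /var/www/html/
sudo systemctl enable --now nginx

# Then allow inbound port 80 in the instance's security group and browse
# to the instance's public IP or DNS name.
```

From there the natural next DevOps steps are scripting this setup (cloud-init/Ansible), then templating the instance itself (Packer/Terraform).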
https://redd.it/t1hzoc
@r_devops
Scale Jenkins Behind Webhook
We have a Jenkins setup on Kubernetes. Agents leverage Kubernetes pods dynamically, but the master controller is just one, and it has become a bottleneck and single point of failure.
We can shard the controller, for sure. But I wanted to check whether we can completely abstract and decouple the controllers from consumers.
We intend to completely hide Jenkins behind an event handler or a webhook service like Svix, so we can distribute the jobs to any Jenkins controller.
Is this feasible? Maybe I am missing something obvious.
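The dispatch side of that idea can be sketched independently of Jenkins: a small selector that maps each incoming webhook to one of N controller base URLs, which the webhook handler would then call (Jenkins's own trigger endpoint is a POST to /job/&lt;name&gt;/build with token or crumb auth). A minimal round-robin sketch, with the controller URLs as placeholders:

```python
import itertools
from typing import Iterator, List


class ControllerPool:
    """Round-robins incoming jobs across Jenkins controller URLs."""

    def __init__(self, controllers: List[str]) -> None:
        if not controllers:
            raise ValueError("need at least one controller")
        self._cycle: Iterator[str] = itertools.cycle(controllers)

    def pick(self) -> str:
        """Return the controller that should receive the next job."""
        return next(self._cycle)


# Hypothetical controller URLs; the webhook handler would POST the
# build trigger to pool.pick() for each incoming event.
pool = ControllerPool([
    "https://jenkins-a.internal",
    "https://jenkins-b.internal",
])
```

The hard part is not the routing but the state: job history, credentials, and plugin config live per controller, so consumers must not care *which* controller ran the job, or you need sticky routing per job name instead of pure round-robin.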
https://redd.it/t1qxy5
@r_devops
lambda pipeline and buildspec
So I have an application that I want to run in Lambda. My pipeline picks the code up from GitHub, then builds it in CodeBuild, and I want the "jar" file from CodeBuild to go into an S3 bucket so that I can create a deploy stage for Lambda that picks up the changes from there.

Does this sound like a good plan? Also, what should my buildspec.yml look like? Basically I want to copy the jar file present in /targets into S3.
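That is a common CodePipeline shape (Source → Build → Deploy). A buildspec along these lines might work, assuming a Maven build and assuming the jar name and bucket (both placeholders here); the CodeBuild role needs s3:PutObject on the bucket, and CodePipeline can also hand the declared artifact to the deploy stage directly instead of the manual copy:

```yaml
version: 0.2

phases:
  build:
    commands:
      - mvn package   # assumed build tool; produces the jar under the output dir
  post_build:
    commands:
      # bucket name and jar path are placeholders
      - aws s3 cp targets/my-function.jar s3://my-lambda-artifacts/my-function.jar

artifacts:
  files:
    - targets/my-function.jar
```

One thing to double-check: Maven's default output directory is `target/`, not `/targets`, so verify the actual path your build produces before wiring up the copy.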
https://redd.it/t1pu53
@r_devops
Hashicorp Packer - VMware timeout over 1h
My Packer builds are timing out because they take over 1h (Windows updates...). If I disable Windows Updates, it works fine. Any idea how to overcome this issue? I can't seem to find what I'm looking for anywhere...
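If the timeout is the WinRM communicator's (its default is 30m), raising `winrm_timeout` in the source block is the usual fix for builds that sit in Windows Update for a long time. A hedged sketch, assuming the vmware-iso builder (source name and other settings are placeholders):

```hcl
source "vmware-iso" "windows" {
  # ... iso, vm, and credential settings elided ...
  communicator  = "winrm"
  winrm_timeout = "4h"   # default is 30m; long update runs need headroom
}
```

If the build instead dies during a provisioner step, check that provisioner's own timeout setting too (the windows-update community provisioner has one), since the communicator timeout only covers the initial connection window.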
https://redd.it/t1xvxk
@r_devops
What Are My Options For Running SonarQube In A Pipeline?
I have previously run SonarQube using the gradle plugin and the server running on localhost.
I have also worked at larger companies where they have a dedicated server instance.
I now need to run it as part of a build pipeline, not just locally. However, managing a server, keeping it up to date with patches, leaving it running all the time when I don't need it, etc., seems like a pain, and I'm on a shoestring without anyone to manage it.
There is a plugin that shows the results in the pipeline (which I want); however, it is geared entirely around having a standalone server.
What are my options for running SonarQube in an (Azure) pipeline?
NB: I also need a guide or link covering the steps for each option.
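One server-free option is to start SonarQube as a throwaway Docker container inside the pipeline job itself and point the scanner at localhost; the trade-off is that analysis history is lost between runs unless you persist the container's volume. A rough Azure Pipelines sketch, with the Gradle task name and credentials as assumptions (newer versions of the Gradle plugin call the task `sonar`, older ones `sonarqube`, and recent SonarQube versions may force a password change away from the admin/admin default):

```yaml
steps:
  - script: |
      docker run -d --name sonarqube -p 9000:9000 sonarqube
      # Wait until the server reports UP before scanning.
      until curl -sf http://localhost:9000/api/system/status | grep -q '"status":"UP"'; do sleep 5; done
    displayName: Start throwaway SonarQube

  - script: |
      ./gradlew sonarqube \
        -Dsonar.host.url=http://localhost:9000 \
        -Dsonar.login=admin -Dsonar.password=admin
    displayName: Run analysis
```

The other zero-maintenance route worth weighing is SonarCloud, the hosted service, which the Azure DevOps marketplace tasks integrate with directly and which keeps history between runs.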
https://redd.it/t1o4yx
@r_devops