Debugging Azure DevOps
I'm trying to build my first Azure CI/CD pipeline and to be honest I'm having a nightmare. The pipeline I'm building is just the standard tutorial one to create a storage account in Azure using an ARM template.
I keep getting the error message:
"Unexpected character encountered when parsing template"
I've checked and rechecked my ARM template and my YAML. I use the ARM add-on in VS Code to check the JSON, and it seems fine. I can also manually deploy the ARM file in Azure and it works no problem.
I'm not really asking for help on the specific error; what I actually need is a good place to go to find out exactly what the error is telling me. I can't find any record of this error in the MS docs, and the usual places are no help. So, any Azure DevOps people out there: do you have a go-to resource somewhere out there to help you fix problems with pipelines?
https://redd.it/szp010
@r_devops
Looking for an OPEN SOURCE/CROSS-PLATFORM deploy script. What is everyone using nowadays?
I'm looking for a scripting language that I can create an installer with. I was wondering what is popular nowadays. Something I can run from the Linux or Windows command line that will unzip packages into the right locations, read environment variables from a text file, inject those into the appropriate config files, etc.
I'm open to automation with Terraform, Ansible, etc. Is PowerShell my only option, or are there better, more task-specific scripting languages I can use?
https://redd.it/szqhs6
@r_devops
What is the best way to manage MySQL users
We have an RDS MySQL database, and currently users are created and deleted manually. We are starting to grow, and I would like to automate this process.
I already have an Ansible playbook which adds all the developers' SSH keys to our VMs, so ideally I would just build off of that and manage MySQL users with the Ansible MySQL module.
Even with that module, I am still unsure how to create and distribute the passwords once a user is created.
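For what the automated version might look like, here is a hedged sketch of a task using the community.mysql collection; the endpoint, database, and privilege values are made up, and passwords would typically come from a vaulted variable or a secrets manager rather than plain text:

```yaml
# Hypothetical task: ensure each developer has a MySQL account on the
# RDS instance, run against the RDS endpoint from the Ansible controller.
- name: Ensure developer MySQL accounts exist
  community.mysql.mysql_user:
    login_host: "{{ rds_endpoint }}"
    login_user: "{{ mysql_admin_user }}"
    login_password: "{{ mysql_admin_password }}"
    name: "{{ item.username }}"
    password: "{{ item.password }}"
    priv: "appdb.*:SELECT,INSERT,UPDATE"
    host: "%"
    state: present
  loop: "{{ developers }}"
  no_log: true   # keep the passwords out of task output
```

For distribution, one common pattern is to generate each password on the controller and publish it to a secrets manager the developer can read, rather than handing it over directly.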
https://redd.it/szojut
@r_devops
Understanding Nginx tail latencies
In this article we trace Nginx running on an 80-CPU server as a CDN node at one of the world's largest Internet exchange points. We reveal that a lightweight monitoring process can cause severe latencies due to the Linux CPU scheduler. During the investigation we had a lot of fun with eBPF and perf.
https://tempesta-tech.com/blog/nginx-tail-latency
https://redd.it/szsihc
@r_devops
DevOps Case Study for Interview Process
I've been doing some interviewing and have been given some pretty bad technical assessments so far. One company I liked wanted me to do a case study instead of the standard technical assessment for their third round.
However, once I got the details on the case study, it looks like they want candidates to solve issues they're currently facing as part of the interview process. Essentially, I'm expected to research and come up with a solution to one of the issues presented, then give a one-hour presentation (hopefully with a lot of time for questions).
I don't think this is a complete bait-and-switch for free consulting, as I'm pretty confident there are actual positions to be filled, since multiple roles are posted. However, I feel like it is still free work provided by any candidates they don't move forward with.
Am I reading too much into it?
https://redd.it/szpvi0
@r_devops
Leveraging the Terraform state file for creating resources in Azure
I'm trying to deploy some Azure resources via Terraform for a new environment. The original environment was set up via the GUI, but I was able to import the resources via terraform import. I can see all the configurations in the state file.
Although I can create resources via Terraform, the documentation only lists a few arguments (e.g., name, location, etc.). Is there a way that I can take info from the state file and use it to create new resources with the additional arguments? For example, I can create an App Service Environment v3 with some arguments. However, I imported an App Service Environment, and it has a lot more info. How can I add all the additional info (e.g., inbound_network_dependencies and all of the IP addresses) when deploying via Terraform? There has to be a way to add all the extra info at deployment.
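One way to see everything Terraform recorded for an imported resource is to dump it from state (the resource address below is hypothetical):

```shell
# Prints every attribute held in state for one resource. Arguments the
# provider documents as settable can be copied into new configuration;
# attributes the provider marks as computed (read-only) cannot be set
# in config and are filled in by Azure after the resource is created.
terraform state show azurerm_app_service_environment_v3.example
```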
https://redd.it/szwjn9
@r_devops
flamegraph.com - A tool for uploading, analyzing, and sharing flamegraphs
At Pyroscope (open source continuous profiling) we use flamegraphs extensively to visualize and analyze profiling data. However, one of the worst parts about using flamegraphs for analysis is that they are kind of annoying to share.
We often found ourselves sharing marked-up screenshots which would lose significant portions of information and important functionality for analysis.
As a solution, we created **flamegraph.com** which creates a shareable link to the flamegraph.
We did this using various components from Pyroscope:
backend: pprof conversion APIs, adhoc api, diff calculator endpoint
frontend: FlamegraphRender component
We added 4 main functionalities for the release of flamegraph.com and we hope to improve these and also add more features in the future:
1. Ability to upload single flamegraph
2. Ability to calculate diff between two flamegraphs
3. API for turning a flamegraph into a shareable link (we use this api for our slackbot / vscode extension)
4. Go playground which both runs and profiles code (currently code has to run for at least ~5 seconds)
Our goal is to improve the experience of sharing flamegraphs and make it easier to do as profiling becomes a more common part of developers' observability toolkits.
There are definitely a lot of possibilities for flamegraph.com, so we would love to hear about your experiences with profiling and sharing flamegraphs with your teams, and how we could help improve that!
https://redd.it/szv5t5
@r_devops
I accidentally got into devops, what now?
So against all odds I passed two rounds of interviews for the largest university in my country, which has about 200 people all in all working in IT (beat 70 candidates, woo!). I was selected to be in DevOps, specifically spinning up RHEL Linux servers across the different campuses, using, among others, Ansible, Kubernetes, and Terraform. Drifting around 2000 servers.
The problem is... I do not even have an IT degree. I was upfront about this and very honest during my interview. I have maybe 10 university courses in IT under my belt, and some experience here and there. I've been using Linux for the past ten years as a power user, but I do not feel this qualifies me for the job. I'm very familiar with Git, the command line, and general Linux from a user perspective, but I cannot for the life of me name all the folders under root, nor what they all do. I do know a little bit about HTTP, protocols, handshakes, and so forth, but it's been a while.
Perhaps my best trait is that I love coding (really, really good at Python), I'm good at learning new things, and I have a master's in mathematics. So bashing my head against difficult things is nothing new.
So... what do I do?
Any advice for how to prepare? I start in August. I am prepared to put in the hours to be as prepared as possible.
https://redd.it/t03m5s
@r_devops
How do you keep your monitoring scripts in sync between servers?
I work at a company where we have a lot of Linux servers monitored by Nagios/Icinga, and we often need to make modifications to our monitoring scripts. The issue is that we want to propagate these modifications to all the servers we are monitoring.
I have been thinking of creating a repository on GitHub for the monitoring scripts, then setting up the Git repository on all the servers and creating a periodic git pull on each of them, so that any modification made to the centralized GitHub repository is automatically synchronized to all the servers we are monitoring.
The reason I wanted a Git repository is to keep the history of all the modifications made to each monitoring script.
Is that a good idea? Any other good ideas to suggest? How are you handling this if you are in the same situation as me?
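The periodic-pull idea can be sketched as a cron entry; the path, interval, and repo layout below are hypothetical:

```shell
# /etc/cron.d/monitoring-scripts: refresh the shared scripts every 15 minutes.
# --ff-only aborts instead of merging if a server's checkout has local edits,
# so accidental on-host changes surface as a failed pull rather than being
# silently merged over.
*/15 * * * * root cd /opt/monitoring-scripts && git pull --ff-only --quiet
```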
https://redd.it/t08g7n
@r_devops
Is there any way to create a free NAT gateway on AWS?
Hey everyone!
I'm currently deploying my first little project on AWS and was a bit sad to see the bill yesterday, which would probably not allow me to run this long term. The app maintains a project website for a chair I work at at my university. I have a Lambda function, packaged in a Docker container image on ECR, that connects to S3 and RDS and sends requests to different websites.
When reading up on Lambda, I understood it such that a function can either (1) only connect to the internet (and not access resources within my VPC), (2) only access resources within the VPC but not connect to the outside, or (3) do both if you set up a NAT gateway. I believe that I need both; please correct me if I'm wrong. So I set up a NAT gateway, which works fine. However, it charges $0.045 per hour, which amounts to around $30 per month, a little much for a project that does not generate any profit (I cannot ask my uni to pay because they insist we should use their 2017 Debian server that hasn't been updated since).
I have tried to find a way to decrease the cost of this. This article suggests that you could run a NAT instance on a t3.micro, but it seems like AWS does not want to support this in the future. I assume that it would also be possible to have other Lambda functions create and destroy the gateway every time it is needed, but that sounds very complicated to me and I would like to keep it as simple as possible.
Do you have any advice on what I could do here?
https://redd.it/t07330
@r_devops
Packer experts, I need your help
I am trying to create an image out of a base image using Packer.
I am using this:
shared_image_gallery {
  subscription   = "00000000-0000-0000-0000-00000000000"
  resource_group = "ResourceGroup"
  gallery_name   = "GalleryName"
  image_name     = "ImageName"
  image_version  = "1.0.0"
}
managed_image_name                = "TargetImageName"
managed_image_resource_group_name = "TargetResourceGroup"
The problem is that Packer is throwing an error saying I need to provide plan info. However, this is a custom image and shouldn't need those details. Can anyone please help me with this? I've been stuck on this issue for a long time.
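For context, the azure-arm builder's plan_info block is documented as being for base images published with Marketplace purchase-plan metadata, so the error often means the gallery image still carries that metadata from its original source. A hedged sketch of a fuller source block, with placeholder values throughout:

```hcl
# All values are placeholders; a sketch, not a working template.
source "azure-arm" "custom" {
  use_azure_cli_auth = true
  os_type            = "Linux"
  vm_size            = "Standard_DS2_v2"
  location           = "eastus"

  shared_image_gallery {
    subscription   = "00000000-0000-0000-0000-000000000000"
    resource_group = "ResourceGroup"
    gallery_name   = "GalleryName"
    image_name     = "ImageName"
    image_version  = "1.0.0"
  }

  managed_image_name                = "TargetImageName"
  managed_image_resource_group_name = "TargetResourceGroup"

  # Normally only required when the base image carries Marketplace
  # purchase-plan metadata:
  # plan_info {
  #   plan_name      = "plan-name"
  #   plan_product   = "product"
  #   plan_publisher = "publisher"
  # }
}
```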
https://redd.it/t09ygi
@r_devops
Looking for DevOps/Cloud Engineers in Europe
Hi! I'm not part of HR at my company, but an employee looking for new team members, as it's really difficult to hire new people in tech.
Do you have some experience in topics related to DevOps, Cloud, or Systems, and live in Europe? Are you looking for a new experience? Please PM me and I will forward your application to the right person, who in turn will meet with you through video conference to see if we have found a match.
Thank you for your interest!
https://redd.it/t0d80d
@r_devops
Orchestrating Vulnerability Scanning with Kubernetes - Watch here - https://youtu.be/btEVJQooL9s
https://youtu.be/btEVJQooL9s
https://redd.it/t0ag6r
@r_devops
Azure experts, I need your help! Application Gateway: is it possible to preserve the original Application Gateway URL but have App Gateway redirect or send to another URL?
I have https://user.mysite.net. This is pointed at the public IP of the Application Gateway (WAF_v2). When a user hits this URL, I want them to be taken to https://test.mysite.com/user1.
However, at the same time, I want the user to see user.mysite.net in the browser; they shouldn't see test.mysite.com/user1. I think this has to do with rewrite rules, but I am struggling with the order of operations here... I'm also not entirely sure this is possible.
test.mysite.com/user1 is an application on a VM in the same tenant but a different subscription.
https://redd.it/t0cn68
@r_devops
Why were DDoS attacks successful on Ukrainian banks? Are they using outdated technology? Or were they not architected well?
Would the results have been different if they had been using public cloud providers? (Assuming they were not.)
https://redd.it/t0dmd6
@r_devops
Automating Jenkins and Artifactory using Python
Hey everyone.
I'm trying to check the build success or failure result in Jenkins and the currently available versions in Artifactory using Python scripts, in order to automate some tasks.
So basically, from Python, I'll have to log in to each of them and call some APIs to get the information, like /checkLastBuildsFormUser X in the case of Jenkins and /getLastVersionsForProject Y in the case of Artifactory.
Before doing this I should log in and get a token or session or something in order to be able to keep calling the APIs, and that's where I'm blocked, right at the beginning.
At the moment I'm struggling with the login; every time I try I get a 403 Forbidden response (currently trying Artifactory).
When calling the login API, other than the body with the JSON containing the username and password, what else must I include?
And what part of the response should I use in the subsequent requests?
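One thing worth knowing: neither service typically needs a login endpoint that returns a session. Both Jenkins and Artifactory accept HTTP Basic auth with a per-user API token/key on every request, so a 403 often just means the Authorization header is missing or the account lacks permission. A minimal sketch, with hypothetical host and job names:

```python
import base64
import json
import urllib.request

def basic_auth_header(user: str, token: str) -> str:
    """HTTP Basic auth value; Jenkins and Artifactory both accept a
    username plus API token/key here, no separate login call needed."""
    creds = base64.b64encode(f"{user}:{token}".encode()).decode()
    return f"Basic {creds}"

def last_build_url(base: str, job: str) -> str:
    """Jenkins JSON API endpoint describing a job's most recent build."""
    return f"{base.rstrip('/')}/job/{job}/lastBuild/api/json"

# A real call would look like (hypothetical host and job):
# req = urllib.request.Request(
#     last_build_url("https://jenkins.example.com", "my-job"),
#     headers={"Authorization": basic_auth_header("me", "my-api-token")},
# )
# build = json.load(urllib.request.urlopen(req))
# print(build["result"])  # "SUCCESS", "FAILURE", ...
```

The same Authorization header works for subsequent requests, so there is usually nothing from the response to carry forward.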
https://redd.it/t0i6im
@r_devops
Beware of GitLab billing issues
TL;DR - GitLab makes an egregious billing mistake, refuses to fix it, and tells a GitLab evangelist to go pound salt. If you purchase it, examine the order closely.
So, a little background on me: I started at a software company years ago in an IT position. Our traditional software development toolchain was overly complicated for my liking, so I set up GitLab.
I did so well with it that I became my company's first DevOps Engineer, and I got dev teams to make the switch. Not only did I present on GitLab at work, I took my GitLab evangelism on the road to enthusiasts in the area, i.e., the local Linux User Group.
Not long ago, I ordered some GitLab licenses since more people wanted to use it. I asked to go from 57 to 75 licenses. Instead, GitLab put the order in wrong and added 75 licenses, bringing us to 132 total.
About this time, I was pulled to a critically-important project that was way behind schedule and told not to work on anything else. When I got enough breathing room to switch back, our account manager acted like she couldn't care less. The most I ever got was "I'll be sure to look into it" or "I'm still looking into it".
The process dragged on for weeks. I had to nag her over and over again for updates until she finally told me that GitLab's billing department had decided... not to give me a refund because it had been too long. How convenient, especially after dragging out the process for so long.
I complained about this, asked for a new account manager, and got what I requested. Our new account manager took my concerns to the GitLab crew again... and got told once again that not only would we not receive a refund, GitLab wasn't going to offer us any sort of compensation or credit whatsoever.
We're a software company as well, and we would never treat loyal customers this way - especially not our power users. I've built my DevOps career around GitLab and encouraged others to do the same. That GitLab could be so tone-deaf over a problem that was clearly their fault speaks volumes to how the company has changed.
I'm grateful for what GitLab has provided. It's still a good product, even if I'm gravely concerned about its future. But I'm hanging up my GitLab evangelist hat. A few of my company's senior developers are interested in GitLab alternatives, and I've given the thumbs-up to do a proof-of-concept with one of them later this year.
If you choose to use GitLab in your organization, check your bills carefully.
https://redd.it/t0qizc
@r_devops
TL;DR - GitLab makes an egregious billing mistake, refuses to fix it, and tells a GitLab evangelist to go pound salt. If you purchase it, examine the order closely.
So, a little background on me: I started at a software company years ago in an IT position. Our traditional software development toolchain was overly complicated for my liking, so I set up GitLab.
I did so well with it that I became my company's first DevOps Engineer, and I got dev teams to make the switch. Not only did I present on GitLab at work, I took my GitLab evangelism on the road to enthusiasts in the area - I.E. the local Linux User Group.
Not long ago, I ordered some GitLab licenses since more people wanted to use it. I asked to go from 57 to 75 licenses. Instead, GitLab put the order in wrong and added 75 licenses, bringing us to 132 total.
About this time, I was pulled to a critically-important project that was way behind schedule and told not to work on anything else. When I got enough breathing room to switch back, our account manager acted like she couldn't care less. The most I ever got was "I'll be sure to look into it" or "I'm still looking into it".
The process dragged on for weeks. I had to nag her over and over again for updates until she finally told me that GitLab's billing department had decided... not to give me a refund because it had been too long. How convenient, especially after dragging out the process for so long.
I complained about this, asked for a new account manager, and got what I requested. Our new account manager took my concerns to the GitLab crew again... and got told once again that not only would we not receive a refund, GitLab wasn't going to offer us any sort of compensation or credit whatsoever.
We're a software company as well, and we would never treat loyal customers this way - especially not our power users. I've built my DevOps career around GitLab and encouraged others to do the same. That GitLab could be so tone-deaf over a problem that was clearly their fault speaks volumes to how the company has changed.
I'm grateful for what GitLab has provided. It's still a good product, even if I'm gravely concerned about its future. But I'm hanging up my GitLab evangelist hat. A few of my company's senior developers are interested in GitLab alternatives, and I've given the thumbs-up to do a proof-of-concept with one of them later this year.
If you choose to use GitLab in your organization, check your bills carefully.
https://redd.it/t0qizc
@r_devops
mineOps Part 5 Released!
Following up from my original post, https://www.reddit.com/r/devops/comments/rvkh6w/a_new_blog_series_mineops/.
At long last, I've finished and released part 5 of the mineOps series: Making Containers Highly Available.
https://blog.kywa.io/mineops-part-5/
https://redd.it/t0uz62
@r_devops
Github ssh access to multiple repos
I'm trying to add my SSH key to GitHub so I can clone multiple repos in my organization. I was able to add my SSH key, but it only lets me clone one repo. The remaining 80 repos give an error message that says:
ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Is there a way that I can automatically log in and clone the repos daily without being prompted for my credentials?
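Since the key works for exactly one repo and every other one reports "Repository not found", the key was most likely added as a deploy key, which is scoped to a single repository; a key registered on a user (or machine-user) account with org access doesn't have that limit. Assuming such a key, a minimal sketch of unattended daily cloning, e.g. driven by cron, could look like this. The org name, key path, and repo names are placeholders:

```python
# Sketch: unattended daily cloning of several org repos over SSH (e.g. from cron).
# Assumes an SSH key that has access to *all* the repos; a deploy key only ever
# grants one repository, which matches the "Repository not found" symptom.
import subprocess

ORG = "my-org"              # hypothetical organization name
KEY = "~/.ssh/id_ed25519"   # key registered on an account with org-wide access

def clone_command(repo, org=ORG, key=KEY):
    """Build a non-interactive `git clone` command for one repository."""
    ssh = f"ssh -i {key} -o BatchMode=yes"  # BatchMode fails fast instead of prompting
    return ["git", "-c", f"core.sshCommand={ssh}",
            "clone", f"git@github.com:{org}/{repo}.git"]

def clone_all(repos):
    """Clone every repo in the list; raise if any clone fails."""
    for repo in repos:
        subprocess.run(clone_command(repo), check=True)
```

Because `BatchMode=yes` refuses interactive prompts, a misconfigured key fails immediately instead of hanging the cron job.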
https://redd.it/t0mge5
@r_devops
aws lambda invoke
Hi, let's say I make a simple Flask web app where a user can generate an image. Is it possible to invoke a Lambda function when the user clicks on "generate image"? Mostly I've used Lambda for S3 put-object events, so I'm not sure about this.
Thanks
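Yes — two common routes are calling the function with boto3's Lambda client `invoke()` from the Flask view (if the app runs with AWS credentials), or exposing the function over HTTP via API Gateway or a Lambda function URL and POSTing to it from the handler. A minimal stdlib-only sketch of the HTTP route; the endpoint URL is a placeholder:

```python
# Sketch: triggering a Lambda from a web handler when the user clicks "generate".
# Assumes the function is exposed over HTTP (API Gateway or a Lambda function
# URL); the endpoint below is a placeholder.
import json
import urllib.request

LAMBDA_URL = "https://example.execute-api.us-east-1.amazonaws.com/generate"  # placeholder

def build_invoke_request(payload, url=LAMBDA_URL):
    """Build the POST request a click handler would send to the function."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}, method="POST"
    )

def generate_image(user_id):
    # A Flask view behind the "generate image" button would call this
    # and return the response body to the browser.
    req = build_invoke_request({"action": "generate_image", "user": user_id})
    with urllib.request.urlopen(req) as resp:  # synchronous network call
        return resp.read()
```

Synchronous invocation keeps the browser request open while the image is generated; for slow generations an asynchronous pattern (invoke, then poll or notify) usually works better.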
https://redd.it/t13ejp
@r_devops
Docker build in GH Actions. Check if image digest is the same as previous before pushing
Hello, I'm trying to build a CI pipeline with GitHub Actions:
on:
  push:
    branches:
      - cicd
permissions:
  id-token: write
  contents: read # This is required for actions/checkout@v2
name: Build images to ECR and deploy them to ECS
jobs:
  deploy:
    name: deploy
    runs-on: ubuntu-20.04
    steps:
      # Increments the version for the image tag
      - name: gh auth login
        env:
          pattoken: ${{ secrets.REPOACCESSTOKEN }}
        shell: bash
        run: gh auth login --with-token <<< "${{ env.pattoken }}"
      - name: gh secret set env
        env:
          secretname: 'MINOR'
          secretrepo: Nasini-Trading/ArqLogger-Server
        shell: bash
        run: gh secret set "${{ env.secretname }}" --body $((${{ secrets.MINOR }} + 1)) --repo "${{ env.secretrepo }}"
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::XXXX:role/GithubActionsRole
          role-session-name: GithubActionsSession
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push backend image to Amazon ECR
        id: build-backend
        env:
          ECRREGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECRREPOSITORY: arqlogger-server-backend
          IMAGETAG: ${{ secrets.MAJOR }}.${{ secrets.MINOR }}
        working-directory: ./backend
        run: |
          docker build -f Dockerfile -t $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG .
          docker push $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG
          echo "::set-output name=image::$ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG"
      - name: Build, tag, and push frontend image to Amazon ECR
        id: build-frontend
        env:
          ECRREGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECRREPOSITORY: arqlogger-server-frontend
          IMAGETAG: ${{ secrets.MAJOR }}.${{ secrets.MINOR }}
        working-directory: ./frontend
        run: |
          docker build -f Dockerfile -t $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG .
          docker push $ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG
          echo "::set-output name=image::$ECRREGISTRY/$ECRREPOSITORY:$IMAGETAG"
where I'm:
- logging into ECR
- using a GH secret to increment the tag (so each new Docker image gets a 0.1, 0.2, 0.3... version tag)
- building the images (one from the Dockerfile in the frontend folder, one from the backend)
- tagging with the MAJOR.MINOR version secrets
My problem is that sometimes I don't change anything in the image build, but it nonetheless triggers a new version.
I want to use the digest/checksum of the just-built image and compare it with the digest of the previous image in ECR. If they are the same (no changes in the content of any layer of the image), the push should not happen.
Any ideas?
EDIT: For some strange reason Reddit gives a 403 error whenever I write `env` inside ${{ }}, so the original post spelled it `enb`; read it as `env`, as written above. Odd...
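One way to attempt the comparison: for Docker schema-2 manifests, the manifest's `config.digest` is the sha256 of the image config blob, which is the same hash as the local image ID that `docker inspect` reports, so the two can be compared before pushing. A hedged Python sketch under those assumptions; repo and tag names are illustrative, and `docker` plus a configured AWS CLI are assumed on the runner:

```python
# Sketch: deciding whether to push by comparing the freshly built image with
# what is already in ECR. For Docker schema-2 manifests, config.digest is the
# sha256 of the image config blob, i.e. the local image ID from `docker inspect`.
import json
import subprocess

def local_image_id(image):
    """Image ID (sha256 of the config) of a locally built image."""
    out = subprocess.check_output(
        ["docker", "inspect", "--format", "{{.Id}}", image], text=True)
    return out.strip()

def extract_config_digest(manifest_json):
    """Pull config.digest out of a registry manifest document."""
    return json.loads(manifest_json)["config"]["digest"]

def ecr_config_digest(repo, tag):
    """Config digest of the image currently holding `tag` in ECR."""
    manifest = subprocess.check_output(
        ["aws", "ecr", "batch-get-image",
         "--repository-name", repo,
         "--image-ids", f"imageTag={tag}",
         "--query", "images[0].imageManifest", "--output", "text"],
        text=True)
    return extract_config_digest(manifest)

def should_push(image, repo, tag):
    """True when the local build differs from what ECR already holds."""
    return local_image_id(image) != ecr_config_digest(repo, tag)
```

This could run between the build and push steps, skipping the push (and the version bump) when `should_push()` is false. Note that the digest ECR displays per image is the *manifest* digest, which doesn't exist until after a push — hence comparing config digests instead.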
https://redd.it/t15mbu
@r_devops