Any advice for a TLS issue with AWS ELB
We have an IoT app that connects with a low-level board and a phone. The AWS certs work just fine when the board is on eth0, but when switching over to WiFi we have issues getting past the TLS handshake. A colleague mentioned that it may be too memory-intensive, because he had previously had a working solution when we implemented SSL on the instance itself.
Any advice on how to troubleshoot?
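One way to narrow this down is to drive the handshake by hand from the board's shell, assuming it has OpenSSL available (the hostname below is hypothetical):

```shell
# Hypothetical ELB hostname -- substitute your own.
HOST=my-elb.example.com

# Drive the handshake manually and watch where it stalls; -state prints
# each handshake step, and the output shows the negotiated protocol,
# cipher, and certificate chain size (large chains can overflow small
# TLS record buffers on constrained boards).
openssl s_client -connect "$HOST:443" -servername "$HOST" -state </dev/null

# MTU problems on WiFi often look like a handshake that hangs after
# ServerHello; test whether ~1400-byte packets pass without
# fragmentation (the -M do flag is Linux-specific).
ping -M do -s 1400 -c 3 "$HOST"
```

If the handshake completes here but fails in the app, that points back at the board-side TLS library's memory limits rather than the network path.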
https://redd.it/jx1dq8
@r_devops
Free Kubernetes Series - Part 1
Hey Everyone,
I’m creating a free Kubernetes series, specifically around Go apps and Kubernetes.
Part 1 is getting a Go app up and running on Minikube
Part 2 will be getting a Go app up and running on AKS
Part 3 will be getting a Go app up and running on EKS
If you’re interested, feel free to check it out :). Thanks in advance.
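For reference, the Part 1 workflow roughly boils down to something like this (image and deployment names are hypothetical; this assumes you build against Minikube's Docker daemon so no registry push is needed):

```shell
# Build the Go app's image directly inside Minikube's Docker daemon,
# so the cluster can use it without pushing to a registry.
minikube start
eval "$(minikube docker-env)"
docker build -t my-go-app:dev .

# Run it and expose it via a NodePort service.
kubectl create deployment my-go-app --image=my-go-app:dev
kubectl expose deployment my-go-app --type=NodePort --port=8080
minikube service my-go-app --url   # prints a reachable URL
```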
https://youtu.be/BIM4W_c1kKc
https://redd.it/jwmbim
@r_devops
Python test automation framework for apps and websites
Here’s an open-source Python-based test automation framework that is flexible enough to use across app and web development projects:
[https://arctouch.com/blog/python-test-automation-framework/](https://arctouch.com/blog/python-test-automation-framework/)
https://redd.it/jwkzp7
@r_devops
How do you access your EC2 Instances?
Did you know that you can access AWS EC2 instances via Leapp more securely and efficiently?
This week I dug into the AWS Systems Manager service and wrote the first article in a series about it.
There's a quick setup on how to access a VM in one click through Leapp and enable SSM on your AWS accounts!
[https://www.itscava.com/remote-access-to-ec2-instances-the-easy-and-secure-way](https://www.itscava.com/remote-access-to-ec2-instances-the-easy-and-secure-way)
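For anyone who wants the raw Systems Manager flow without Leapp, the equivalent CLI calls look roughly like this (the instance ID is hypothetical; this assumes the SSM agent is running and the instance profile includes the AmazonSSMManagedInstanceCore policy):

```shell
# Open an interactive shell on the instance -- no open port 22,
# no SSH key; access is governed by IAM instead.
aws ssm start-session --target i-0123456789abcdef0

# Or tunnel plain SSH over SSM if you still need scp/port forwarding:
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartSSHSession \
  --parameters 'portNumber=22'
```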
https://redd.it/jx3pls
@r_devops
Infrastructure-as-code-as-software - applying software engineering principles to infra setups
[https://medium.com/last9/infrastructure-as-code-as-software-a5e4b2b93e8e](https://medium.com/last9/infrastructure-as-code-as-software-a5e4b2b93e8e)
https://redd.it/jx5fe9
@r_devops
Question about most cost efficient deployment solution
Hello everyone,
I'm currently working on a personal project where I'm asking myself how to deploy my infrastructure as cost-efficiently as possible.
My stack is the following:
- Nginx as reverse proxy
- Static Vue frontend (Docker container that also contains the Nginx server)
- Flask backend (Docker container)
- MariaDB (Docker container)
Currently I use docker-compose to start and stop my services. I worked in the past as a DevOps developer at a large company where we used Kubernetes, with Grafana + Kibana for monitoring. We hosted everything on AWS. The costs didn't matter :)
Since high traffic won't be a problem I don't need Grafana, but I want to use Kibana to at least monitor the logs and set alarms.
So to summarize my question:
- From my experience, managed Kubernetes is "expensive" because you pay even when it's idle, whether it's AWS EKS or Google Cloud Platform (please correct me if I'm wrong). So I'm considering running Docker Swarm on an EC2 instance. What's your opinion?
- I'm not sure if Kibana might be overkill for this simple infrastructure, so I'm thinking about writing a custom logger and using RabbitMQ to push log messages to a custom dashboard.
I'd really appreciate any help and opinions.
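For what it's worth, moving an existing docker-compose setup onto a single-node Swarm is a small step. A rough sketch, with a hypothetical stack name (note that `docker stack deploy` reads the `deploy:` section of the compose file and ignores `build:`, so images must already be built or pullable):

```shell
# Turn the EC2 instance into a one-node swarm and deploy the existing
# compose file as a stack.
docker swarm init
docker stack deploy -c docker-compose.yml myproject

# Verify all services reached their desired replica counts.
docker stack services myproject
```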
https://redd.it/jx6lye
@r_devops
Building custom images on AWS - tradeoffs between EC2 Image Builder and CodePipeline? Are there other options to consider?
Hi all - I want to build a pre-baked AMI that contains shared business logic and then use that as the base to launch containers with job-specific code (likely pulling in with parameterized user-data). Ideally, new builds would be triggered by changes in source code and saved in a repository like ECR.
From my research, I found that EC2 Image Builder integrates security best practices by automatically patching your images and applying AWS-provided security policies. You build and validate your images using "components" that come out of the box from AWS or can be custom-built for your needs.
What I'm struggling to understand is whether EC2 Image Builder builds can be triggered from a change to your Git repository. There is no mention of triggering builds from a source repository in the documentation whereas this is a clear option using CodePipeline.
My thought would be to have a Lambda function trigger the build when changes are detected, though I am wondering if there is a more native option or whether anyone takes a different approach.
With CodePipeline, it seems that the onus is on you to configure a secure image, and then run that image through custom tests and other processes that you define. It seems that this isn't necessary with EC2 Image Builder since security and testing are provided as pre-configured components, but then there is the question of how changes to your business logic are picked up in the pipeline.
Does anyone use either of these approaches or something different? EC2 Image Builder was only introduced in December 2019 so curious to know if people found a use case to migrate from the build option they were previously using.
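On the triggering question: since Image Builder pipelines are only schedule- or manually-triggered, the glue is small — a Lambda (or any CI step) subscribed to repository events can simply call the API. The pipeline ARN below is hypothetical:

```shell
# Kick off an Image Builder pipeline run on demand, e.g. from a Lambda
# invoked by a CodeCommit/GitHub webhook.
aws imagebuilder start-image-pipeline-execution \
  --image-pipeline-arn \
  arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/my-pipeline
```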
https://redd.it/jx83ks
@r_devops
AzureDevOps add environment variable
My application builds successfully on my local computer, but when I try to build it via Azure DevOps pipelines for the test, preprod, and prod environments I get this error:
##[error]C:\Users\VssAdministrator\.nuget\packages\microsoft.aspnetcore.razor.design\2.2.0\build\netstandard2.0\Microsoft.AspNetCore.Razor.Design.CodeGeneration.targets(79,5): Error MSB4018: The "RazorTagHelper" task failed unexpectedly.
I searched for this error and found out that to fix it, an environment variable has to be added:
"The fix for me was to introduce a new Environment Variable with the Key "DOTNET_HOST_PATH" and the value "dotnet" and then to restart Visual Studio." - this is from Stack Overflow.
So how can I add an environment variable to an Azure DevOps pipeline? Or do I have to add it to the Azure service in which my apps are deployed?
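For reference, pipeline variables (set in the Variables tab or a `variables:` block in the pipeline YAML) are injected into every task's environment, so a build-time variable like this belongs on the pipeline, not on the deployed Azure service. A script step can also set one for subsequent steps via a logging command — a minimal sketch:

```shell
# Inside any script step of an Azure Pipelines job: this logging command
# makes DOTNET_HOST_PATH available to all subsequent steps in the job.
echo "##vso[task.setvariable variable=DOTNET_HOST_PATH]dotnet"
```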
https://redd.it/jx4tzv
@r_devops
Good Prometheus Grafana Kubernetes pod/container metric Article
Help needed: I am trying to set up a Grafana dashboard to look into Kubernetes pod/app/container metrics. I can't find any good articles explaining which metrics I should display and how I should display them.
Does anyone have any links or pointers?
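Not an article, but for anyone in the same spot: with the usual kube-prometheus/cAdvisor setup, the bread-and-butter panels are per-pod CPU rate and working-set memory. A sketch of the underlying queries (the Prometheus service hostname is hypothetical):

```shell
# Per-pod CPU usage -- the kind of query a Grafana panel would run.
curl -G 'http://prometheus.monitoring:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (pod)'

# Per-pod working-set memory (what the OOM killer actually looks at).
curl -G 'http://prometheus.monitoring:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_working_set_bytes{container!=""}) by (pod)'
```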
https://redd.it/jx4aiu
@r_devops
The Best DevOps Blogs
DevOps is a term that has become more and more popular in job postings and with those looking to break into the industry, especially over the past few years. However, one of the most challenging aspects of DevOps is understanding exactly what it is and how it’s applied in the industry. I’ve rounded up [30 of the best DevOps blogs](https://draft.dev/learn/technical-blogs/devops) and resources to help you learn about the practice and keep up with changes as they come.
Here are the top three DevOps blogs worth keeping an eye on:
[Arrested DevOps](https://www.arresteddevops.com/)
While not strictly a blog, the Arrested DevOps podcast was one of the first that I listened to when I started getting interested in DevOps. If there is a specific topic you want to learn about, you shouldn’t have a problem finding at least one episode dedicated to that in the archives.
**Total Score: 5**
[The Microsoft Azure Blog](https://azure.microsoft.com/en-us/blog/)
The Azure blog doesn’t focus exclusively on DevOps topics but has a ton of news and information related to cloud computing in general and Azure services in particular. If you use Azure as your cloud provider, this is an especially good blog to follow.
**Total Score: 4.8**
[The Agile Admin](https://theagileadmin.com/)
The Agile Admin is a blog focused on DevOps culture while not ignoring the technical deep dives that many are looking for in a DevOps blog. The standard blog posts are interspersed with technical talks in the form of YouTube videos as well as other kinds of content.
**Total Score: 4.6**
As you can see, there is no shortage of people talking about DevOps and trying to keep up with the changes in the industry. Hopefully, [**the list**](https://draft.dev/learn/technical-blogs/devops) will help you find a new resource or two that you’ll refer back to in the future.
Do you have a favorite DevOps blog that you swear by?
https://redd.it/jx1w9k
@r_devops
Is Terraform a good tool to make code deployments?
I work in a gaming company as a DevOps engineer and currently I'm involved in a project with Unreal Engine and AWS GameLift.
Our build system is TeamCity, which works like a charm, but we're having some problems with code deployment. The way this works is:
1. A new build is fired and the resulting artifact is uploaded to S3.
2. A GameLift build is created based on that artifact stored in S3.
3. A Terraform project is run to create new GameLift fleets (a fancy autoscaling group managed by AWS), which does all the work of deleting the old fleets, creating a new one based on the required build, and updating the relevant resources to point to the new fleets so the game can be consumed.
This works just fine, but now we need to look into a multi-region deployment, and I'm concerned about Terraform because I don't know if it's an appropriate tool for code deployments when we're talking about constantly changing infrastructure (especially in dev, where we have builds all the time).
On the other hand, I wrote a script in PowerShell that does exactly the same thing mentioned above and it works quite well, but again I'm not 100% sure of the right way to go.
How do you handle deployments in your environment? Do you use a proprietary tool? Do you have a deployment script?
And also, what do you think about how I'm using Terraform as described above? Am I doing this right?
Thanks a lot!
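For concreteness, steps 1-3 above can be sketched as CLI calls (the bucket, names, role ARN, and the `build_id` Terraform variable are all hypothetical):

```shell
# 1. Upload the build artifact produced by TeamCity.
aws s3 cp build.zip s3://my-builds/game-1234.zip

# 2. Register it as a GameLift build and capture the new build ID.
BUILD_ID=$(aws gamelift create-build \
  --name "game-1234" \
  --operating-system WINDOWS_2012 \
  --storage-location "Bucket=my-builds,Key=game-1234.zip,RoleArn=arn:aws:iam::123456789012:role/gamelift-s3" \
  --query 'Build.BuildId' --output text)

# 3. Roll the fleets over to the new build.
terraform apply -auto-approve -var "build_id=$BUILD_ID"
```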
https://redd.it/jwzu5j
@r_devops
Kubecon USA videos
Hey, does anyone know when they will be available on YouTube, or if they can be seen anywhere else?
https://redd.it/jwysjr
@r_devops
Advice needed
I'm currently working as a Software Engineer, but I'm mostly handling the DevOps side of the project (pipelines, deployments, etc.), with a couple of feature assignments here and there, and my company is planning on moving me to a full-time DevOps engineer role. (Just to clarify, I have around 2 years of experience.)
Now my current dilemma is that I have a Business Information Systems bachelor's degree. While I did study full-stack development during my 4 years, people rather frown upon it for some reason (as if academics ever mattered in a predominantly self-taught industry).
I've considered getting a master's and started a couple of pre-master's classes (a few courses missing from my transcript), but so far in the lectures I'm studying stuff that has basically nothing to do with anything useful to DevOps and, might I add, wasting a lot of time and some money. The main reason I'm pursuing this is to have better career chances whenever I want to move to a better-paying job.
My question is:
Do I waste time/money on a master's degree that won't be useful at all, just for the name of it,
or
completely focus on getting my AWS/RHCSA/CKA certifications and ignore the whole master's road?
https://redd.it/jwjp2k
@r_devops
Any Producers here?
I want to be a Video Game Producer. I am a young Project Manager (6 months experience) at an organization that uses Traditional project management methodology.
I have an undergraduate degree and have been studying online after work every night. My current short-to-mid-term goal is to pick up a CSM and the Certified Associate in Project Management from PMI.
Does anyone have advice on how I could swing a jump to a game development industry role in the next few months to a year? Ideally this would be an associate producer role, but I'm willing to start wherever I need to.
Thanks in advance
https://redd.it/jwn1my
@r_devops
Infrastructure-as-code-as-software - applying software engineering principles to infra setups
An infra-as-code-as-software post on approaching infra setup in a much more structured, first-principles way. I have used some of the patterns described here, but the post really helps reinforce the learning, with principles driving the justification plus working code evolving at each stage to embrace those principles.
https://medium.com/last9/infrastructure-as-code-as-software-a5e4b2b93e8e
https://redd.it/jvp4ke
@r_devops
No Experience, Certs to Break into DevOps?
Hi folks,
I did post yesterday, but it didn't seem to get much traction. I'm a developer looking to transition into an entry-level DevOps role. I recognise that entry-level DevOps positions generally do not exist, so I imagine I will have to take a sysadmin job for at least a year prior to making the transition. If anyone here broke into the industry from a non-technical background, I'd be really interested to hear your opinions.
It seems that to get noticed, as someone with no experience, I am going to need some certifications so as to stand out. As things stand, it seems that the optimal route to an entry level position would be through attaining the following certs:
1. Red Hat Certified SysAdmin (RHCSA)
2. AWS Solutions Architect or SysOps Admin
Would this be enough to land an entry-level position? I've heard conflicting reports that I may in fact need to attain the Red Hat Certified Engineer cert for an employer to take notice. Any recommendations on other potential certs to get me in on the ground floor would be greatly appreciated. This community has thus far been an invaluable source of information on the industry.
Also shout-out to u/Obj_Sea for the thoughtful responses yesterday, they were super informative, thank you!
https://redd.it/jvod2y
@r_devops
Google Cloud vs DigitalOcean for a Kubernetes Cluster
So I'm looking into where to deploy my Kubernetes cluster. DigitalOcean seems so much cheaper (**$2.49** per day vs **$6.42** per day).
It's hard to find a more concrete comparison. Is Google simply charging for its brand, or is there something that Google provides better than DO?
https://redd.it/jvq3oj
@r_devops
They put DevOps on everything
You know how Amazon/Azure/etc. just attach DevOps to their tools, even though it has nothing to do with the Three Ways. I think this trend of putting DevOps on everything is getting out of hand; I don't think Gene Kim, John Willis, Jez Humble, or Patrick Debois run around in these.
[https://www.amazon.com/stores/page/5A4336E8-973C-4FB2-AB5D-021437D578C6](https://www.amazon.com/stores/page/5A4336E8-973C-4FB2-AB5D-021437D578C6)
https://redd.it/jxcxkb
@r_devops
Holy war topic: There is no reason to migrate from bash to zsh or fish
Don't take this question too seriously. Let's just talk.
I've read comparisons between bash and fish, and between bash and zsh, and all of them talk about how convenient those shells are. But one thing I wonder about is how backward compatible those shells are with bash. They seem to have only average backward compatibility; zsh is a bit more compatible with bash than fish is.
I mean, bash is everywhere. So what killer feature do those shells have that would make me switch to them, abandon bash, and bother with all those compatibility issues?
https://redd.it/jxal6g
@r_devops
Promoting docker images from testing to production
I'm currently working on a project to move several services to k8s/docker. Although there are regression tests for some components of the code, there's still manual testing done by a QA team when new changes are introduced.
The QA process can take time, as devs sometimes have to wait while testers finish other tests and give them feedback about bugs, changes, etc.
Changes are tested by deploying to a QA machine, and then, once the tests pass and approvals are given in the PRs, the person wanting to deploy announces this in Slack, merges to master (with a merge commit), and deploys to production (yes, it is not the prettiest, but it works).
If anybody else wants to deploy, they wait for their turn in the chat.
Notice we build twice: first the feature branch is built for QA, and then after the merge we build master for production.
Let's say your changes are being tested and somebody else deploys a hotfix to master. Then, when you get your approvals and deploy, you will not overwrite the hotfix, because we are doing a merge commit and building again.
I would of course like to streamline this process with Docker and also deploy the same image that was tested in QA to production, thus promoting the image instead of building a new one.
However, if I build only once during the QA process and a hotfix or another feature is deployed in the meantime, then I will overwrite those new changes with my older build, because it doesn't contain them.
I'm not sure how I can change a process like this, besides looking for ways to make the tests faster, and even then I feel the bottleneck still resides in the QA step.
Have any of you faced a similar situation? How do you avoid overwriting changes when promoting your artifacts?
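For the promotion mechanics themselves, one common pattern is to build once with an immutable tag (e.g. the git SHA), test that exact image, and then retag it for production instead of rebuilding. A minimal sketch of the retagging step (the registry and repo names here are made up, not from the post):

```python
def promote_commands(registry: str, repo: str, sha: str, env_tag: str = "prod") -> list[str]:
    """Return the docker commands that retag an already-tested image as `env_tag`.

    The image is addressed by an immutable tag (the git SHA), so what QA
    tested is byte-for-byte what production runs.
    """
    src = f"{registry}/{repo}:{sha}"
    dst = f"{registry}/{repo}:{env_tag}"
    return [
        f"docker pull {src}",
        f"docker tag {src} {dst}",
        f"docker push {dst}",
    ]

for cmd in promote_commands("registry.example.com", "app", "abc1234"):
    print(cmd)
```

This doesn't solve the stale-hotfix ordering problem by itself: promotion usually pairs with making master the only thing QA tests (merge first, build master once, test that image, promote it), so a hotfix merged after your build forces a rebuild and retest rather than being silently overwritten.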
https://redd.it/jxdpkx
@r_devops
Multi repo CI/CD orchestration
Hi Everyone,
We have 8-10 git repos with different components (backend, ECS cluster, other infra, BI tools, integration components, etc.)
At the moment, we release every component separately within its own master branch and pipeline. We currently use a single CodePipeline with a CodeBuild per component.
What we want to achieve is the following:
1) In all repos, our teammates put a tag on all the components that should be included in a release
2) There is a trigger from a JIRA workflow with metadata to start building
3) Then, some CI/CD tool should check every repo for this git **TAG** and start building only those where it is present. So, if we have the git **TAG** in only 6 out of 8 repos, we start pipelines only for those 6 repos
So far, I haven't seen such a solution.
So I came up with an AWS Lambda Python script that does the checking and puts overrides on the pipelines (overriding the build settings with the TAG we want to release and launching them one by one).
How do you cope with such things? Any alternatives, pros vs cons?
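For what it's worth, the tag-presence check itself needs very little machinery: `git ls-remote --tags <url> <tag>` prints a ref line only when the tag exists on the remote, so a small script can compute the subset of repos to build before touching any pipeline. A hedged sketch (repo URLs are whatever your setup uses; this is not the Lambda from the post):

```python
import subprocess

def repos_with_tag(repo_urls, tag):
    """Return the repos (in input order) whose remote has the given git tag."""
    tagged = []
    for url in repo_urls:
        # ls-remote emits "<sha>\trefs/tags/<tag>" when the tag exists,
        # and nothing at all when it does not.
        out = subprocess.run(
            ["git", "ls-remote", "--tags", url, tag],
            capture_output=True, text=True, check=True,
        ).stdout
        if out.strip():
            tagged.append(url)
    return tagged
```

Each repo in the returned list would then get its pipeline started (in CodePipeline terms, something like boto3's `start_pipeline_execution` per repo), which is essentially what the Lambda approach described above does.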
https://redd.it/jxd96x
@r_devops