Azure Pipelines strategy question
TL;DR: Is it better to have a single, grand unified pipeline for a project, or multiple specialized pipelines?
---
We're migrating from Jenkins to Azure DevOps. I have four different Jenkins projects that I want to duplicate in Azure pipelines. Let's call them CI, CD, Release, and Test. These four projects all work from the same Git repository, based on different triggers.
I did the CI one first. It's a single-stage, single-job pipeline that does everything. Call that Pipeline version 1. It's where I did all of my learning.
For version 2, I thought it best to break up the flow into multiple jobs in multiple stages. With this architecture, I was able to combine all four Jenkins projects into a one-size-fits-all pipeline. It works great.
Now I'm setting up triggers and hooks for this pipeline. But I'm having second thoughts about the one-size-fits-all strategy. Would it be better to break it out into four separate pipelines, each with its own triggers and Git hooks?
Theoretically, with the Infrastructure-as-Code paradigm, either way will work. The Azure pipelines YAML is flexible and versatile enough to do whatever I want it to do. But what's the best way to do it?
And in case that question is unanswerable ("define 'best', Mr. Zyzmog"), what are the pros and cons to the one-size-fits-all vs. four separate pipelines?
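To make the question concrete, the one-size-fits-all layout I'm describing looks roughly like this (stage names, scripts, and conditions here are illustrative placeholders, not our actual config):

```yaml
# azure-pipelines.yml -- sketch of the unified layout; all names are made up
trigger:
  branches:
    include:
    - main
    - release/*

stages:
- stage: CI
  jobs:
  - job: BuildAndUnitTest
    steps:
    - script: ./build.sh
      displayName: Build and run unit tests

- stage: Test
  dependsOn: CI
  jobs:
  - job: IntegrationTests
    steps:
    - script: ./integration-tests.sh

- stage: CD
  dependsOn: Test
  # Deploy only when the run was triggered from main
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - job: Deploy
    steps:
    - script: ./deploy.sh

- stage: Release
  dependsOn: Test
  # Release only from release/* branches
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'))
  jobs:
  - job: DoRelease
    steps:
    - script: ./release.sh
```

The four-pipeline alternative would instead be four YAML files, each with its own trigger: block and no cross-stage conditions.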
https://redd.it/py0p5d
@r_devops
Difference between Reverse Proxy, Load Balancer and API Gateway
I am seeing different companies taking different approaches. I am not sure anymore where each should actually be used. On top of that, tech like Kong makes me question whether an API Gateway should be one thing for all. Some perspective on this would be really appreciated.
https://redd.it/py1q54
@r_devops
Is triggering container builds on GIT merge bad practice?
Backstory: I've been a dev for over 10 years, worked with Docker/containers for 5+ years, and deployed multiple production apps for corporates and start-ups.
Recently, I've been hired to build a project that is hosted on AWS/K8s. The client has their own external infrastructure team. I asked them if they could set up a simple CI pipeline that would build the Docker images and push them to ECR each time we merge into master. But they are telling me, in their expert opinion, that we shouldn't kick off builds on merge. However, this is what I have done at many Fortune 500 companies and start-ups.
Typically the dev process would be:
Work on Feature Branch -> Open PR to Dev branch -> Approved by PM -> Merge into dev branch -> Open PR from Dev to Master branch -> Approved by PM -> Merge into master branch -> *starts build*
Is this bad practice? If so please can you explain why?
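For reference, the kind of trigger I'm asking for, sketched as a GitHub Actions workflow since that's a common shape (the registry address, region, and image name are placeholders, not the client's setup):

```yaml
# .github/workflows/build-on-merge.yml -- illustrative sketch only
name: build-and-push
on:
  push:
    branches: [master]   # fires on every merge (push) to master

env:
  ECR_REGISTRY: 123456789012.dkr.ecr.us-east-1.amazonaws.com  # placeholder

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Log in to ECR
      run: |
        aws ecr get-login-password --region us-east-1 \
          | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    - name: Build and push image
      run: |
        docker build -t "$ECR_REGISTRY/my-app:$GITHUB_SHA" .
        docker push "$ECR_REGISTRY/my-app:$GITHUB_SHA"
```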
https://redd.it/py217j
@r_devops
KUBERNETES INSTANCE CALCULATOR
TL;DR: You can use the calculator to explore the best instance types for your cluster based on your workloads.
https://learnk8s.io/kubernetes-instance-calculator
https://redd.it/py32s1
@r_devops
Security considerations for passwordless SSH login with a 'command' option
I'm working on a project in which we'll do a lot of SSH logins. But all these logins are restricted with a 'command' option. E.g. the .ssh/authorized_keys file contains something like this:
command="df --portability" ssh-ed25519 ... some comment
The private keys we're using are passwordless. I think that is OK. In the worst case, the private key falls into the wrong hands and the malicious user can run df, if they can associate the host with the private key. I don't think that is too bad. But I'm looking for opinions. Am I missing something? Is there an angle I've overlooked?
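One common hardening suggestion worth noting (not something the setup above does yet): combine the forced command with the restrict option, which disables port forwarding, agent forwarding, X11 forwarding, and PTY allocation for that key:

```
# ~/.ssh/authorized_keys -- forced command plus restrictions
restrict,command="df --portability" ssh-ed25519 AAAA... some comment
```

Without restrict, a stolen key limited to a forced command could still be used for things like port forwarding.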
https://redd.it/py3lia
@r_devops
DevSecOps Struggle
I work at a large corporation that was slow to embrace DevOps methodology, Agile, and Cloud. They’ve been around forever and didn’t see a need to make the change until like 3 years ago.
Well I joined up last year and recently we’ve begun to move toward a “DevSecOps” mindset. Since then I have seen a backslide toward silos of information, trying to keep everything on a “need to know” basis, and overzealous security analysts.
Security is critical, but silos don't have to be a part of that. We routinely purchase outdated software that is less secure and less efficient than its modern counterparts, so I struggle to believe this locking down is really security-related and not just reactionary or showing off.
Internal IT issues are a bigger operational threat than who knows what about a piece of the product for us, but it’s not acknowledged.
Sorry this turned into a vent, but have any of you been on that DevSecOps journey before? What ideas/evidence/etc can I bring to my team and leadership to show them the light?
Thanks!
https://redd.it/py8q3b
@r_devops
First devop task at my job
I have been tasked to convert our existing K8s product stack, deployed in AWS, to a localhost installation. I will say I am a little overwhelmed. I understand how it works in AWS, but I'm stuck converting the Traefik ingress proxy to localhost and also replacing the LB that is configured as an AWS service. Once I get past this hump I feel it would be smooth sailing (in the middle of a hurricane). Anyone have any insight that could help me get over this hump?
https://redd.it/py9cma
@r_devops
quay.io dns registry has expired
quay.io dns registry has expired
whois quay.io | grep Expiry
Registry Expiry Date: 2021-09-30T04:49:59Z
So.... omg. Our kube clusters cannot pull images; probably my fault for not having a DR plan, container-registry-wise.
And it's not the first downtime quay has had, especially since Red Hat acquired it.
What do you guys use for this? I don't really want to set up and maintain Harbor, but maybe it's the lesser of evils.
https://redd.it/pye2ez
@r_devops
Github and Slack - DevOps Management
This sample shows how Linx automatically posts messages to Slack. Once this GitHub-Slack integration is active, the sample posts messages to a Slack channel, using a bot user, for GitHub issues within a given time period.
https://github.com/linx-software/github-slack-devops-management
https://redd.it/pygakw
@r_devops
DevOps in Service Based vs Product Based Companies
So basically, I've worked for the last 4 odd years in DevOps with product based companies. I got an offer from a Service Based company, so I was thinking whether it would be good to work with clients, how is it different than product based companies. And if I would want to change back, would it cause any problems?
https://redd.it/pyhsgs
@r_devops
Is end-to-end secured traffic really that uncommon with a load balancer?
At work recently I had to set up our various web apps in a load-balanced environment, both in Azure and AWS. This was to prove they could be load balanced, but also to document the steps for a client. I've dabbled with Azure and am very inexperienced in AWS, but so it goes.
Not sure if it matters, but I was just testing a pretty simple use case. For both AWS and Azure, there were two VMs both running 2-3 of our apps in IIS, one VM was also serving as the database server for all the websites.
In Azure, I got all our sites working with an Application Gateway. It took a bit, being pretty noob-ish, but now that I've got it done (and documented) it was actually pretty straightforward and quick. I am pretty sure the HTTPS traffic is secured end to end: it's secured between the user and the load balancer, between the LB and the target web servers, and even between the target web servers making SOA calls to another site on the same box. This requires you to deploy the same IIS certs to the LB listener/HTTP rule.
I've been attempting to do the same thing in AWS. I didn't set up whatever load balancer tool we are using, but their expectation, I believe, was that user traffic to the LB is encrypted while traffic between the LB and the web servers is port 80/HTTP. This won't work with our product the way it's currently set up: one site is a static site populated with data from SOA calls to another site on the same box. Currently in my AWS setup, you can access the 443/HTTPS websites, but they tell you you're connecting insecurely on 80 and must connect securely. If I drop the port 80 binding entirely (almost none of our apps use it), connecting via the LB gives me a Bad Gateway.
My colleague who set it up and is far more familiar with both load balancing and AWS than me said he could certainly accomplish the Azure-type scenario in AWS with some reconfiguration. But he and a couple friends in the industry made comments suggesting the end-to-end I'm doing in Azure is less common or not the standard approach.
Is that the case? I'm curious if so, and if I'm assuming the facts right about which parts are secure/insecure in my current AWS state, why is that the usual approach?
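For what it's worth, in Terraform terms the end-to-end setup on AWS seems to come down to making the target group speak HTTPS instead of HTTP, so the ALB terminates TLS and then re-encrypts to the instances. A sketch, assuming an ALB; all resource names and the certificate variable are made up:

```hcl
# Target group that re-encrypts: the ALB talks HTTPS to the instances
# on 443 instead of plain HTTP on 80.
resource "aws_lb_target_group" "apps" {
  name     = "apps-https"
  port     = 443
  protocol = "HTTPS"
  vpc_id   = aws_vpc.main.id

  health_check {
    protocol = "HTTPS"
    path     = "/"
  }
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.main.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.certificate_arn

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.apps.arn
  }
}
```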
https://redd.it/pyc5ab
@r_devops
Anyone think such tool is relevant?
How relevant are such tools for a Windows machine (typically a server)?
https://github.com/sorainnosia/EVIPBlocker
It creates a Windows Firewall rule upon failed Remote Desktop login attempts.
https://redd.it/pyk84f
@r_devops
What is the best chatting alternative for IRC Freenode in 2021 for questions about Bash, Linux, Python, Ansible, etc?
What is the best chatting alternative for IRC Freenode in 2021 for questions about Bash, Linux, Python, Ansible, etc?
https://redd.it/pylo8z
@r_devops
Gitlab proxied by F5?
I have a self-hosted GitLab on-premise and would like to allow limited external access for some collaborators. I tried using Azure App Proxy, but git clone, pull, and push do not work. I'm thinking I need a full-featured reverse proxy/WAF like an F5. Has anyone tried this before?
https://redd.it/pyne3q
@r_devops
Best Log Masking tool (json)
Does anyone here have experience with an application (self-hosted) or other set of tools for running JSON logs through for PII/PHI redaction? I appreciate the help.
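To give a sense of scale, here's roughly what I mean, sketched as a tiny Python filter that walks a parsed JSON log line and masks values whose keys look sensitive (the key list and mask string are just examples):

```python
import json

# Example keys only; a real deployment would need a much fuller list.
SENSITIVE_KEYS = {"ssn", "email", "phone", "dob"}

def redact(obj):
    """Recursively mask values for sensitive-looking keys in parsed JSON."""
    if isinstance(obj, dict):
        return {k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else redact(v)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [redact(v) for v in obj]
    return obj

def redact_line(line: str) -> str:
    """Redact one JSON log line and return it re-serialized."""
    return json.dumps(redact(json.loads(line)))

print(redact_line('{"user": {"email": "a@b.com", "name": "Ann"}, "msg": "login"}'))
# -> {"user": {"email": "***REDACTED***", "name": "Ann"}, "msg": "login"}
```

The hard part a real tool has to solve is the stuff this sketch ignores: PII in free-text values, nested encodings, and throughput.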
https://redd.it/pyp6y1
@r_devops
"The certificate for deb.nodesource seems to be expired"
https://github.com/nodesource/distributions/issues/1266
🙃
🙃
🙃
https://redd.it/pyopvo
@r_devops
Create an Azure AD group with Terraform
I'm trying to create a group in Azure Active Directory with Terraform, but it fails with the following error:
Error: could not configure MSI Authorizer: NewMsiConfig: could not validate MSI endpoint: received HTTP status 404
with provider["registry.terraform.io/hashicorp/azuread"],
on main.tf line 13, in provider "azuread":
13: provider "azuread" {
My code is:

# Configure the Microsoft Azure Provider.
terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = ">= 2.0.0"
    }
  }
  required_version = ">= 0.14.9"
}

provider "azuread" {
}

resource "azuread_group" "example" {
  display_name     = "Terraform-Test"
  security_enabled = true
}
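In case it helps others hitting this: the "MSI Authorizer" wording suggests the provider is attempting managed-identity authentication (e.g. because ARM_USE_MSI is set) on a machine with no MSI endpoint. A hedged sketch of one fix, authenticating explicitly with a service principal instead; all IDs are placeholders, and running az login and leaving the provider block empty is another route:

```hcl
provider "azuread" {
  # Explicit service principal auth instead of MSI; values are placeholders.
  tenant_id     = "00000000-0000-0000-0000-000000000000"
  client_id     = "00000000-0000-0000-0000-000000000000"
  client_secret = var.client_secret
}
```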
https://redd.it/pynljg
@r_devops
question for devops engineers, who writes your app infrastructure?
I'm in a weird spot where I'm not sure who should be responsible for writing application infrastructure with IaC tech like Terraform. On the one hand, if a DevOps engineer has a list of requirements, then they can write many different application services that flow together easily in one big IaC workflow.
On the other hand, if the application developers themselves want to practice the culture of DevOps (i.e., DevOps is a mindset, not a job title), then the IaC workflow becomes more convoluted between the app's services. Different developers write code in different ways. They may not quickly or easily know how to reference outputs from other services in the app that are needed (for example, a Terraform remote state file).
So I'm curious how do companies that have devops engineers on the payroll design these responsibilities and workflow? Do you have your devops engineers write IaC based on developers' requirements or do you have developers own the infrastructure code first, then pass it off to SRE or devops engineers to deploy?
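The remote-state reference I mean looks something like this (bucket, key, and resource names are all made up for illustration):

```hcl
# One service's stack reading another stack's outputs via remote state.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"          # placeholder
    key    = "network/terraform.tfstate" # placeholder
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"         # placeholder
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```

This is exactly the seam where it gets convoluted: the consuming team has to know which stack exposes which outputs, and under what names.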
https://redd.it/pytqp3
@r_devops
Confusion with unit and integration testing in CI pipeline
Trying to get a better understanding of running unit and integration tests in a CI pipeline. I feel like I understand it, start working on it, and a bunch more questions come up, confusing me. Hoping this set of questions will be the last and it will all finally click.
# Unit Tests

I've been using this Dockerfile as a template of sorts because it pretty clearly delineates the various stages and concerns in a multi-stage Dockerfile.
The test and linting stages make sense and are pretty straightforward to me: in the CI pipeline, target these stages and, if passing, target the production stage. Using RUN for these stages makes sense to me because you are just building and testing this code, not how it integrates with other services, and not deploying images of these stages, and just trying to determine as quickly as possible if the build is passing. If not, the build will fail. It seems somewhat unnecessary to add steps of building and then deploying just for this purpose.
Q1: Should these testing and linting stages be deployed as a container if they are just running unit tests, therefore converting the RUN to a CMD?

# Integration Tests

I'm struggling with these the most.
My understanding is that the flow should be:
PR ->
Build Code ->
Unit Tests (test and linting stages) ->
If passing, Build Production images (production stage) ->
Push to Container Registry ->
Pull from Container Registry ->
Deploy to Test Kubernetes Cluster ->
Integration Tests
This seems to necessitate deploying integration tests into separate containers for a couple reasons:
1. The production images have no development dependencies, so you shouldn't be able to run tests in them.
2. RUN wouldn't work in this setup since no images are being built.
So my questions are:
Q2: Is it correct that integration test containers should be deployed?
Q3: Should there be a stage for integration tests in the Dockerfile that uses a CMD to be run when the image is deployed to a container?
Q4: I'm struggling to understand what this image would have on it: just tests that target the microservice endpoints (e.g., /api, /client, etc.), or is it a copy of the production build that still has testing dependencies?
Q5: If it is the latter, why deploy the production image, since you aren't really testing it but a copy of it with the testing dependencies on it?
After typing this all out, I feel like the "correct" answer is having a unit-test stage that is RUN in the process of building production, then having test-runner containers that just run integration tests with CMD against the running production images.
Q6: Or is what I just described more E2E than integration testing?
Thanks in advance for any feedback.
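For reference, the multi-stage shape I keep referring to, boiled down (stage names, file names, and commands are mine, not from the template):

```dockerfile
# base: shared runtime dependencies only
FROM python:3.9-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# test: adds dev dependencies; unit tests run at *build* time via RUN
FROM base AS test
COPY requirements-dev.txt .
RUN pip install -r requirements-dev.txt
COPY . .
RUN pytest tests/unit        # the image build fails here if unit tests fail

# integration: tests kept as the CMD, to run as a container after deploy
FROM test AS integration
CMD ["pytest", "tests/integration"]

# production: built from base, so no dev dependencies end up in it
FROM base AS production
COPY . .
CMD ["gunicorn", "app:app"]
```

This is the RUN-vs-CMD split the questions above are about: RUN gates the build, CMD defines what a deployed test-runner container executes.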
https://redd.it/pyw0ii
@r_devops
(Linked in the post: python-poetry-docker-example/docker/Dockerfile, from github.com/michaeloliverx/python-poetry-docker-example)
Nutanix Calm
Anybody have experience using or considered using Nutanix Calm for enterprise IaC deployments? Want to know if it’s worth paying for over just using terraform
https://redd.it/pyulns
@r_devops
Script getting started with Terraform in an Azure tenant
If you've ever wanted to get an Azure tenant set up for Terraform and don't want to have to reference this article: https://learn.hashicorp.com/collections/terraform/azure-get-started or that article: https://docs.microsoft.com/en-us/azure/developer/terraform/overview
How about just trying my script? https://seehad.tech/2021/08/30/use-powershell-to-setup-any-azure-environment-for-terraform/
Check out my site for other good scripts for Azure! https://seehad.tech
https://redd.it/pyymcx
@r_devops