Are forward auth and redirect auth the same?
So I'm new to auth in general. Let's assume I have an IdP such as Keycloak, and we're doing OIDC-based auth. The desired architecture is one where an unauthenticated API request hits a reverse proxy, which then offloads authentication to the IdP; the reverse proxy thus acts as an API gateway.
I'm trying to understand if there exists a difference in the way the auth is handled:
Reverse proxies like Traefik and Nginx seem to do "Forward Auth", which, as I understand it, forwards the incoming request (or its headers) to the authn/IdP service for a decision.
AWS ALB seems to do "Redirect Auth", which, as I understand it, redirects the client to the authn/IdP service. That requires the authn endpoints to be publicly exposed and results in more API calls from the client.
Is this accurate? If so, what are the pros and cons of each?
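For context, "forward auth" means the proxy makes an internal subrequest for every incoming request and only proxies it upstream on a 2xx; Traefik's forwardAuth middleware and nginx's auth_request both work this way. Below is a minimal sketch of such an auth endpoint, assuming Python/Flask and a placeholder token check (real OIDC validation against Keycloak's JWKS is omitted):

```python
# Minimal forward-auth endpoint sketch (names and the token check are
# placeholders). A proxy such as Traefik (forwardAuth) or nginx
# (auth_request) calls this for every incoming request; a 2xx lets the
# request through, anything else blocks it.
from flask import Flask, request

app = Flask(__name__)

def token_is_valid(token: str) -> bool:
    # Placeholder: in practice, validate the JWT signature and claims
    # against the IdP's JWKS (e.g. Keycloak's realm keys).
    return token == "expected-demo-token"

@app.route("/auth")
def forward_auth():
    auth_header = request.headers.get("Authorization", "")
    if auth_header.startswith("Bearer ") and token_is_valid(auth_header[7:]):
        # 200 tells the proxy to forward the original request upstream.
        return "", 200
    # 401 tells the proxy to reject (or redirect) the original request.
    return "", 401

if __name__ == "__main__":
    app.run(port=4181)
```

The ALB-style redirect flow, by contrast, sends the browser itself to the IdP's authorize endpoint and back, which is why those endpoints must be reachable by the client.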
https://redd.it/yo2f4f
@r_devops
How do we densify the EC2 instances?
We are running production workloads owned by different teams (which provision and own their own systems) on a number of EC2 instances. However, utilization is comparatively low across the Auto Scaling groups. I am looking to densify these n EC2 instances so we can leverage the compute more efficiently.
I was thinking of deploying more services on ECS/Fargate or EKS. However, some of the use cases (legacy systems) are still running directly on EC2 instances. Is there any way we can consolidate workloads onto larger compute instances with better efficiency?
https://redd.it/ykzfer
@r_devops
Best strategy to deploy
Hi everyone, I am brand new to this area. I have this scenario:
I have a GitHub repo (Next.js project), and every time someone pushes to the main branch I want to build the project, package it into a Docker container, then run that container on my server. What is the best way to reach this goal?
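For what it's worth, one common shape for this is a GitHub Actions workflow that builds and pushes the image, then restarts the container over SSH. A rough sketch, where all secret names (REGISTRY_URL, DEPLOY_HOST, etc.) and the appleboy/ssh-action step are illustrative assumptions, not a prescribed setup:

```yaml
# Hypothetical workflow - build a Next.js image on every push to main and
# redeploy it on a single server over SSH. All secret names are examples.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ${{ secrets.REGISTRY_URL }}
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: ${{ secrets.REGISTRY_URL }}/myapp:${{ github.sha }}
      - name: Restart container on the server
        uses: appleboy/ssh-action@v0.1.5
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_SSH_KEY }}
          script: |
            docker pull ${{ secrets.REGISTRY_URL }}/myapp:${{ github.sha }}
            docker rm -f myapp || true
            docker run -d --name myapp --restart unless-stopped -p 80:3000 ${{ secrets.REGISTRY_URL }}/myapp:${{ github.sha }}
```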
https://redd.it/ykxdsq
@r_devops
Moving to DevOps culture - to leave or not to leave?
Hi dear redditors.
My professional profile fits a classic Linux admin: some basic experience with cloud + automation tools that I learned on my own through personal side projects outside of work.
Planning for the future, I wanted to start moving my professional profile toward the cloud + automation side, and to find a company that would offer me a job with projects where I could develop new skills and learn the particulars of environments and projects with more of a "devops" culture.
A year and a half ago, I joined my current company, where the offer, in theory, was to cover projects more involved with cloud and a devops approach - just what I wanted.
Unfortunately, in my time here I haven't worked on many projects like that, as they keep me on work that isn't very interesting to me, like classic admin tasks or deploying tools unrelated to my interests.
In summary, after all this time my feeling is that I haven't learned anything interesting and I've wasted an entire year and a half here without progressing much.
Some weeks ago, I communicated this situation to my boss, and he proposed involving me in "another" project that covers part of my interests. I wanted to give it one last chance, so I accepted.
Now, it's true that the project has some tools that are interesting to me, but I'm starting to spot some things that aren't very comfortable, like the bureaucracy required to make progress, working through VDIs, being forced to use Windows, very restricted machines, etc.
What do you think?
Is it better to keep working on this latest project, learning new skills but in an unpleasant development environment, to gain more experience and later move to another company? Or is it better to just leave my current company and take some time to learn the tools that really interest me?
The catch is that if I choose to leave and learn on my own, I may later lack "real" experience, which could be a handicap when looking for a new job.
Thanks for your time.
https://redd.it/ykwjfd
@r_devops
how was your k8s learning curve?
I've recently started picking up Kubernetes in my homelab as a learning experience, and although I have a working k3s cluster set up, most of the time I have hardly any idea of what I'm doing while following guides online. Most of my time is spent banging my head against the wall when something doesn't work and I don't know where to even start debugging it.
I know that it's a process and to give it some time, but I'm curious how you all ended up picking it up, or how it's going so far?
https://redd.it/yocb9b
@r_devops
Found a zero day vulnerability in our application yesterday…now what?
8 years of experience.
Was out of work for a while at the end of last year and, out of desperation, took a startup job (IPO coming in the next year or so). I had been a senior architect and got down-leveled to Tier 1 - fine, whatever, I'll do what I need to do to feed my family.
The infrastructure is suuuuuuper ghetto. No automation, they want everything manual, no SAML, no AD.
Realized yesterday that there’s a zero day vulnerability in the infra. Problem is, I’m not allowed to do anything about it, because the senior software person has designed the code and the infra and thinks it’s flawless and perfect and any criticism is criticism of him.
When I say zero day, I mean that the way he's got it set up, it would be impossible for us to even know if there was a breach, and PII could be leaked for the entire company for two years or more. OOB event, possibly.
I’ve tried to warn the CTO, but he’s not technical. Senior doesn’t think there’s anything wrong. I’ve been here 9 months.
Security guy agrees; says it's critical and must be mitigated now for compliance reasons. CTO and SSWE don't think it's worth fixing and want to do it in a few years.
Do I try to make this better or just start looking for a new job now, immediately?
https://redd.it/yoepr3
@r_devops
How do you store/share passwords and links in your org?
Hi, I currently store and share passwords and links using a private SharePoint in my org. It serves the purpose for sure, but I wanted something that's a little bit classier, if you know what I mean. I'm very curious how people do this in the industry. I'd love to copy it if it serves my purpose.
https://redd.it/ykvbz3
@r_devops
MuleSoft and APIs
Has anyone here used the MuleSoft platform?
I have one point of confusion.
Do we deploy APIs on the MuleSoft platform itself, or does it help connect customer APIs deployed in their own environments?
https://redd.it/ykv3uj
@r_devops
Is it possible to pass the value of the handler to an AWS Lambda function?
I mean dynamically, such as from a stage variable?
I'm guessing not; just thought I'd ask, as it could simplify something I'm planning on building.
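For background, the handler string is fixed in the function's configuration, so it can't change per request; a common workaround is a thin dispatcher handler that resolves the real function at runtime, e.g. from the stage variables that API Gateway proxy integrations pass in the event, or from an environment variable. A minimal sketch with hypothetical module/function names:

```python
# Hypothetical dispatcher: the Lambda's configured handler stays fixed
# (e.g. "dispatch.handler") and the real work function is resolved at
# runtime. Module/function names here are examples.
import importlib
import os

def handler(event, context):
    # API Gateway proxy integrations pass stage variables in the event;
    # fall back to an environment variable, then to a default.
    stage_vars = event.get("stageVariables") or {}
    target = stage_vars.get("handler") or os.environ.get("HANDLER_NAME", "defaults.noop")
    module_name, func_name = target.rsplit(".", 1)
    func = getattr(importlib.import_module(module_name), func_name)
    return func(event, context)
```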
https://redd.it/yoi800
@r_devops
How can I create URL-specific redirects? I've tried in DNS, but that doesn't allow redirecting based on the whole URL - just the main domain part.
So I currently have `blog.mysite.io` pointing to our Medium through DNS; however, we now host the blogs directly on our website, so we want `blog.mysite.io` to redirect there - which is fine. We use AWS Route 53 for DNS, so I can simply update it.
The problem is that specific article URLs on Medium are different from the ones on our WordPress website, e.g. the Medium ones look something like this:
Medium: `https://blog.mysite.io/blah-blah-blah-3-0-45688b74665433`
and the corresponding URL on our Wordpress site looks like this:
Native site: `https://www.mysite.io/blah-blah-tech-3-0/`
So I guess I need somewhere to have the mapping logic to say `https://blog.mysite.io/blah-blah-blah-3-0-45688b74665433` goes to this `https://www.mysite.io/blah-blah-tech-3-0/`.
We only have 12 original blog posts on Medium so it can be pretty quick and dirty, it doesn't need to be dynamic or handle lots of traffic.
I could solve this by spinning up an EC2 instance and deploying an ExpressJS app to do the redirect logic but that feels like overkill.
Is there a way to use S3 or CloudFront maybe?
Thanks for any suggestions!
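Since S3/CloudFront came up: S3 static website hosting supports per-key routing rules, and a CloudFront Function can generate redirect responses on viewer-request. Another tiny option is a single Lambda behind a function URL (or CloudFront/API Gateway) holding the 12-entry map and answering 301s. A rough sketch - the two paths are the examples from above, everything else is hypothetical:

```python
# Hypothetical redirect Lambda: map old Medium-style paths to the new
# WordPress URLs and answer 301s. Entries below are illustrative.
REDIRECTS = {
    "/blah-blah-blah-3-0-45688b74665433": "https://www.mysite.io/blah-blah-tech-3-0/",
    # ... the other ~11 posts go here
}

def handler(event, context):
    # Function URLs / HTTP APIs put the request path in "rawPath".
    path = event.get("rawPath", "/")
    location = REDIRECTS.get(path, "https://www.mysite.io/blog/")
    return {
        "statusCode": 301,
        "headers": {"Location": location},
    }
```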
https://redd.it/yohwtq
@r_devops
Datadog confusing graph
I am trying to visualize Kubernetes CPU usage in Datadog.
So I create a timeseries graph with "kubernetes.cpu.usage.total" as the metric and max it by container name, like this:
>max:kubernetes.cpu.usage.total{container_name:my_container_name} by {container_name}
What is confusing me is that I see different values based on the time period I select. When I set a 1-week time period, the biggest "spike" is 200 millicores, but when I zoom in on that spike (so that the period is 1 hour), suddenly the biggest spike is 1.5 cores.
What exactly is happening here, and what am I doing wrong?
https://redd.it/yooa5b
@r_devops
Is it possible to "send" a user to an external URL with a path when they hit my subdomain, if that subdomain doesn't have hosting?
I'm trying to send a user to a Google Form if they hit my subdomain. Forward, redirect, any method of sending them there.
The subdomain doesn't have hosting (and that isn't an option at the moment).
I can't do a CNAME record because it won't accept paths, and my DNS-level redirects are only supported at the domain level, not for subdomains.
Do I have any other options?
Thanks!
https://redd.it/yopogq
@r_devops
How can my front end and back end communicate when they are part of the same Docker container?
I was able to combine both of them, but the problem is that in order to use the backend, I have to expose the backend's port, which I don't want to do.
Can my frontend communicate internally with my backend?
https://redd.it/yosn3j
@r_devops
CI-pipeline: (.NET) Building and testing in Docker or directly on runner?
Hi
I'm setting up a CI pipeline and I'm wondering whether or not to use Docker to build and test. Do you have any opinions/tips/ideas/experience to share?
Example in GitHub Actions
Using runner:
Takes 35 sec.
No Docker image is built on every push. Only on a push to the release branch does the `dotnet publish` output get downloaded, built into an image, and pushed to the registry.
Pros
* Fast
* Simple
Cons
* The runner's building environment is impossible to reproduce exactly on other machines
```yaml
services:
  redis:
    # Needed for tests
    image: redis:6.0-buster
    ports:
      - 6379:6379
```

```yaml
- name: Checkout
  uses: actions/checkout@v3
- name: Setup .NET
  uses: actions/setup-dotnet@v3
  with:
    dotnet-version: 6.0.x
- name: Setup NuGet cache
  uses: actions/cache@v3
  with:
    path: ${{ env.NUGET_PACKAGES_PATH }}
    key: nugets-${{ hashFiles('**/*.csproj') }}
- name: dotnet restore
  run: dotnet restore ${{ env.SOLUTION_PATH }}
- name: dotnet build
  run: dotnet build ${{ env.SOLUTION_PATH }} --no-restore --configuration Release
- name: dotnet test
  run: dotnet test ${{ env.SOLUTION_PATH }} --no-build --configuration Release --logger trx --results-directory ${{ env.TEST_RESULTS_PATH }}
- name: dotnet publish
  run: dotnet publish ${{ env.SOLUTION_PATH }}/Web/Web.csproj --no-build --configuration Release --output ${{ env.PUBLISH_OUTPUT_PATH }}
- name: Upload publish output
  if: ${{ inputs.package }}
  uses: actions/upload-artifact@v2
  with:
    name: dotnet-publish-output
    path: ${{ env.PUBLISH_OUTPUT_PATH }}
    if-no-files-found: error
    retention-days: 1
```
-----------------------------------------------------------------
Using Docker:
Takes 1 min 35 sec.
A Docker image is built on every push, but it is only pushed to the registry on the release branch.
Pros:
* Verifies the whole image building on every push.
* Easily reproducible on any machine.
Cons:
* Slower, but not that slow?
* Harder to read and understand
```yaml
- name: Checkout
  uses: actions/checkout@v3
- name: Setup Docker Buildx
  uses: docker/setup-buildx-action@v2
- name: Log in to Docker Registry
  uses: docker/login-action@v2
  with:
    registry: ${{ secrets.REGISTRY_REPO_URL }}
    username: ${{ secrets.REGISTRY_REPO_USER }}
    password: ${{ secrets.REGISTRY_REPO_TOKEN }}
- name: Build
  uses: docker/build-push-action@v3
  with:
    context: ./src
    file: ./src/Web/Dockerfile
    target: build
    tags: ${{ env.TEST_IMAGE_TAG }}
    load: true
    cache-from: type=registry,ref=${{ env.DOCKER_BUILD_CACHE_URL }}
    cache-to: type=registry,ref=${{ env.DOCKER_BUILD_CACHE_URL }},mode=max
- name: Create Test container
  working-directory: ./devops
  env:
    IMAGE_TAG: ${{ env.TEST_IMAGE_TAG }}
  run: |
    docker compose build ${{ env.DOCKER_COMPOSE_SERVICE_NAME }}
    docker compose create ${{ env.DOCKER_COMPOSE_SERVICE_NAME }}
- name: Test
  id: test
  working-directory: ./devops
  env:
    IMAGE_TAG: ${{ env.TEST_IMAGE_TAG }}
  run: |
    docker compose run --rm --volume ${{ github.workspace }}/${{ env.TEST_RESULTS_DIRECTORY_NAME }}:/${{ env.TEST_RESULTS_DIRECTORY_NAME }} ${{ env.DOCKER_COMPOSE_SERVICE_NAME }} \
      dotnet test --no-build --configuration Release --logger trx --results-directory /${{ env.TEST_RESULTS_DIRECTORY_NAME }}
- name: Package
  uses: docker/build-push-action@v3
  with:
    context: ./src
    file: ./src/Web/Dockerfile
    tags: ${{ secrets.REGISTRY_REPO_URL }}/${{ env.IMAGE_NAME }}:${{ env.VERSION_NUMBER }}
    push: ${{ inputs.push-image == true }}
    build-args: |
      HTTP_PROXY=${{ env.DOCKER_PROXY_URL }}
      HTTPS_PROXY=${{ env.DOCKER_PROXY_URL }}
    cache-from: |
      type=registry,ref=${{ env.DOCKER_BUILD_CACHE_URL }}
      type=registry,ref=${{ env.DOCKER_PACKAGE_CACHE_URL }}
    cache-to: type=registry,ref=${{ env.DOCKER_PACKAGE_CACHE_URL }}
```
How we brought automated Rollbacks to 2,100+ services using Argo Rollouts
Hey everyone 👋 I work in the Backend Platform team at Monzo.
We've written about how our team brought automated rollbacks to our deployment system. This is the most substantial change we’ve made to our deployment system in some time, so it was not without its challenges!
At the heart of this new feature is Argo Rollouts - a Kubernetes extension that supports advanced deployment strategies. In this post we dig into how we integrated Argo Rollouts with our existing deployment tooling, while keeping the Monzo delight factor. We show how we migrated all 2,000+ services to this new system and discuss the lessons we learnt along the way.
🔗 Here's the link: https://monzo.com/blog/2022/11/02/argo-rollouts-at-scale
We’d love to hear your thoughts and questions.
https://redd.it/yox68a
@r_devops
Hi r/devops, how would you write end-to-end system tests for a system comprising multiple Java apps connected by Kafka, with multiple databases? I have managed to run the whole system in Docker for development. Now I need a framework to write test cases like the one below and run them in Docker.
app_1 --> kafka_topic_1 --> app_2 --> kafka_topic_2 --> app_3 -> postgres_db
example test case: app_1 publishes a message + assert new db entry created
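One approach that fits this shape is black-box tests run alongside the compose stack: publish to the first topic with a unique marker, then poll the final database until the row shows up (Testcontainers, available for Java and Python among others, is a popular framework for managing the Docker side from within the tests). A rough pytest-style sketch, where the topic, DSN, and table names are assumptions:

```python
# Hypothetical end-to-end test: publish to the pipeline's input topic and
# assert that the final Postgres row appears. Topic, DSN, and table names
# are examples; the docker compose stack is assumed to be running.
import json
import time
import uuid

import psycopg2
from kafka import KafkaProducer  # kafka-python

def test_message_lands_in_db():
    marker = str(uuid.uuid4())
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode(),
    )
    producer.send("kafka_topic_1", {"id": marker, "payload": "hello"})
    producer.flush()

    conn = psycopg2.connect("dbname=app3 user=test password=test host=localhost")
    deadline = time.time() + 30  # the pipeline is async, so poll with a timeout
    while time.time() < deadline:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM entries WHERE external_id = %s", (marker,))
            if cur.fetchone()[0] == 1:
                return  # row arrived; test passes
        time.sleep(1)
    raise AssertionError(f"no db entry for {marker} within 30s")
```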
https://redd.it/yown5v
@r_devops
Serverless containers: forced to do microservices? Per entity or per operation?
Tech stack if it matters: Fastify GraphQL Docker image.
I have a monolith application that was initially on Google Cloud Run, and cold start was pretty bad. But now that I think about it, that was probably because my container was a monolith.
Now I plan on migrating to AWS, where Lambda can use Docker containers. I was watching AWS talks saying you should keep everything small to reduce cold start. Please note: I do not want to use AWS AppSync; I want my GraphQL schema to live with my application and not with AWS, to stay cloud agnostic. But then again, I think I have to make my Docker containers specific to the Lambda base image.
Should AWS Lambda containers be treated the same as Google Cloud Run? They are essentially the same, right?
Back to the main question: with either AWS Lambda containers or Google Cloud Run containers (both serverless containers), am I pretty much forced to do microservices just to have a small cold start?
Do I break these down per entity, or per method? A container for CRUD on users (4x Lambdas), or one container for the user entity and all its methods?
https://redd.it/yos4wp
@r_devops
What's an outdated hiring practice that companies should get rid of?
Title.
https://redd.it/yp3gap
@r_devops
CyberSec Question - How do I implement secure installation of a debian package?
Hi,
I am currently working on a project and I've hit a wall; I'm not sure how to proceed. I have software that creates a Debian package by running through multiple BB repositories. That package is later transferred to an offline system (no internet access). I then run dpkg to install the package.
Now the thing is, I want to make sure there is some sort of verification for this procedure. I want dpkg to only go through for THIS specific debian, and for future debians I create using the software - not just any debian it is given. I also want a specific user to be able to perform this installation, so I want to put a NOPASSWD line in a sudoers.d/user file for the dpkg command, allowing the user to install this debian, but only if verification goes through. I could just go with adding dpkg [filename] to the sudoers file, but the file name alone is not good enough.
I am not really good at cybersec, so please give me some ideas on how to proceed. Thank you!!
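One possible direction, sketched below: have the build pipeline create a detached GPG signature for each .deb with a key you control, ship the .asc next to the package, and point the sudoers NOPASSWD entry at a fixed, root-owned wrapper script (not at dpkg directly) that verifies the signature before installing. The keyring path and script name are assumptions; dpkg-sig and debsig-verify are purpose-built tools worth evaluating too:

```python
#!/usr/bin/env python3
# Hypothetical verify-then-install wrapper. The sudoers NOPASSWD entry
# points at this fixed script (not at dpkg directly), so only packages
# carrying a valid detached signature from our build key get installed.
# Keyring and paths are illustrative.
import subprocess
import sys

KEYRING = "/etc/mycompany/trusted-build-key.gpg"  # assumed pre-provisioned

def main() -> int:
    if len(sys.argv) != 2:
        print("usage: install-pkg <package.deb>", file=sys.stderr)
        return 2
    deb = sys.argv[1]
    sig = deb + ".asc"  # detached signature created at build time

    # gpg exits non-zero if the signature is missing or invalid.
    verify = subprocess.run(
        ["gpg", "--no-default-keyring", "--keyring", KEYRING,
         "--verify", sig, deb],
        capture_output=True,
    )
    if verify.returncode != 0:
        print("signature verification failed; refusing to install", file=sys.stderr)
        return 1

    return subprocess.run(["dpkg", "-i", deb]).returncode

if __name__ == "__main__":
    sys.exit(main())
```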
https://redd.it/yoo1ad
@r_devops