How do you create your Kubernetes configuration?
I was wondering how you create your Kubernetes resources so that everything is stable and secure.
There are many ways of improving the quality of the artefacts created, but I wonder whether you are actually dedicating some effort to generating the best configuration manifests possible, or just getting it out of the way.
View Poll
https://redd.it/11k5m75
@r_devops
Posted by u/chargi0 - No votes and no comments
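One common way to invest that effort is a validation step in CI. A minimal sketch, assuming the kubeconform and kube-score tools are installed and manifests live under ./manifests (both tool choices are assumptions, not something the poll prescribes):

```shell
# validate-manifests.sh -- sketch of a CI step that checks manifests before merge
cat > validate-manifests.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Schema-validate every manifest against the Kubernetes OpenAPI schemas
kubeconform -strict -summary manifests/

# Score manifests for stability/security hints (probes, limits, securityContext)
kube-score score manifests/*.yaml

# Optional: let the API server validate without applying anything
kubectl apply --dry-run=server -f manifests/
EOF
bash -n validate-manifests.sh && echo "script parses"
```

Policy engines such as OPA/Conftest or Kyverno can layer organisation-specific rules on top of the generic checks above.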
Legality of employer not paying for oncall?
My employer capped the oncall hours they pay out, in theory "to give us more wlb." Post-layoffs, we're all now going over the limit because there are fewer people oncall. So we're working for free. Is this legal? This is in California.
https://redd.it/11k4a5r
@r_devops
Posted by u/IdesOfMarchCometh - No votes and 11 comments
Should CI/CD tooling build & deploy its own configuration and infrastructure?
An ongoing conversation I'm having with a colleague regarding our Jenkins infrastructure. Our Jenkins deployment is specified by several layers of infrastructure/configuration-as-code: Terraform, Ansible, CasC, shared libraries, Packer images for build agents. Each of these is specified in git, and changes require testing, validation, packaging, and release automation.
Some such changes are currently built/tested/deployed manually. Some are managed by an external system. Some are managed by Jenkins itself. We are in a long conversation about what's the most "correct" system on which to manage these processes.
Doing things manually: pros, simple to reason about; cons, risks around human error, repeatability, velocity
External system: pros, segregation of duties, no "recursive" loops; cons, getting into a turtles-all-the-way-down situation
Self-management: pros, fewer CI/CD platforms to maintain, pipeline code written using the same syntax and elements as other pipelines; cons, if a bad release breaks CI/CD you might lock yourself out, and any vulnerabilities in the CI/CD platform might be magnified if it has permission to alter itself.
This question might be more relevant for self-hosted solutions than SaaS tools. However, to those who would suggest we move away from Jenkins: (a) easier said than done; there's a decade of technical inertia behind our installation that would need to be migrated to another tool; (b) it's stable, performant, and well understood, and there's no urgent business need to migrate; (c) the theoretical problem would certainly exist with any other self-hosted tool, and would have arguable parallels in a SaaS solution.
This isn't an urgent problem, things work mostly pretty well. I'm more interested in what an idealized architecture would look like, and how other people are approaching this topic.
https://redd.it/11k6nip
@r_devops
Posted by u/Ok-Photo-7835 - No votes and 2 comments
How many of you manage Kubernetes on remote servers vs cloud managed servers?
I have yet to meet somebody who is managing K8S on remote servers, as everybody I know is doing K8S the cloud managed way (mostly AWS). Is this industry standard at this point?
https://redd.it/11ka9hi
@r_devops
Posted by u/bald_baby128 - No votes and 6 comments
Looking for feedback on first DevOps Strategy
So my company is redesigning our websites, and as such we are building everything from scratch, including our Git and GitHub repos. We are a small team of five devs, but our website is quite sizeable (~5,000 pages); it's almost entirely static content for products and documents. We have no DevOps person per se, but since I have the most experience with GitHub and our deployment tool, I have become the de facto DevOps person. My concern is that I have no actual training in this area, but neither does anyone else. I would like to get some feedback on the strategy I am planning for this project.
So what I have planned out thus far:
We have one repo with all the website projects in it. I want to define four special protected branches:
1. main: the primary source of code; all other branches will be based on it.
2. production: deployed to our server for live hosting.
3. staging: for testing features in a live environment.
4. development: for developing new features that require a deployment but do not need access to production databases.
My idea is that we will use these branches for testing and deployment, and then create feature branches based on main when we are adding new features.
We are using GitHub Actions to automate deployments via Octopus Deploy to a Windows Server 2019 IIS server.
The workflow is fairly simple and I think will work for our intended purposes; however, I am wary about keeping the git branches in line. What are the best ways to deal with this?
Also any feedback in general is welcome.
Thank you.
https://redd.it/11kdge1
@r_devops
Posted by u/d0rf47 - No votes and no comments
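The branch layout described above can be sketched in plain git (throwaway repo and hypothetical feature-branch name):

```shell
# Sketch: the four long-lived branches plus a feature branch, in a throwaway repo
rm -rf branch-demo && git init -q branch-demo && cd branch-demo
git symbolic-ref HEAD refs/heads/main                    # name the default branch "main"
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial commit"

for b in production staging development; do              # protected branches, all cut from main
  git branch "$b" main
done

git switch -q -c feature/product-page main               # feature work branches off main
git branch --list | tee /tmp/branch-demo.txt
```

The protection itself (required reviews, status checks, who may push) lives in GitHub's branch-protection settings rather than in git, and keeping production/staging/development "in line" usually means merging only in one direction: feature into main, main into development/staging/production.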
Upload Ansible Scripts as Artifact in JFrog?
Is it possible to upload an Ansible script as an artifact in JFrog? It would then be called by my AMI setup from a Bitbucket repo.
https://redd.it/11kd535
@r_devops
Posted by u/Mountain_Ad_1548 - No votes and 3 comments
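Yes, a generic Artifactory repository accepts arbitrary files. A minimal sketch, assuming a repo named ansible-local (hypothetical) and either a configured JFrog CLI or an API token for the curl variant:

```shell
# upload-ansible.sh -- sketch: publish a playbook bundle to a generic repo
cat > upload-ansible.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Package the playbooks and upload with the JFrog CLI
tar czf site-playbooks.tar.gz ansible/
jf rt upload site-playbooks.tar.gz ansible-local/playbooks/

# Equivalent plain-HTTP upload (Artifactory accepts a simple PUT)
curl -fsS -H "Authorization: Bearer $ARTIFACTORY_TOKEN" \
  -T site-playbooks.tar.gz \
  "https://example.jfrog.io/artifactory/ansible-local/playbooks/site-playbooks.tar.gz"
EOF
bash -n upload-ansible.sh && echo "script parses"
```

The AMI build kicked off from the Bitbucket repo could then fetch the same path with `jf rt download` or a plain GET.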
thoughts on aws/live coding interview
Hey everyone, I have a technical interview coming up where I was told that I will have access to an AWS account, with basic coding, possibly Kubernetes, and fixing some stuff in the account. I have worked with AWS for a few years but have never been in a "hands-on live" interview like this. What is the best way to prepare, or what things can you think of that would be the best ways to practice?
https://redd.it/11kc7gz
@r_devops
Would you say developing an application is DevOps?
I was hired for a DevOps role, and for some time I think I could say I was indeed working in a DevOps role (working with monitoring scripts, IaC), but what I've been doing lately is developing an application. I can't say I'm not learning anything, such as working with the cloud, because this application does provision stuff on the cloud. But this application is actually part of the product that is sold by this company.
So I have to ask, would you say this is DevOps?
https://redd.it/11kja7m
@r_devops
Posted by u/oromboro - No votes and 1 comment
Describe your thoughts on Agile in five words or less.
Title says it all
I'm giving a talk on this
All viewpoints welcome!
https://redd.it/11kj08e
@r_devops
Posted by u/mrcrassic - No votes and 13 comments
1 WEEK TO GO: Register for Python Web Conf Today!
Join Pythonistas from around the world for the 5th annual Python Web Conference (March 13-17). Tickets include 5 days, 65+ live talks, expert-led tutorials, social events, an exclusive pass to all conference recordings for 90 days, cool swag and more. Don’t wait, buy your ticket today!
That's not all: we're offering an exclusive 15% discount code for past Python Web Conference attendees! To register, use the discount code "PastPWCAttendee" at checkout or check out this link 👇
https://ti.to/six-feet-up/python-web-conf-2023/discount/PastPWCAttendee
Full Schedule: [https://2023.pythonwebconf.com/schedule](https://2023.pythonwebconf.com/schedule)
Register today: https://ti.to/six-feet-up/python-web-conf-2023
See you on March 13!
https://redd.it/11kk0u5
@r_devops
Python Web Conference 2023
Join Six Feet Up for the 5th annual Python Web Conference, the most in-depth conference for rising experts.
Talks and Tutorials: March 13-17, 2023, 9am-2pm US ET/UTC-5
Interactive Socials: hosted daily from 2-3pm US ET/UTC-5
Maven for devops engineer in jenkins pipeline
I’m managing a Jenkins pipeline for a Java application, and whenever it gets to Maven it’s complete darkness for me.
Is there a good resource to learn Maven: Maven commands, pom.xml, JAR files, etc., enough for a DevOps engineer to feel comfortable troubleshooting/fixing Jenkins pipelines?
https://redd.it/11knoo9
@r_devops
Posted by u/victor_yanukovich - No votes and no comments
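As a starting point, a handful of Maven commands covers most pipeline troubleshooting; a cheat-sheet sketch (run from the directory containing pom.xml):

```shell
# maven-cheatsheet.sh -- the commands that cover most Jenkins/Maven debugging
cat > maven-cheatsheet.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

mvn -B clean verify             # full build: compile, test, package (-B = batch mode, good for CI)
mvn -B clean verify -DskipTests # same build, but skip the test phase
mvn dependency:tree             # who pulls in which JAR (dependency conflicts)
mvn help:effective-pom          # the POM after all parent/profile/property merging
mvn -B clean verify -X          # debug output when a plugin fails mysteriously
EOF
bash -n maven-cheatsheet.sh && echo "script parses"
```

Understanding the lifecycle phases (validate, compile, test, package, verify, install, deploy) and that plugins bind goals to those phases explains most of what scrolls past in a Jenkins log.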
Help with Deployment for a DevOps Beginner.
Hi,
I'm a beginner to AWS/DevOps and am having a hard time implementing my project. Basically, I have a public API I want to host and give access to the public. I have bought a domain, e.g. example.com, on Namecheap and want to host the API on a subdomain like xapi.example.com.
I have a few questions regarding this (the most cost-effective way is preferred).
TLDR on my project.
* Domain bought on NameCheap
* API hosted on AWS AppRunner
* Using AWS RDS as primary database.
1. Do I need to use AWS Route 53 at all at this point?
2. If not, what option do you recommend?
3. If yes, how do I use Route 53 in this scenario?
4. I also want to add SSL protection to the domain; how can I achieve this?
Thanks a lot.
https://redd.it/11knj54
@r_devops
Posted by u/rashm1n - No votes and no comments
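On questions 1-4: to my understanding, Route 53 isn't strictly required. App Runner's custom-domain feature hands back the DNS records to create at the registrar, including certificate-validation CNAMEs, so TLS is handled by ACM and the CNAMEs can live in Namecheap. A sketch with a placeholder service ARN:

```shell
# custom-domain.sh -- sketch; the service ARN is a placeholder, not a real value
cat > custom-domain.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Associate the subdomain with the App Runner service; the response contains
# the DNS target plus certificate-validation records to create at the registrar
aws apprunner associate-custom-domain \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/EXAMPLE \
  --domain-name xapi.example.com

# Poll until the custom domain reports an active status
aws apprunner describe-custom-domains \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-api/EXAMPLE
EOF
bash -n custom-domain.sh && echo "script parses"
```

In Namecheap you would then add a CNAME pointing xapi to the returned DNS target, plus the validation CNAMEs; moving the zone to Route 53 is optional and mostly buys alias records and single-console management.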
How to CI/CD for Azure Virtual Machines
I currently have an Azure VM running a complex Docker application (website, API server, Redis cache) built with docker-compose, using images pulled from an Azure Container Registry. Right now, when I push to the respective GitHub repos, I have a GH workflow to build and push to the Azure Container Registry. The issue is that I then have to manually SSH into the VM, pull the new images, and run docker-compose again. This of course is not very CD on my part, but I don't know how else to do it. I have looked into CI/CD on an Azure VM using Azure DevOps, but all the tutorials and examples I find online are hard to adapt to my use case. Is there a way to automate the pulling of the images and re-deployment on the Azure VM? Or is there a totally different way to do this that's much better?
https://redd.it/11kefcs
@r_devops
Posted by u/sxmedina - 2 votes and 4 comments
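One common pattern is to let the same GitHub workflow SSH into the VM after the push and re-run compose there. A sketch with hypothetical host, registry, and path names:

```shell
# redeploy.sh -- sketch of the step a GitHub Actions job could run over SSH
# after pushing new images (all host/registry/path names are placeholders)
cat > redeploy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

ssh azureuser@my-vm.example.com <<'REMOTE'
set -euo pipefail
cd /opt/myapp
az acr login --name myregistry          # refresh registry credentials
docker compose pull                     # fetch the images the workflow just pushed
docker compose up -d --remove-orphans   # recreate only the changed containers
docker image prune -f                   # drop superseded images
REMOTE
EOF
bash -n redeploy.sh && echo "script parses"
```

Alternatives include running a Watchtower container on the VM that polls the registry and restarts containers itself, or Azure DevOps deployment groups if you want the pipeline tooling to own the VM side.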
Remote state isolation with terraform workspaces for multi-account deployments
I decided to try Terraform workspaces instead of using a wrapper script for managing environments, and especially the remote state. I wrote a small blog post on how to segregate access to the remote state, given that Terraform creates a state key in a single bucket: https://ifritltd.com/2023/03/05/remote-state-isolation-in-multiple-environments-with-terraform-workspaces/
https://redd.it/11kikb3
@r_devops
Posted by u/kenych - No votes and no comments
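For readers new to workspaces, the basic flow looks like the sketch below. With the S3 backend and the default workspace_key_prefix, each non-default workspace's state lands under env:/&lt;workspace&gt;/&lt;key&gt; in the same bucket, which is the prefix an IAM policy can then scope per environment:

```shell
# workspaces.sh -- sketch of the per-environment workspace flow
cat > workspaces.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Create the workspace on first use, select it on subsequent runs
terraform workspace new dev || terraform workspace select dev

terraform plan -var-file="dev.tfvars" -out=dev.plan
terraform apply dev.plan

terraform workspace list    # lists all workspaces, * marks the current one
EOF
bash -n workspaces.sh && echo "script parses"
```

Inside the configuration, `terraform.workspace` is available for naming and tagging resources per environment.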
Collecting metrics for cron jobs on a per-execution basis with Prometheus?
I have several cron jobs that last from a couple of minutes to several hours. I want to emit time-series data (such as latency from HTTP calls made by the cron job) to Prometheus. However, I also want to be able to do time-series aggregation down to the level of a specific job execution. For example, if a job executes twice, I want to be able to view the quartiles for the first execution and then also view the quartiles for the second. My initial thought was to use two labels: job_id and job_execution_id. However, this would lead to high cardinality. Is Prometheus still the right solution for this?
https://redd.it/11ktfcw
@r_devops
Posted by u/roadbiking19 - No votes and no comments
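For batch jobs, the usual Prometheus pattern is to push per-run metrics through a Pushgateway rather than label every execution; a sketch with a hypothetical gateway host:

```shell
# push-metrics.sh -- sketch: push per-run metrics to a Pushgateway; the grouping
# key is job name plus instance, deliberately NOT an unbounded execution id
cat > push-metrics.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

START=$(date +%s)
# ... the real cron work happens here ...
DURATION=$(( $(date +%s) - START ))

cat <<METRICS | curl -fsS --data-binary @- \
  "http://pushgateway.example.com:9091/metrics/job/nightly_etl/instance/$(hostname)"
# TYPE cron_job_duration_seconds gauge
cron_job_duration_seconds $DURATION
# TYPE cron_job_last_success_timestamp_seconds gauge
cron_job_last_success_timestamp_seconds $(date +%s)
METRICS
EOF
bash -n push-metrics.sh && echo "script parses"
```

A job_execution_id label would mint a new time series per run, which is exactly the cardinality trap the post worries about; for true per-execution quantiles, per-request data in logs or a database (queried at execution granularity) is often a better fit than Prometheus.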
CI/CD pipeline using Docker Containers into Heroku - what's actually being deployed?
I've started a basic hobby project of a Python Flask API, and in trying out new things, I've found myself using Docker containers in a CI/CD pipeline (Semaphore) to deploy to Heroku. It has been a really interesting/frustrating learning experience.
I've reached the point where I can get my containers to run nicely locally, build and deploy to Docker Hub when there are changes in the main branch, lint the code depending on environment, run unit and end-to-end tests, and then, once everything is green, deploy to Heroku. Frustratingly, the app fails on Heroku with the generic 510 error code.
I'm about to drop back into the troubleshooting loop, and will no-doubt learn more useful stuff, but my knowledge of the CI/CD space is so limited, I don't quite know where the issue may be. I'd be really grateful if someone could check my mental model of what's currently happening.
The app consists of 5 separate Docker containers:
Webserver/proxy (Nginx, gunicorn)
Web (Flask app)
Persistent DB for data (Postgres)
Worker (Celery)
Cache DB for the worker (Redis)
Each of these has its own Dockerfile, and everything is coordinated via a docker-compose.yml. docker-compose pulls the env variables from a specific .env file; the command line is used to reference the start.sh file for each container (this handles any start-up instructions for the container); and each container is tagged with the same image ([dockerhubusername]/[dockerhubproject]:latest).
This works well locally, and I have pushed all 5 containers to DockerHub as one image.
My CI/CD pipeline consists of 3 blocks:
build: checks for changes in git, builds a new image (using docker-compose build), tags it, pushes it back to DockerHub
test: gets the latest image from dockerhub, runs the images (using docker-compose up), runs the tests
deploy: uses a heroku.yml file to pull the image from dockerhub, log-in to heroku:container, push the image to heroku container, set the stack to the container, and then release it as a web process
This all runs successfully, but the app fails over on Heroku with 510 code and nothing else. Heroku states that it's building the app using Python3.
What I am having difficulty conceptualising, is what's actually being pushed to Heroku?
Am I deploying the 5 separate containers in a 'down' state? In which case, surely I need to use something analogous to docker-compose in the heroku.yml to spin them up, run start-up commands, etc.
Am I deploying one container image which contains the 5 containers in a down state, so now I need to spin up this overarching 'app' container and then run docker-compose inside it over on Heroku?
Given that I have web, worker and DB containers, do I need to split them into different processes over on Heroku, deploying my web container to a web process, worker to a worker process, etc.?
Should I have built each container as a separate image in Docker Hub, deployed them individually to Heroku, and coordinated and orchestrated over on Heroku (or via the heroku.yml)?
Once I know which mental model is closest, I'm happy to dig around and play with config and variables, I'm just stuck at the moment knowing which thread to pull. Any pointers gratefully received.
https://redd.it/11ku15p
@r_devops
Posted by u/hitman_cat - No votes and 1 comment
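One likely mismatch, to my understanding: Heroku never runs docker-compose, so the compose file stops mattering at the platform boundary. Each process type is released from its own image, and the Postgres/Redis/nginx roles are usually replaced by Heroku add-ons and its built-in router. A sketch along the lines of the third/fourth mental model (hypothetical app name; Dockerfile.web and Dockerfile.worker at the repo root assumed):

```shell
# heroku-deploy.sh -- sketch: one image per process type instead of one compose stack
cat > heroku-deploy.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

heroku container:login
# --recursive builds Dockerfile.web and Dockerfile.worker as separate images
heroku container:push web worker --recursive -a my-flask-app
heroku container:release web worker -a my-flask-app

heroku logs --tail -a my-flask-app    # the concrete error will show up here
EOF
bash -n heroku-deploy.sh && echo "script parses"
```

The web image should bind gunicorn directly to Heroku's injected $PORT; Postgres and Redis would come from add-ons whose connection URLs arrive as config vars rather than from your own containers.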
How to create a github actions workflow that automatically creates a PR and merges it
I have a protected main branch in my organization, while the other branches are not protected. The problem is that I want the commits to be done in the name of an action bot, which directly shows it is an automated action. However, I couldn't find a reasonable way to do it. The closest I got was PRs created on a push trigger by the bot, which need to be manually approved by an admin. But then another PR was created when the first one was automatically merged, and the author was the admin, not the bot.
I can't have 2 workflows because an automated action can't trigger another. I will try using GitHub Actions, but it still seems like the workflow to create a PR will be triggered again for each automatic PR merge.
For a bit of context, the PR contains a build that always generates a new file, but it should run only on new pushes. Can anyone provide some guidance?
https://redd.it/11kv16y
@r_devops
Posted by u/National-Evidence-79 - No votes and no comments
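One relevant detail, to my understanding: work performed with the workflow's default GITHUB_TOKEN is attributed to github-actions[bot], and events it creates do not start new workflow runs, which addresses both halves of the problem (bot attribution, and the merge not re-triggering the job). A sketch of such a step using the gh CLI (branch and file names are hypothetical):

```shell
# auto-pr.sh -- sketch: commit the generated file, open a PR, enable auto-merge;
# run inside a workflow with GH_TOKEN set to the default GITHUB_TOKEN
cat > auto-pr.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

git switch -c automated/build-output
git add generated-file
git -c user.name="github-actions[bot]" \
    -c user.email="github-actions[bot]@users.noreply.github.com" \
    commit -m "chore: regenerate build output"
git push -u origin automated/build-output

gh pr create --base main --head automated/build-output \
  --title "Automated build output" --body "Generated by CI"
# Merges automatically once the required admin approval and checks are in place
gh pr merge automated/build-output --auto --squash
EOF
bash -n auto-pr.sh && echo "script parses"
```

The flip side is that if you switch to a PAT or GitHub App token (e.g. to make required workflows run on the bot's PR), its events do trigger workflows again, so you would then need a guard such as a paths filter or an `if: github.actor != '...'` condition to avoid loops.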
k8s Simple Lambda http proxy
For a POC I want to be able to seamlessly call Lambdas in AWS from inside a kube cluster, without having to use API Gateway / FunctionURLs etc.
I've tested with Kong ingress controller, but this will only proxy external requests to lambdas as it acts on the ingress layer, so doesn't work for internal requests from inside the cluster.
I did a very quick spike to create a Go HTTP server that listens for requests, and just calls "invokeFunction" and returns the response. This could then run in a Pod behind a Service and invisibly call lambda functions as if they were another k8s service.
But I'm sure something like this already exists: a simple single binary that proxies Lambda requests. If it doesn't, I'll write something myself.
notes:
I've worked with k8s for a few years, but have moved jobs and never had to deal with this situation before.
The Kong Lambda proxy does exactly this; it just calls invokeFunction. It's written in Lua and is part of their ingress controller.
I don't want the hassle of API Gateway due to internal business reasons / the amount of Terraform needed, which I just hate.
Function URLs are public unless you want to mess with IAM permissions / signed requests from clients, and I want to try and avoid that.
https://redd.it/11kwvcm
@r_devops
Posted by u/cuotos - No votes and no comments
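For what it's worth, the proxy described reduces to one call per request, the Lambda Invoke API; expressed with the AWS CLI (the function name is a placeholder):

```shell
# invoke-lambda.sh -- sketch: what the in-cluster proxy does per HTTP request
cat > invoke-lambda.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

aws lambda invoke \
  --function-name my-internal-fn \
  --cli-binary-format raw-in-base64-out \
  --payload '{"path": "/health", "method": "GET"}' \
  /tmp/lambda-response.json

cat /tmp/lambda-response.json    # the function's response body
EOF
bash -n invoke-lambda.sh && echo "script parses"
```

In-cluster, the proxy Pod would get lambda:InvokeFunction permission via IRSA (IAM Roles for Service Accounts) rather than static keys, so clients inside the cluster never touch AWS credentials.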
Argo CD vs GoCD
In the near future, I will try to convince the client to change the GoCD tool to another, more community-supported one. The first thing that comes to my mind is Argo CD. I'm trying to find some comparison of these tools but I can't find something like that. Do you know of any sources where such a comparison has been made?
If not, does anyone have experience with these tools and can share their thoughts?
https://redd.it/11kwj0l
@r_devops
Posted by u/czerniga_it - No votes and 2 comments
Artifactory with MFA
Hello ppl,
I am looking for a way to harden our Artifactory by implementing an MFA solution. Right now JFrog is being used; the current setup has anonymous downloads enabled and uses a standard username and password for uploading.
Going forward, we would like to keep the non-UI interaction anonymous and protect UI activity with MFA. Any suggestions?
https://redd.it/11kxjih
@r_devops
Posted by u/CrazyBrownDog - No votes and 2 comments
What's your on call rate?
My employer is preparing to introduce an on-call scheme. We're negotiating contracts now. They offered 150% of normal wage during on-call times, and, for each hour spent on an incident, the same number of hours added to holidays.
This is the initial offer, which doesn't cover what happens when an incident lasts longer than the on-call hours.
https://redd.it/11kzqrs
@r_devops