Agile workflow with Jira and Git
Does anyone have a good resource for learning best practices for a small development team (5-10 devs) working in an agile workflow? The process we use at my work is not very efficient, and there are some challenges I can't figure out. I would love to hear about the workflows that are successful for you. I googled this but surprisingly didn't find anything super detailed, just high-level principles.
For context we work on web front-ends in React and back-ends (REST or GraphQL APIs) in Node. Database is on Planetscale. We use Jira and deploy on AWS but I would be open to any other tools/platforms.
Here's the process as I understand it, as well as my questions:
1. Developer creates a new branch to work on a feature/bugfix/etc (an item in the sprint)
2. When done, dev creates a pull request to merge their feature branch into the Staging branch
3. Someone (product manager in our case) tests the app against the acceptance criteria and if everything is OK they approve + merge the pull request into the Staging branch
1. For web front-ends (static sites like React, Vue, etc), all of these feature branches are automatically deployed using branch previews (like in Netlify, AWS Amplify, etc), so the PM can do their testing vs acceptance criteria in these temporary automatic deployments, no issue there. Database branching is done in Planetscale and that part works well too.
2. But how can/should this be done for web back-ends, or other systems that run on a server? For example, a REST API running on Kubernetes, AWS ECS, AWS EC2, etc. Right now these need to be deployed manually, which is a huge pain. Even using Terraform/CloudFormation there are still manual steps required. Ideally every branch would be automatically deployed to some unique URL that we can use temporarily for testing.
3. How do we ensure the front-end is communicating with the correct instance of the back-end (.env management)? Same thing for pointing the back-end to the correct database branch URL. Right now this is done manually with environment variables.
4. Since the front-end and back-end are in separate Git repos, how do we ensure both of these branches and PRs for the same feature are in sync? How do we avoid having to approve/merge 2 separate PRs in 2 separate repos for the same feature / item in Jira? Is a monorepo the best approach? What if there are separate teams for front-end and back-end?
4. Once the PR is merged, the feature branch is deleted and the dev moves on to the next task in Jira and repeats the process
5. Once all the items we want to include in the next release are done and merged into staging, some more testing is done in staging and finally a release PR is created and merged into main and automatically deployed (CI/CD pipeline automatically runs and deploys the staging and main branches)
6. What do we do if we don't want to deploy every single change that was merged to staging? For example maybe the business decides to delay the release of a certain feature but it was already merged to staging. How can we avoid merging that into main? Do we have to implement feature flags for everything?
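On the preview-environment questions (2-4): one common pattern, sketched below purely as an illustration, is to have CI deploy every pull request to a URL derived from the PR number, and tear it down when the PR closes. This assumes GitHub Actions and containerized back-ends; every name, registry, and domain here is a placeholder, not a recommendation:

```yaml
# Hypothetical per-PR preview workflow (GitHub Actions syntax assumed).
name: pr-preview
on:
  pull_request:
    types: [opened, synchronize, closed]
jobs:
  deploy-preview:
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the API image
        run: |
          docker build -t "$REGISTRY/api:pr-${{ github.event.number }}" .
          docker push "$REGISTRY/api:pr-${{ github.event.number }}"
      - name: Deploy to a per-PR hostname (tooling is platform-specific)
        run: |
          # e.g. with Helm on Kubernetes:
          # helm upgrade --install "api-pr-${{ github.event.number }}" ./chart \
          #   --set ingress.host="pr-${{ github.event.number }}.api.example.com"
          echo "deploy step depends on your ECS/EKS/EC2 setup"
  teardown-preview:
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - run: echo "delete the pr-${{ github.event.number }} environment here"
```

Because the back-end hostname is derived from the PR number by convention, a front-end preview can compute its API base URL the same way instead of relying on hand-edited .env files, which also bears on question 3.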
Please forgive me if this isn't the right subreddit for this question, and I would greatly appreciate a pointer to a better place to ask it.
https://redd.it/yxinb0
@r_devops
Best tips for preparing for technical DevOps interviews. Is grinding Leetcode needed/worth it at all?
Context: I am already a DevOps Engineer and currently looking for a new position. I had some previous experience as a Software Engineer doing Java development but took a break from development for a year as I didn't enjoy programming all day, every day. I got a different, more business-focused position at a new company, but opportunities and my skill set brought me through Release Engineering and then into DevOps. Coming into my current role, I knew enough about programming to get the position, but due to that year-long break my programming skills are a little rusty.
In my role I have been doing some light Groovy scripting. Maintaining our pipelines, adding new steps and functionality to a handful of them, but I don't feel like any of the work that I have been doing has been really exercising any HARD programming skills/concepts.
Given my knowledge and background in OOP, I've been trying to learn Python from scratch (Bash comes next), as I feel it is the most useful/practical language in a DevOps role.
What types of problems/concepts should I be practicing when studying for the coding portion of technical DevOps interviews? Is grinding Leetcode problems and going through algorithm and data structure problems (stuff I would normally grind if I were going for a software engineer position) worth it, or might it be overkill for the questions I would actually get asked?
Any input helps! Thank you.
https://redd.it/yxntyg
@r_devops
Server starts dropping http connections after a certain amount of requests
Hello, I'm not sure if this is the right place to ask such a question but I'm trying to get help somewhere as I'm unable to get this resolved in any other places (tried stack overflow, plesk forums, numerous other forums).
I have two domains set up on our server - let's say usersite.com and api.usersite.com. usersite.com is powered by Nuxt.js, a front-end framework that runs on Node.js. It makes API calls to api.usersite.com, which is a Laravel application. Both of these projects run inside Docker containers. Usersite uses an nginx reverse proxy to reach the API site.
Now to the problem - when there is slightly higher traffic to usersite (200 users per minute), the API site starts to drop connections, immediately resulting in 504s. Perhaps someone could guide me in the right direction as to why this might be happening? I've noticed that the API site's logs show all requests coming from the same IP (the server itself); that means that as requests are proxied they take the proxy server's IP instead of the client IP. So perhaps a self-DDoS is happening, where nginx thinks one IP is flooding it with requests and starts dropping connections? What could be a possible solution for this?
What's weird is that it's not an uncommon practice to have back-end separate from front-end and for them to communicate through API with reverse proxy but I can't find any results regarding such issue that I have on Google...
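If rate limiting is involved, the self-DDoS theory is plausible: with every proxied request arriving from the proxy's own IP, any limit keyed on the client address sees one very busy client. A sketch of the usual fix, with all addresses and zone names as placeholders: forward the client IP from the proxy and restore it on the API vhost (nginx's ngx_http_realip_module):

```nginx
# On the proxying vhost (usersite.com) - pass the real client IP along:
location /api/ {
    proxy_set_header Host            api.usersite.com;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://api_upstream;   # placeholder upstream name
}

# On the API vhost (api.usersite.com) - trust the proxy and restore
# the client address for logs and rate limiting:
set_real_ip_from 172.17.0.0/16;    # the proxy's address range (assumption)
real_ip_header   X-Forwarded-For;

# Any rate limit then keys on the restored client address, not the proxy:
limit_req_zone $binary_remote_addr zone=api_zone:10m rate=30r/s;
```

If no limit_req/limit_conn is configured anywhere, the 504s more likely point at upstream saturation (e.g. PHP-FPM worker exhaustion or proxy timeouts), which is worth checking in parallel.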
https://redd.it/yxmmxr
@r_devops
Developer self-service portal for Kubernetes/Helm
We are working on a tool that allows **developers** to deploy their own services from a catalog, via a simple UI portal. DevOps engineers can create a catalog of deployable apps via templates. Each template can define custom user inputs and one or more services (Helm charts).
[https://github.com/JovianX/Service-Hub](https://github.com/JovianX/Service-Hub) (Please star ⭐ on GitHub if you think it's cool).
This is an alternative to what currently happens in many organizations, where DevOps teams build ad-hoc solutions for developers to deploy on-demand services with Jenkins jobs, scaffolded Git repos with custom actions, and so on.
The tool offers a very simple way to create a Self-Service app deployment on Kubernetes with Helm. The tool creates a self-service UI, with custom user-inputs. The user-inputs can be used as Helm values to allow users to configure some parts of the application.
You can define [templates](https://github.com/JovianX/Service-Hub/blob/main/documentation/templates.md), which construct the catalog you expose to developers. An application template can compose multiple helm charts (for example, an app layer that needs a database, somewhat similar to Helmfile).
Here's a simple **Template** example for creating Redis-as-a-Service:
    # Template reference and documentation at
    # https://github.com/JovianX/Service-Hub/blob/main/documentation/templates.md
    name: my-new-service
    components:
      - name: redis
        type: helm_chart
        chart: bitnami/redis
        version: 17.0.7
        values:
          - db:
              username: {{ inputs.username }}
    inputs:
      - name: username
        type: text
        label: 'User Name'
        default: 'John Connor'
        description: 'Choose a username'
The template creates this Self-Service experience [https://user-images.githubusercontent.com/2787296/198906162-5aaa83df-7a7b-4ec5-b1e0-3a6f455a010e.png](https://user-images.githubusercontent.com/2787296/198906162-5aaa83df-7a7b-4ec5-b1e0-3a6f455a010e.png)
We are gathering **feature requests**, and **user** **feedback**.
I would love to read thoughts and get extremely excited by GitHub **STARS**! ⭐
https://redd.it/yxrhxw
@r_devops
Aliasing of EKS endpoint domain
Hello peeps,
Would aliasing `https://<HASH>.gr7.<region>.eks.amazonaws.com` to a custom CNAME, such as `<myClusterName>.<region>.domain`, to get a predictable endpoint that can in turn be hardcoded in some places, be a bad practice? Any advice against or in favor of this?
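One caveat worth weighing: the EKS API server's TLS certificate is issued for the Amazon hostname, not for a custom CNAME, so TLS-verifying clients will reject the alias unless told which name to expect. For kubectl this can be handled per cluster in kubeconfig; a sketch, with every name and domain below a placeholder:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      # The friendly CNAME (placeholder domain)
      server: https://my-cluster.eu-west-1.example.com
      # Name to verify the certificate against: the real EKS hostname
      tls-server-name: ABC123.gr7.eu-west-1.eks.amazonaws.com
      certificate-authority-data: <base64-CA-bundle>
```

The trade-off is that the real endpoint then leaks into every client config anyway, which arguably undercuts some of the predictability benefit.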
Thank you for your input.
https://redd.it/yxpeo0
@r_devops
Best options for SLA/SLO tracking outside of Datadog
We have very basic needs:
- Monitor uptime of a MongoDB Atlas cluster
- Monitor a few EC2 instances
- Ping a front-end React app
- Ping uptime for a GraphQL API endpoint
That's about it.
I've set this up with Datadog but I'm worried about the cost: not today, but in two years.
Are any other APMs going to be that much cheaper while still doing it all with one account?
https://redd.it/yxo7t0
@r_devops
How do you track/help onboarding to on-call?
When it comes to something like interviewing, ramping someone to run interviews often involves a process of shadowing for a number of times and some level of feedback before you become officially 'ramped'.
When I've led teams before, as a team lead I tracked which incidents people had been involved in, and which services they'd touched. But I never had a proper structure to the onboarding, probably because:
- Incident training often requires participating in real incidents, which can’t be scheduled in advance.
- When one does occur, responders want to focus fully on the incident: they don’t want to be searching for an onboarding spreadsheet, making coordinating onboarding a low priority.
- Incidents are varied, as is the way people participate in them, making it difficult to understand what qualifies as ‘training’.
I wondered if people have had more structure than me on this, and if so what and how are they tracking it?
The context is we're considering building this into our product (incident.io) as a concept of onboarding programmes, where you can say:
> You're ramped to handle SRE incidents once you've shadowed the lead for >3 incidents involving either Postgres, ElasticSearch, etc, and led at least one yourself
And want to know how/if people are doing this already.
https://redd.it/yxnd4o
@r_devops
My mandate is being moved from “DevOps” to “Developer Experience.” Has anyone else made this switch?
Context: Been overseeing the devops for an ecomm company for a little over three years. We brought in a new CTO from a rival startup earlier this year who seems to be way more plugged in to trends in the broader developer community than most of us.
After mentioning “Developer Experience” without much explanation, he’s formally asked me to make it my priority for 2023.
The problem I'm having is that there doesn't even seem to be a crystallized consensus on what "Developer Experience" means.
From my early research it’s everything from building new CI/CD frameworks to “making sure the developers have the muffins they like.”
Hoping to get any insights you might have on best practices as well as what falls under this responsibility so I can start making a plan.
https://redd.it/yxxeen
@r_devops
Branching and deployment strategy for continuous integration
What branching/merging/deployment strategy would you use for a development team of 5 developing a webapp with 10,000 users (not small, not large)?
Currently we have three environments: development, staging, production. Features are developed on feature branches and merged to master, causing an auto-deployment to staging. After smoke testing on staging the developer click-ops to production.
If an issue is discovered on staging, the developer creates a new branch (hotfix) which is merged again to master. There is no way to reverse the feature branch merge to master after the fact.
An added complication: if production ever goes down while the master branch is compromised, the system will auto-deploy the compromised master branch to production.
Also, the development environment is a free-for-all.
There has to be a better approach...
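On "no way to reverse the feature branch merge after the fact": merges can in fact be reverted. Below is a minimal sketch in a throwaway repo (file names and commit messages are illustrative); `git revert -m 1` creates a new commit that undoes the merge while keeping history intact:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > app.txt
git add app.txt && git commit -qm "base"

git checkout -qb feature            # feature work on a branch
echo feature >> app.txt
git commit -qam "feature work"

git checkout -q -                   # back to the mainline branch
git merge -q --no-ff -m "merge feature" feature

# Business delays the feature: revert the merge commit itself.
# -m 1 says "keep the state of the first (mainline) parent".
git revert -m 1 --no-edit HEAD >/dev/null
cat app.txt                         # prints "base" again
```

Note that a later re-merge of the same branch needs the revert itself reverted first, which is one argument for keeping delayed features off the release branch (or behind flags) in the first place.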
https://redd.it/yxzi8d
@r_devops
NGINX / NGINX Ingress / Envoy WAF Comparison
https://www.openappsec.io/post/comparing-nginx-waf-solutions-nginx-app-protect-waf-vs-open-appsec-open-source-ml-based-waf
The article compares the NGINX App Protect signature-based WAF solution and a new open-source initiative called "open-appsec," which builds on machine learning and can be deployed as an add-on to both NGINX and NGINX Ingress, open-source and premium (Plus) versions.
Documentation here: https://docs.openappsec.io/getting-started/start-with-kubernetes
https://redd.it/yy1l00
@r_devops
What is the point of having both a develop and a main branch aiming to be in sync?
I often notice teams have both a develop and a main branch: feature branches are cut from develop, merged back into develop, and then develop is merged into main.
What's the point? Seems like double bookkeeping to me.
https://redd.it/yy2wz7
@r_devops
Uptime for MongoDB Atlas? No luck asking Atlas, and nothing in the Datadog integration
I'm feeling like I'm just getting poor support, and I'm a lazy docs reader, but I can't seem to find any way to easily get the uptime of a MongoDB Atlas cluster.
There is a mongo serverStatus command you can run, but you need to run it on each node, AND it just tells you how long the mongod process has been running, which I'm guessing isn't going to be the same as "uptime for the cluster": when a node is spun up or down, it doesn't necessarily mean we had downtime (from the experience of a MongoDB Atlas cluster consumer/user).
Are people just not measuring SLAs for their DBs lol? How does Atlas measure their own SLA lol
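For what it's worth, one workaround when a provider won't hand over uptime numbers is to measure availability from the consumer side: run a scheduled synthetic probe (e.g. a driver-level ping with a timeout) and compute the SLA from the recorded results. A minimal sketch of the bookkeeping half (the probe itself is stubbed out; all names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Check:
    timestamp: float  # epoch seconds when the probe ran
    ok: bool          # did the ping succeed within its timeout?

def availability(checks: list[Check]) -> float:
    """Fraction of probes that succeeded - a simple client-side SLA proxy."""
    if not checks:
        return 0.0
    return sum(c.ok for c in checks) / len(checks)

# Example: 1 failed probe out of 4 -> 75% measured availability
history = [Check(0, True), Check(60, True), Check(120, False), Check(180, True)]
print(f"{availability(history):.2%}")  # 75.00%
```

This measures what your users actually experience during elections and node replacements, which is closer to "uptime for the cluster" than mongod process uptime.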
https://redd.it/yy55ra
@r_devops
NPM version in container environments
I’ve recently begun a new job and found something interesting.
I've noticed this pattern where SWEs make commits simply to bump their package.json version. This of course triggers a new build on their default branch. Then, of course, the thing they are applying a git tag to isn't the image that was tested in a lower environment. (We do at least properly promote, so there's no rebuild on tags.)
So I'm curious: how do you handle apps that are npm apps but are really REST APIs rather than published packages? In the past I've just always set the package.json version to 0.0.0 and disregarded it, as I prefer the git tags/image tags as the source of truth. For npm packages, of course, the typical process is used.
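One way to keep git as the single source of truth for services (a sketch; the helper name and registry are made up for illustration) is to ignore package.json entirely and derive the image tag from `git describe`:

```shell
# Illustrative helper: turn `git describe --tags` output into an image tag.
# "v1.4.2" -> "1.4.2"; "v1.4.2-5-gabc1234" -> "1.4.2-5-gabc1234"
derive_tag() {
    printf '%s\n' "$1" | sed 's/^v//'
}

echo "$(derive_tag v1.4.2-5-gabc1234)"   # prints 1.4.2-5-gabc1234
# In CI this might feed a build like (not executed here):
#   docker build -t "registry.example.com/api:$(derive_tag "$(git describe --tags)")" .
```

Tags containing a `-g<sha>` suffix then identify untagged commits unambiguously, so version-bump-only commits become unnecessary.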
https://redd.it/yy85hl
@r_devops
How do you yaml
A?:
    accessModes:
      - ReadWriteOnce
or
B?:
    accessModes:
    - ReadWriteOnce
Personally, I can't even with B. I don't know if it's some sort of chemical imbalance in my brain but I get ultra confused if I see yamls structured this way.
I want to know if I'm the only one or not. No explanation necessary. You do you.
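For what it's worth, both spellings are valid YAML and parse to the same structure: the spec allows the `-` of a block sequence to sit at the same indentation as its parent key, because the dash itself counts as indentation. Side by side:

```yaml
# Style A: sequence indented under the key
accessModes:
  - ReadWriteOnce
---
# Style B: dashes at the key's indentation -- parses identically
accessModes:
- ReadWriteOnce
```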
https://redd.it/yya8p7
@r_devops
Logic Apps & Workflow Configuration Import into an Azure DevOps CI/CD Pipeline
In my Azure test lab, I currently have a Landing Zone deployed in Terraform via a CI/CD pipeline in Azure DevOps.
I would like to deploy an Azure Logic App; however, I have an existing Logic App workflow config I'd like to import into it as part of the CI/CD build process (perhaps via a task in the build pipeline?). Going forward, the workflow configs should be managed as part of the build pipeline, with the config files hosted in an Azure Repo.
My question is: has anyone done this before, and if so, what is the best way to go about it? I've spent some time on it but can't find an efficient approach.
TIA :)
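One approach I've seen sketched for Standard Logic Apps: keep the workflow JSON in the repo, zip it in the pipeline, and push it with a zip deploy. This is only a sketch; the service-connection name, resource names, and paths are assumptions, and `az logicapp` requires the Azure CLI logicapp extension:

```yaml
# azure-pipelines.yml fragment -- hypothetical names throughout
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      cd workflows && zip -r $(Build.ArtifactStagingDirectory)/workflows.zip .
      az logicapp deployment source config-zip \
        --name my-logicapp \
        --resource-group my-rg \
        --src $(Build.ArtifactStagingDirectory)/workflows.zip
```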
https://redd.it/yy2598
@r_devops
Packer + QEMU for Ubuntu 22.04.1 ARM64 ISO
Has anyone ever tried creating a custom VM image using Packer + qemu-system-aarch64 with the dedicated Ubuntu 22.04.1 ARM64 ISO image?
I have extensive experience with QEMU and Packer, especially for AMD64, and have templates that can boot images using UEFI for x86. Ubuntu does publish an ISO for ARM, but AFAIK a lot of ARM-based devices don't have a UEFI implementation, and bootloading is not the same as with OVMF.
During a deep dive I found a very nice post from Canonical's MAAS team about creating images, which provides a fantastic Packer template that is ARM64/AMD64 interchangeable; the only caveat is that it uses Ubuntu's cloud images.
I wanted to try the live-server image for ARM64 instead.
Can any expert in golden image creation tell me whether this is possible?
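In principle the Packer QEMU builder can be pointed at the arm64 live-server ISO with an aarch64 UEFI firmware (AAVMF/edk2 rather than OVMF). A sketch only; option names are from the Packer QEMU plugin as I recall them, and the firmware path, ISO URL, and checksum are placeholders to verify against the plugin docs:

```hcl
# Hypothetical sketch of a qemu source for an arm64 build.
source "qemu" "ubuntu_arm64" {
  qemu_binary  = "qemu-system-aarch64"
  machine_type = "virt"
  firmware     = "/usr/share/AAVMF/AAVMF_CODE.fd" # aarch64 UEFI firmware
  iso_url      = "https://cdimage.ubuntu.com/releases/22.04/release/ubuntu-22.04.1-live-server-arm64.iso"
  iso_checksum = "sha256:REPLACE_ME" # placeholder: supply the real checksum
  ssh_username = "ubuntu"
  ssh_password = "ubuntu"
}

build {
  sources = ["source.qemu.ubuntu_arm64"]
}
```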
https://redd.it/yy0d5p
@r_devops
Is it possible to implement a password policy at the OS level on EC2 Ubuntu?
The PCI-DSS compliance team suggested we implement a password policy on our EC2 Ubuntu servers. They provided this link to follow: https://linuxhint.com/secure_password_policies_ubuntu/
But it's not working even though I followed it exactly: I can still set arbitrary passwords for new users. What could the issue be? Does EC2 actually allow enforcing this at the OS level?
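If the linked guide was followed with pam_pwquality, one common gotcha worth checking (an assumption about the setup, not a certainty): quality checks are not enforced when root sets a password unless `enforce_for_root` is set, and creating users via `sudo adduser` counts as root. A sketch of /etc/security/pwquality.conf with example values:

```ini
# /etc/security/pwquality.conf -- sketch; values are examples only
minlen = 14          # minimum password length
dcredit = -1         # require at least one digit
ucredit = -1         # require at least one uppercase letter
lcredit = -1         # require at least one lowercase letter
ocredit = -1         # require at least one special character
enforce_for_root     # also reject weak passwords set by root
```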
https://redd.it/yyf10y
@r_devops
Is there such a thing as "encrypting" a repo hosted on GitHub?
Hello,
I was asked to look into encrypting a github repo hosted on github.com. I understand that all data on github's infra is encrypted since they have all their SOC compliance. Has anyone heard of this before? I'm aware of tools to encrypt individual files but not an entire repo...
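Not exactly whole-repo encryption, but git-crypt gets close: it transparently encrypts matched files on commit and decrypts them on checkout for holders of the key. A sketch of a .gitattributes that encrypts everything except the git metadata files (pattern taken from git-crypt's documented "encrypt everything" recipe; verify before relying on it):

```
# .gitattributes -- sketch: encrypt every file except git metadata
* filter=git-crypt diff=git-crypt
.gitattributes !filter !diff
.gitignore !filter !diff
```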
https://redd.it/yxz8gk
@r_devops
I get this error when I commit to a CircleCI project I just made
no configuration was found in your project. please refer to https://circleci.com/docs/2.0/ to get started with your configuration.
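That error usually means the pushed branch has no `.circleci/config.yml`. A minimal sketch that should produce a green build (the Docker image name is just an example convenience image):

```yaml
# .circleci/config.yml -- minimal sketch
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/base:stable   # example CircleCI convenience image
    steps:
      - checkout
      - run: echo "pipeline is wired up"
workflows:
  main:
    jobs:
      - build
```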
https://redd.it/yxxv33
@r_devops
Was learning Go hard for you?
I spent all week trying to put together a Lambda function for which AWS already provides the code, but in Python. I learned Python on my own and figured learning Go would be easy, but it's a totally different beast.
https://redd.it/yyjmks
@r_devops
Deep Dive in 5 minutes: How a pod is created?
https://www.youtube.com/watch?v=vv8aT1OdBw4
https://redd.it/yyje96
@r_devops
Deep dive into the pod creation process in Kubernetes:
1 - The creation process at a high level
2 - Scheduling
3 - Infrastructure creation
4 - Container creation/running
5 - Container readiness