Anyone set up azure devops to link to Jira?
As the title says, I'm currently looking into the different plugins and apps that would let me link Jira to code sitting in ADO.
https://redd.it/13ib4em
@r_devops
Posted by u/UsedMood2 - No votes and no comments
I wrote an article about AWS MSK with external Kafka connect and schema registry.
Hello all, I'm working as a junior DevOps engineer. I wrote an article about connecting to AWS MSK from Kafka Connect and schema registry. Please share your views.
Also, I'm trying to connect an MSK connector to AWS Keyspaces. It's asking for a truststore location, and I don't know how to pass the file to the MSK connector or what path to give. If you have any ideas, please help me.
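For context on what connectors typically expect here (a hedged sketch, not verified against MSK Connect specifics; the exact keys depend on the connector): AWS Keyspaces requires TLS with its root certificate imported into a JKS truststore, and Cassandra-style Kafka Connect sinks usually take the truststore path and password in the connector config. With MSK Connect, one common approach is to bundle the truststore file inside the custom plugin archive and reference it by path, e.g.:

```properties
# Hypothetical connector properties; key names vary by connector
ssl.truststore.location=/path/inside/plugin/cassandra_truststore.jks
ssl.truststore.password=changeit
```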
https://link.medium.com/KqnZXJUbPzb
Thank you for your time.
https://redd.it/13ic4k5
@r_devops
Re: the coding post
/u/Nimda_lel basically put what I said 6 months ago into a more politically correct post. Great post, nimda! People were salty at mine. Tl;dr: there are two tracks to “DevOps”. I’d recommend coming from the dev side, and if you don’t, you should learn how to code. I would say scripting is probably not enough. Knowing how to work on and navigate an application codebase and implement composable, reusable code is a big part of actually knowing how to code. Don’t be a no-coder; you will soon be automated away by an AWS abstraction. Good luck.
https://reddit.com/r/devops/comments/xrkdbn/devops_is_for_people_who_cant_code/
https://redd.it/13iefoe
@r_devops
Posted by u/findmeatikea - No votes and 64 comments
How valuable is home lab automation when applying for Devops?
I've integrated several services at home and learnt a great deal messing around with things such as Prometheus, Grafana, Jenkins, Loki, Uptime Kuma, Pi-hole, OpenDNS, and containers.
I've taken full Cisco CCNA courses online but didn't sit the certification exam because of the cost. Currently learning about AWS and Kubernetes.
I barely use any of these at work as I work as a lab scientist, but I really want to get into Devops.
I'm in the UK and I feel like when I search for Junior Devops jobs they all require you to have worked in the industry or production environment. Will I even get through to the interview process if all I'm saying is that I have experience from playing with these services at home?
https://redd.it/13ie31y
@r_devops
Posted by u/ReverendRou - No votes and 4 comments
Mac VMs with GUI for ui-tests
I believe this is the best sub to ask this in, since my google searches showed me some past results on this sub.
I run a GitHub Action which runs some UI tests, both native and web, on a headful (with GUI) macOS instance. The instances need a GUI so the frameworks I use can access accessibility features (native).
I was using Hetzner's dedicated Mac servers service until I found out (today) they're no longer supported or offered by them. I believe they used the term end-of-life. They were closer to baremetal, anyway.
I'm looking for a cloud-based provider for macOS VMs, since I want this to be scalable in the future. Other things that came to mind:
* decent display resolution (not the AWS fixed to 1024x768 on m1 instances [crap](https://repost.aws/questions/QUQQLxZOjpT52SOL7ZvskA5w/questions/QUQQLxZOjpT52SOL7ZvskA5w/macos-ec2-instance-screen-sharing-display-resolution))
* preferably VMs, not baremetal, since I want to spin them up via API, maybe snapshotting for ease of provisioning
* API, of course
* preferably static IPs
* preferably non-block-storage, to avoid IOPS issues caused by disk intensive ops from other instances on the same compute
* preferably a way to back up the system to allow for a scratch install using the same resource
What I tried/went through and don't think is a solution:
* AWS: fixed display size on M1 and huge costs (you basically need 2 dedicated hosts minimum because of the long spin-up times, 2+ hours in some cases; 2 dedicated VMs with 90%+ uptime run above $10k/year)
* Scaleway: their 1 machine per availability zone limit is weird and they only have the small 8 GB RAM flavour; haven't tried it tho;
* Hetzner: no longer offering this service
* Github Mac: no GUI, it's basically a build machine
* Azure: couldn't find anything, I'm guessing they merged with GitHub
* Macstadium: you basically rent mac minis, same as hetzner, but with worse customer support according to reddit
* Oakhost: no info on this, anywhere, but they limit traffic to 10TB which might burn faster than expected
* Macincloud: no info on this, just that they offer what others offer, but charge a bit more; doesn't seem to have API access
* MacWeb: same as MacInCloud, no API
Anyone else hit this?
Any suggestions, pointers would be highly appreciated.
Thanks in advance.
Apologies if this is not the correct sub.
https://redd.it/13ih8t9
@r_devops
https://redd.it/13ih8t9
@r_devops
Best DevOps courses in Pluralsight
I usually rely on Udemy for anything related to microservices topics, but now I have access to Pluralsight.
What are the best courses Pluralsight can offer within the DevOps ecosystem?
https://redd.it/13ih9pv
@r_devops
Posted by u/vikramty - No votes and no comments
SaaS-based SAST tool for enterprise code quality scanning?
We currently use SonarQube and are seeking alternatives. Cost is not a concern as we would like to evaluate all of the best possible enterprise-level tools on the market. One of our InfoSec requirements is that the tool supports SSO natively (otherwise we would consider something like SonarCloud). Our developer requirements are that the tool have good code coverage scanning capabilities and can integrate into CI/CD pipelines in Azure DevOps and GitHub.
A few of our developers have experience with Snyk Code and have recommended we evaluate this. I've also scoured Reddit for some alternatives and seems like Checkmarx might have a platform worth evaluating. Are there others we should be looking to evaluate?
https://redd.it/13ig8bz
@r_devops
Posted by u/AMercifulHello - 1 vote and 1 comment
How are companies distributing their workloads in a multi-cloud architecture?
Hi, I am a grad student interested to work on a devops project. I am interested in knowing how companies distribute their workloads in a multi-cloud setting. The way I am categorizing it as of now is as follows:
1. Run orthogonal workloads (business-wise) such as say all ML training workloads on GCP and OLTP workloads on AWS?
2. Take a more fine-grained approach, such as two active-active replicas that require strong consistency running on two different clouds? Note this strategy requires a high-availability guarantee.
A follow-up question is where you see multi-cloud going: towards #1 or #2? Also, do you know how control-plane management such as etcd is being done in multi-cloud today? Are there multi-cloud control-plane coordination systems analogous to ZooKeeper? Or do you see value in one?
https://redd.it/13ilktq
@r_devops
Posted by u/Positive-Action-7096 - No votes and 1 comment
What’s an alternative to Amplication? I’m using Refine for FrontEnd Nextjs Supabase
What’s an alternative to amplication.com?
I'm creating my frontend using Refine, and it works great.
On the other hand, the Amplication docs are wrong, reference wrong and missing packages, and mislabel directories, and the support is very snooty.
What is an alternative to Amplication?
https://redd.it/13iomv4
@r_devops
Posted by u/Codeeveryday123 - No votes and 1 comment
What are Devops Contractors charging in 2023?
Hi everyone,
I’m contracting but I feel like I’m short-selling myself. I may have an opportunity to land a new client, but I’ve been trying to figure out what the average hourly rate is. I’ve seen $100-$150/hr on a post from a few years back. Do skills and certs matter? Is there any rhyme or reason to determining what your skills are worth per hour?
https://redd.it/13ir3yr
@r_devops
Posted by u/Minute_Box6650 - No votes and no comments
New Grad, landed a DevOps job
So I just graduated last month with a Software Engineering degree, which I did reasonably well in. I managed to land a DevOps role at a relatively small startup, and after finishing the first week of work I feel heavily overwhelmed. I feel like if I can't catch up within a week I'll be left behind, but I'm wondering if that is even possible.
If anyone has resources or tips on how to make sure I can see my days through, I would love to take them and work towards it. Any other advice you would kindly share would also be greatly appreciated.
https://redd.it/13israw
@r_devops
Posted by u/beardedcaplfc - No votes and 10 comments
Terraform | Take your Terraform skills to the next level!
Techniques for scalable and efficient infrastructure management -
The Ultimate Guide to Advanced Terraform Techniques for DevOps
https://medium.com/faun/the-ultimate-guide-to-advanced-terraform-techniques-for-devops-b202b6845170
https://redd.it/13iu19l
@r_devops
FIPS support for Kubernetes deployment
So our applications failed to start on Ubuntu Pro, which has FIPS enabled. These apps are deployed as pods in the k8s cluster. We use a GitOps approach: pull changes from SCM, build Docker images with Jenkins, and deploy to the cluster with ArgoCD. Anyway, how can I fix this?
https://redd.it/13ivvpa
@r_devops
Posted by u/ncubez - No votes and no comments
Difference between Redis cache server and a CDN?
Aren't both the same thing? What's the difference between them?
https://stackoverflow.com/questions/63409344/difference-between-azure-reddis-cache-and-azure-cdn
I've read this post.
https://redd.it/13iwlj4
@r_devops
New gig, rough in-place ops. Biz buy-in for an overhaul, want some advice, technical and managing human interactions.
Heya, survived the great 2022 layoffs with a new not-startup gig. They set up AWS like 9 years ago with some folks that barely cobbled things together, those folks left, and somehow the business has been generating enough value to hire me to help bring them to "the next level".
Yes, requirements dictate what we build. Given a general, greenfield application, where we're porting logic and integrating with queues, what would be a good, maintainable approach? Language agnostic, we can figure out that part later.
CI/CD - Best to keep with git provider?
Anything about logging/monitoring/debugging especially. My past gigs had paid-for tools (Datadog, Sentry, New Relic); I'm not sure what's good, especially around anomaly detection.
Interpersonally, I feel it may be challenging. Their contributor role is locked down tighter than a steel trap, I can't even list resources, much less access cloud shell, and getting those permissions changed is corporately burdensome. I feel the same weight will be applied when trying to spin up some isolated ad-hoc services. How to navigate?
I have buy-in from my boss and all the bosses up the chain to mess shit up, they know it's already broken. ( mess shit up, like feel free to step on all the toes, they would not like me breaking production / users / money ).
Edit: Let's avoid Kubernetes for now. IaC, data buses and service discovery would also be useful to know the current thinking around.
https://redd.it/13ixfww
@r_devops
Posted by u/Someoneoldbutnew - No votes and 1 comment
How to utilise my skills in my current company and also stay not to forget what I learned?
I am a so-called junior AWS DevOps engineer at an early-stage startup.
As a DevOps engineer in a small startup using AWS for our applications, our main objective is to manage our budget effectively. Currently, we are running only five EC2 instances with two to three applications on each. While I understand that as a DevOps person I should be using a variety of tools including Jenkins, Ansible, Terraform, Docker, and Kubernetes, I am currently only able to use Jenkins and CodePipeline due to our limited infrastructure, and maybe write bash scripts sometimes. With only five servers running different applications, it may not be necessary to implement Ansible, as it is a configuration management tool. Additionally, ECS and EKS are costly and not feasible for our needs, and we are unable to run Kubernetes on the EC2 instances themselves as it would require a minimum of 2 CPUs, increasing our costs. Without Kubernetes, Docker may not be suitable for our case. As for Terraform, we believe that using the console is sufficient for our five servers. However, I am open to suggestions and ideas on how to best utilise these tools within our current infrastructure limitations. I want to utilise my skills and apply whatever I learned at my company, because I learned all the tools I've mentioned but haven't had any chance to use them.
I fear that I may forget these if I don't stay in touch with them daily. Doing personal projects seems like a good idea, but how long can I keep that up? Is it easy to forget tools you've learned if you don't use them occasionally? I need your advice and suggestions.
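On the Terraform point, for a sense of scale (a hedged sketch; the names, AMI, and instance type are placeholders, not the poster's setup), five console-managed instances translate to only a few lines of HCL, which also doubles as practice with the tool:

```hcl
# Hypothetical: five app servers managed as code instead of via the console
variable "instance_names" {
  type    = list(string)
  default = ["app-1", "app-2", "app-3", "app-4", "app-5"]
}

resource "aws_instance" "app" {
  for_each      = toset(var.instance_names)
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.small"
  tags          = { Name = each.key }
}
```

Importing the existing instances with `terraform import` would bring the current servers under management without recreating them.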
https://redd.it/13iz5vf
@r_devops
Posted by u/Neither_Wallaby_9033 - No votes and no comments
How to handle major version bumps when using a fully automated CI/CD pipeline? (SemVer)
I have some open-source apps that use various tooling for SemVer based on conventional commits, such as Commitizen, Cocogitto and standard-version. These tools changed based on project needs and the time when I created them, but all of them have the same issue that I'm not sure how to address:
When I want to bump a major version, say the app is ready for release from 0.x to 1.x, how can I get these tools to do that instead of their regular bumping strategy of using feat commits for minor and fix commits for patch releases? Cocogitto has the --major flag, but I'm not sure what kind of rules could be used in my CI/CD pipeline (GitHub Actions/Drone) to use that flag instead of the automatic bumping strategy.
Or should I just manually run a major release and push the tag to Git? Then of course I have to make sure to include a [SKIP CI] in the commit message to avoid running the pipelines, skipping all the automated release steps like the changelog and Docker image, which isn't ideal either.
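One pattern that fits this (a hedged sketch, not the poster's actual pipeline; the workflow name and steps are illustrative): keep the automatic bump for pushes, and put the major bump behind a manually triggered GitHub Actions workflow, so the 0.x-to-1.0 decision becomes a one-click action instead of a special commit:

```yaml
# Hypothetical workflow -- names and steps are illustrative
name: major-release
on:
  workflow_dispatch:   # manual trigger for the 0.x -> 1.0 decision

jobs:
  bump:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0   # cog needs full history and tags
      - name: Bump major version and push tag
        run: |
          cargo install cocogitto
          cog bump --major
          git push --follow-tags
```

Drone has a similar manual trigger (promotions) that could gate the same command.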
https://redd.it/13j0781
@r_devops
Posted by u/Dan6erbond2 - No votes and no comments
How do you create your Secret Key
We use AWS Secrets Manager. I created about 20 keys manually, but we have a lot more. How do you create your keys?
I don't want to push all the keys to GitHub and then deploy them with Terraform.
But how do you create your keys if you have a lot of them?
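One approach (a hedged sketch with hypothetical key names, assuming boto3 and AWS credentials in the environment): generate random values locally and create them directly in Secrets Manager via the API, so neither the values nor the key list ever lands in GitHub or Terraform state:

```python
import secrets


def generate_secret_value(nbytes: int = 32) -> str:
    """Generate a random URL-safe secret value locally."""
    return secrets.token_urlsafe(nbytes)


def push_secrets(names, region="eu-west-1"):
    """Create one Secrets Manager entry per name with a fresh random value.

    Requires boto3 and AWS credentials in the environment; the key names
    used here are hypothetical.
    """
    import boto3  # imported lazily so generation works without AWS deps

    client = boto3.client("secretsmanager", region_name=region)
    for name in names:
        client.create_secret(Name=name, SecretString=generate_secret_value())


# Usage (not run here): push_secrets(["app/prod/db-password", "app/prod/api-key"])
```

Scripting it this way also makes rotation straightforward later, via `put_secret_value` on the same names.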
https://redd.it/13izsv6
@r_devops
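One option (a sketch, assuming boto3 and a local, git-ignored secrets.json; the file name is made up): keep the secret values in a file that never enters the repo and loop over it against the Secrets Manager API, so Terraform only manages references to the secrets, never their values.

```python
import json

def create_secrets(secrets, client=None, dry_run=True):
    """Create many Secrets Manager entries from a dict of name -> value.

    With dry_run=True (or no client) nothing is sent to AWS; the function
    just returns the names it would create, which makes it easy to review.
    """
    created = []
    for name, value in secrets.items():
        if not dry_run and client is not None:
            # Real call: client = boto3.client("secretsmanager")
            client.create_secret(Name=name, SecretString=value)
        created.append(name)
    return created

# secrets.json stays on your machine (add it to .gitignore):
# with open("secrets.json") as f:
#     create_secrets(json.load(f), client, dry_run=False)
```

The same loop works for rotation or bulk updates by swapping create_secret for put_secret_value.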
Why I created a new build system based on Alpine Linux
PAKman is one of the four core modules that power instellar.app. It's open source and uses GitHub Actions to build your application into Alpine packages, which are delivered to an S3-compatible bucket you specify via Instellar. Our platform then takes that built package and deploys the application on your infrastructure.
You can continue reading or enjoy the full post with images here
## In the beginning
Back in 2018 I looked at using Docker before I embarked on the journey to build my own build system. By that point I had been using Docker for a long time, as an early adopter, and these were the issues I constantly ran into:
Large build artifacts (hundreds of MB)
Needed a registry
Consumed bandwidth
Slow deployments
At first I considered just using Docker because it was the 'standard'. Everyone was using Docker, Docker Swarm was in its heyday, and k8s was gaining steam. Most Docker images were built with Ubuntu as the base image, so as you can imagine the built images were quite large. Alpine Linux was gaining popularity and starting to be used in the Docker community to reduce image size. I often wondered why the community didn't just build with Alpine's native build system, so I tried it for myself. It took me a long time to work through the Alpine build system; the documentation was scarce and I had to trial-and-error my way to understanding it. My little experiment made me realize that while the final output was amazing (built packages ranged from a few MB to 50 MB depending on the application), the system was extremely complex to use. I figured most people probably ended up using Docker for its simplicity and readily available documentation.
I ended up mastering Alpine's package system and threw together some scripts to automate building Alpine packages. There was, however, one problem: this meant not using Docker to run the applications. With Docker you build the app into an image and run the entire image. You wouldn't just install a custom package in a Docker container, because then the image would need a package manager, making the final image even larger. This is where the concept of Docker being an 'application' container hit hard.
I also explored Kubernetes to see what it could do and figured it was way too complex for most deployments. The conclusion I came to was that k8s and Docker go together; if I wanted to use my Alpine package build method, I would need something else.
## Enter LXD
While doing my research I found LXD. It advertised itself as a 'system' container, meaning an LXC container runs the entire OS, including the package manager. This was exactly what I was looking for and fit my build system like peas in a pod. With LXD containers, all I had to do was expose the Alpine package on a file system, add it as a repository inside the Alpine Linux container, and run apk update && apk add [package]. I hacked together a proof of concept with bash and Terraform and, amazingly, it worked! I was able to just build my app and ship it to my LXC container, and it was blazingly fast: apps were deployed in a matter of seconds! Upgrades were also handled by Alpine packages by adding the -u flag, and were even faster than installing a fresh package.
## A new Invention is needed!
While my proof of concept worked, it was far from ready for primetime. I needed something robust, written in a language I'm familiar with (Elixir), and, most importantly, something that worked with existing infrastructure I didn't have to host. The first versions of PAKman were hacked together from bash scripts that built Packer images inside a custom GitLab runner. While it worked, it was not elegant and not flexible. When GitHub Actions was released in 2018, I explored it and realized I could create my own custom action inside a Docker container, which meant I could use whatever programming language I wanted for the build system.
I realized that I needed a simple solution for people; telling everyone to 'just use Alpine's build system' would not work. I had an idea that I could essentially simplify everything down to a .yml file. I needed to develop an intermediary layer that would take the YAML file and convert it into the files the APKBUILD system for Alpine Linux understands. This was the birth of PAKman.
## Project Goal
While I still needed a Docker container to create the final build, since that's how GitHub Actions works, I realized I could simply extract the artifact and ship it to S3-compatible storage. This was the simplest design: once the package was built, I could install and run it anywhere Alpine Linux ran. This would achieve the following goals:
No need for custom infrastructure for building
Packages as small as possible
Save on bandwidth costs
Fast deployments (a matter of seconds)
While many may challenge my decision to save bandwidth, I have my reasons. I believe if something can be done well, it should be done. In the big picture the goals of PAKman serve our mission for instellar.app. Instellar enables anyone to run their own PaaS on their own infrastructure, which means it's important for us to keep the cost of ownership low. If we can save on bandwidth costs for our customers, it's our duty to do it. Another valuable asset we save is time: small packages mean deployments are fast! The update to the blog you are reading now was deployed in 6 seconds. You can see PAKman in action.
The final built artifact that gets shipped over the wire for this NextJS blog weighs in at 5.69 MB.
Welcome to the future!
https://redd.it/13j2jp2
@r_devops
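The intermediary layer the post describes, a small config expanded into the files Alpine's build system expects, can be sketched roughly like this (the field names and output are illustrative, not PAKman's actual schema or a complete APKBUILD):

```python
def render_apkbuild(spec):
    """Render a minimal APKBUILD-style file from a small dict standing in
    for the .yml the post describes. Fields here are illustrative only."""
    lines = [
        f'pkgname={spec["name"]}',
        f'pkgver={spec["version"]}',
        'pkgrel=0',
        f'pkgdesc="{spec.get("description", spec["name"])}"',
        'arch="x86_64"',
        f'depends="{" ".join(spec.get("depends", []))}"',
        'build() {',
        # Each build step from the config becomes a line in the build() function.
        *[f'\t{cmd}' for cmd in spec.get("build", [])],
        '}',
    ]
    return "\n".join(lines)

spec = {
    "name": "myapp",
    "version": "1.0.0",
    "depends": ["nodejs"],
    "build": ["npm ci", "npm run build"],
}
print(render_apkbuild(spec))
```

A real generator would also have to emit checksums, install scripts, and the other APKBUILD fields abuild requires; the point is only that a flat config can be mechanically expanded into Alpine's native format.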
Basic Kubernetes Interview Questions We Should Know as a DevOps
Kubernetes Interview Questions For DevOps Opportunities -
https://medium.com/@inkinsight/cracking-the-code-on-advanced-kubernetes-interview-questions-65f99359bfd9
https://redd.it/13j39jq
@r_devops