Can someone who uses GitHub Actions chime in for a multi-region cloud deployment?
How are you running GitHub Actions for your cloud multi-region app deployments? Are you doing your builds in one region, pushing the image to another region, and then deploying to your compute across multiple regions? I want to understand how everyone is deploying to different regions and how I should structure my GitHub workflows. I want to know what works best and what are some things I need to be on the lookout for. Thanks for the help!
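One common pattern (sketched here with placeholder account IDs and regions, not taken from the thread) is to build the image once in the workflow and replicate the same tag to a per-region registry before each regional deploy:

```shell
# Hypothetical sketch: build once, push the identical image to per-region ECR
# registries, then deploy regionally. ACCOUNT and the region list are placeholders.
ACCOUNT=123456789012
TAG=$(git rev-parse --short HEAD)

docker build -t myapp:"$TAG" .

for REGION in us-east-1 eu-west-1 ap-southeast-2; do
  REGISTRY="$ACCOUNT.dkr.ecr.$REGION.amazonaws.com"
  # Authenticate docker against this region's ECR registry
  aws ecr get-login-password --region "$REGION" \
    | docker login --username AWS --password-stdin "$REGISTRY"
  docker tag myapp:"$TAG" "$REGISTRY/myapp:$TAG"
  docker push "$REGISTRY/myapp:$TAG"
done
```

If you are on ECR specifically, it is worth checking its built-in cross-Region replication rules, which let you push once and have the registry copy the image to other regions for you.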
https://redd.it/1i3seze
@r_devops
k8s on EC2 with EFS as storage backend issue
Hi everyone,
It's me again, a Kubernetes newbie working on EC2. I’m planning to use the [**aws-efs-csi-driver**](https://github.com/kubernetes-sigs/aws-efs-csi-driver) to connect **EFS** as my storage backend, but I’m having a hard time fully understanding how it works in this setup.
For on-prem Kubernetes clusters running on VMs, I could easily set up a standard NFS server or a cluster NFS and provide access to it, making it simple to create a Persistent Volume Claim (PVC) for use by pods. However, with EC2-based clusters, I'm not sure if the process is the same, especially in terms of integrating with EFS.
# Current Setup:
* I have a cluster with 3 master nodes and 1 worker node.
* My subnet is small and currently restricted, so I don't have the option of using **EKS** (Amazon Elastic Kubernetes Service) at this point.
* The control plane HA endpoint seems to work fine, with the **NLB (Network Load Balancer)** set up, so even if one node goes down, things continue to function properly.
# Questions on EFS and IAM:
I understand from the GitHub page that the IAM role/instance profile is required to grant permissions for accessing EFS, but I'm unsure about how this works in an EC2-based setup. Is this specifically for EKS mode, or do I need to configure this IAM role even in my EC2-based Kubernetes cluster?
# EFS Testing and Helm Issue:
I have successfully mounted the EFS root to the `/mnt/test` directory on my worker node and verified that it works. However, I haven’t yet tried using an EFS Access Point—would that be the recommended approach for Kubernetes?
I also installed the EFS driver using Helm, but it keeps going into a `CrashLoopBackOff` state (1/3). I suspect this is why my PVC remains in the `Pending` state, even though the StorageClass for EFS seems to be set up correctly.
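For a `CrashLoopBackOff` like this, the driver's own events and logs usually point at the cause (commonly missing IAM permissions or unreachable EFS mount targets / security groups). A generic inspection sequence, with pod and container names as placeholders to fill in from your cluster:

```shell
# Find the EFS CSI driver pods (assuming the Helm chart installed into kube-system)
kubectl get pods -n kube-system | grep efs

# Events at the bottom of describe usually name the failing container and reason
kubectl describe pod -n kube-system <efs-csi-controller-pod>

# Logs of the crashing container from its previous run
kubectl logs -n kube-system <efs-csi-controller-pod> -c <crashing-container> --previous

# And why the PVC is stuck Pending (often "waiting for a volume to be created")
kubectl describe pvc <your-pvc>
</imports>
```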
# Best Practices and Performance:
Has anyone here worked with EFS as a backend storage in a similar setup? How was your experience, particularly in terms of performance? Any tips or best practices you could share would be greatly appreciated!
Thanks in advance for your help!
https://redd.it/1i3ukvi
@r_devops
vueframe V3 is here !!!
Hey guys, I've officially released V3 of vueframe, adding a bunch of quality-of-life improvements along with a cleaner and more consistent codebase.
What is vueframe
vueframe is a Vue 3 component library that lets you easily import media embed components from platforms such as YouTube and Vimeo into your projects.
Here's a GitHub link to the project if you wish to check it out; a star would be amazing :)
https://redd.it/1i3tlcf
@r_devops
GitHub - vueframe/vueframe: High performance, rich media embed components. For your site, built using Vue.
Had an interview today where I was asked 7mins prior via text if could reschedule for a Saturday afternoon lol
Need to preface that this was for a tech lead position for an SRE team.
I didn't end up replying and joined the Zoom call anyway. Only one guy was on the interview. The other guy clearly couldn't make it - ok, but Saturday? lol
Things got off to an awkward start. "Tell me about yourself"
"Uhh ok... but who the hell are you?" (I thought to myself) Guy clearly did not want to be there.
Gave my usual 5min high-level intro.. figured they'd go deeper into some of my resume items, but nope!
This guy proceeded to ask me random Linux terminal commands on how to use sed, netstat etc... not really related to the position I applied for. Stumbled my way through it while bewildered the entire time. I haven't been "in the weeds" in a while since my team usually does these things.
The interview continued with random questions I feel an entry-level candidate would be asked - "What port is HTTP / HTTPS?" ummm urrr ok..?
I wanted to talk about my experience in system design and architecture/replatforming (which is clearly on my resume) but nothing in that regard.
The hour was up, he left zero time for questions and said they'd get back to me.. ok, thanks for your time!
I feel like they wasted my time, to be honest. Almost like they were forced to give me an interview for some metrics of some sort. I'm tempted to reach out to the recruiter to provide feedback on this whole ordeal.
https://redd.it/1i3x00f
@r_devops
Looking for Experts in DevOps and CI/CD Implementation (in Auckland preferably)
I recently invested in a small SaaS startup, and we’re looking to speed up our delivery process while maintaining high-quality standards. We need DevOps talent that can work on new features I have planned. Please recommend agencies and talent for this, in Auckland preferably. No big enterprises pls.
https://redd.it/1i3yen8
@r_devops
Should I go for it?
Am gonna do a second interview with Brooksource this weekend; the role is "jr platform engineer." The recruiter says it involves stuff like AWS, cloud, migrating the older system, etc. Pay is $32 an hour. The only thing bugging me is it's "contract 12m to hire" if the company (53 bank) likes my work. I'm a bit torn: I currently work as a systems analyst for another financial company, but it involves working on the mainframe and analyzing stuff in TSO. I of course want to move away from this and into cloud, and this is a potential chance for me. Idk what to do, should I go for it? (I'm a Dec 2023 CS grad btw)
https://redd.it/1i3y6j5
@r_devops
Pypi package security
We are a very small team (in a large organisation that had no development team) that’s relied on pip installs from the Internet. To meet cybersecurity requirements around Python packages, we now need to “only use packages from scanned and safe sources”, without a direct connection to PyPI. Is Anaconda the only choice? It is $50/month/seat. I’ve seen JFrog mentioned, but I’m not sure if it is as easy to implement as Anaconda. Or are there other options? Thanks for your advice!
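For context on what "another option" looks like mechanically: any repository manager that can proxy and scan PyPI (devpi, Sonatype Nexus, JFrog Artifactory, AWS CodeArtifact, etc.) sits between pip and the Internet, and clients only need their index URL repointed. A sketch, with a hypothetical internal hostname:

```shell
# Point pip at an internal, scanned mirror instead of pypi.org.
# https://pypi.internal.example.com/simple is a placeholder URL.
pip config set global.index-url https://pypi.internal.example.com/simple

# Or per-invocation:
pip install --index-url https://pypi.internal.example.com/simple requests

# Equivalent pip.conf for baking into build images:
# [global]
# index-url = https://pypi.internal.example.com/simple
```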
https://redd.it/1i3ywvf
@r_devops
How to gain in-depth intelligence in Kubernetes
I've been working in the industry for the last 8 years and started working on Kubernetes when it was still evolving as a container orchestration platform. I had the opportunity to set up Kubernetes clusters using kOps and kubeadm, and have done end-to-end setup from scratch.
Throughout this journey I learnt both Docker and Kubernetes through implementation and execution, reading their docs as and when required.
But sometimes it feels like I still know only 50% of k8s. There are a lot of things, like writing your own CRDs, rigorous implementation of security, autoscaling, and uptime management, which I haven't had an opportunity to work with. And to answer real-time questions we must have real-time experience.
So I want to know from all you DevOps experts how you gained expertise in these dark areas of K8s.
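One low-cost way into the CRD corner specifically is to define a toy CRD on a throwaway local cluster (kind or minikube) and grow it into a controller later. A minimal sketch, with made-up group/kind names:

```shell
# Register a minimal custom resource type on a practice cluster.
# "widgets.example.com" and its schema are invented for this example.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
EOF

# The new API is immediately usable:
kubectl get widgets
```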
https://redd.it/1i41le7
@r_devops
Rant: Got tasked with setting up WebRTC infrastructure within 4 hours
😂
They used SaaS, but one customer required running the WebRTC components on their own cloud infrastructure. So I got tasked with setting this up, and the expected completion time is 4 hours to set up everything from scratch (including the non-existent k8s cluster that is meant to host it).
At least the vendor provides some k8s deployment manifests, but for me 4 hours would be the time needed to research what's required, not to finish this assignment and have a working WebRTC stack on our infrastructure 😆
I am waiting for an exciting Monday.
https://redd.it/1i42omh
@r_devops
Scaling to 6k rps getting 502s in k8s
I'm trying to scale my k8s infra but get 502s on a small portion of requests (0.1% at 6000 rps). After the 502s start, they stick around for some time, but they only occur at high rps.
At that rps there are 78 server instances created (2 CPU, 2 GB), and on average they use 0.35 CPU and 350 MB memory. It's a Node.js server.
My Prometheus has 4 CPU and 48 GB, but it sometimes crashes and comes back up after 1-5 mins.
There are 12 Traefik instances with 2 CPU and 4 GB that on average use 0.2 CPU and 1.2 GB.
So it doesn't seem like a resource constraint. I have PgBouncer for PostgreSQL pooling, but it does have logs of "max connection reached" and "reserved for system".
I have NodeLocal DNSCache in the k8s cluster for DNS caching as well.
Average response time is 350 ms, with p90 at 0.9 s and p99 at 2.3 s.
How do I debug these 502s?
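Not from the thread, but a common starting point: the proxy's logs record which side of the connection produced each 502, and with Node.js upstreams a frequent cause is the server's keep-alive timeout (5 s by default) being shorter than the proxy's idle timeout, so the upstream closes a connection the proxy was about to reuse. Hedged inspection steps, with namespace and pod names as placeholders:

```shell
# Traefik access logs (if enabled) show the backend and status for each 502
kubectl logs deploy/traefik -n <traefik-namespace> | grep ' 502 '

# Watch for upstream pods restarting or failing readiness under load
kubectl get pods -w
kubectl describe pod <server-pod>   # check Events and restart counts
```

If the pattern points at closed idle connections, raising `server.keepAliveTimeout` (and `server.headersTimeout`) in the Node.js app above the proxy's idle timeout is a commonly cited fix.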
https://redd.it/1i443hc
@r_devops
Encountering anomalies when deploying azure update manager dynamic scopes across multiple subscriptions
I'm facing multiple anomalies when deploying Azure Update Manager dynamic scopes linked to maintenance configurations across multiple subscriptions, with the below script (personal details removed):
```
# Define a hashtable of subscriptions with their names as keys and IDs as values
$subscriptions = @{
    "subscription A" = "00000000-0000-0000-0000-000000000000"
    "subscription B" = "00000000-0000-0000-0000-000000000000"
    # Additional subscriptions......
}

# Ensure you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process

# Authenticate with the sys-mi linked to this automation account
az login --identity
az account show

# Install the maintenance Azure CLI extension without prompting for confirmation (now mentioned in the ADO pipeline)
az extension add --name maintenance --allow-preview true --yes
az extension show --name maintenance
az config set extension.dynamic_install_allow_preview=true

# Mapping between maintenance configurations and their dynamic scope tags
$dynamic_scope_tag_to_mc = @{
    mc_ne_dev_arc = @{
        mc_config_id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-mc-ne-aum/providers/Microsoft.Maintenance/maintenanceConfigurations/mc_ne_dev_arc"
        dynamic_scope_tag_value = "dev-arc"
    }
    mc_ne_stage_platform = @{
        mc_config_id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-mc-ne-aum/providers/Microsoft.Maintenance/maintenanceConfigurations/mc_ne_stage_platform"
        dynamic_scope_tag_value = "stage-platform"
    }
    # Additional maintenance configurations.....
}

# Iterate over each maintenance configuration and its dynamic scope tag
foreach ($scope in $dynamic_scope_tag_to_mc.Keys) {
    # Get the maintenance configuration details
    $mc_config_id = $dynamic_scope_tag_to_mc[$scope]["mc_config_id"]
    $scope_tag_value = $dynamic_scope_tag_to_mc[$scope]["dynamic_scope_tag_value"]

    # Iterate over each subscription for this maintenance configuration
    foreach ($sub in $subscriptions.Keys) {
        $subscription_name = $sub
        $subscription_id = $subscriptions[$sub]

        Write-Output "Subscription name - $($subscription_name)"
        Write-Output ""
        Write-Output "Subscription - $($subscription_id)"
        Write-Output ""
        Write-Output "Applying dynamic scope tag '$($scope_tag_value)' to MC >>> $($mc_config_id)"
        Write-Output ""

        # Deploy the dynamic scope to the maintenance configuration for this subscription
        az maintenance assignment create-or-update-subscription `
            --maintenance-configuration-id $mc_config_id `
            --name "assignment-$($scope_tag_value)" `
            --filter-os-types windows linux `
            --filter-resource-types "Microsoft.Compute/VirtualMachines" "Microsoft.HybridCompute/machines" `
            --filter-tags "{zimcanit-mc-config:[$($scope_tag_value)]}" `
            --filter-tags-operator All `
            --subscription $subscription_id
    }
}

az logout
```
The script is triggered via a runbook within an automation account and does the following:
- Store a list of all subscriptions in my tenant: **$subscriptions**
- Define the dynamic scope tag values to assign per maintenance configuration in a nested hash table object **$dynamic_scope_tag_mc**
- Iteration logic:
- Iterate over every dynamic scope tag value per maintenance configuration id; whilst extracting key attributes for maintenance configuration ID and associated dynamic scope tag value.
- Iterate over every subscription ID per dynamic scope tag value and leverage az cli cmd `az maintenance assignment create-or-update-subscription` to assign cross-subscription dynamic scopes
**Anomalies faced:**
- Some dynamic scope assignments align with my architectural requirements
- Some dynamic scope assignments are duplicated, but the difference is the casing for the os type filter
- Some maintenance configurations have no dynamic scopes assigned to them at all
**Questions**
- Is there a way I can dynamically reference my subscriptions within the PowerShell runbook without hardcoding them?
- Is there anything in the iteration logic that needs to be revised, given that it currently only partially works?
- I referenced an existing Stack Overflow question for inspiration when setting up the original script: [How to use New-AzConfigurationAssignment Powershell cmdlet for Dynamic Scope for different subscriptions -Azure update manager](https://stackoverflow.com/questions/78159445/how-to-use-new-azconfigurationassignment-powershell-cmdlet-for-dynamic-scope-for)
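On the first question: the subscription map can likely be built at runtime instead of hardcoded. Assuming the managed identity has at least Reader on the subscriptions, `az account list` returns everything it can see:

```shell
# One "name<TAB>id" line per subscription visible to the logged-in identity,
# which the runbook could parse into its hashtable instead of hardcoding it
az account list --query "[].{name:name, id:id}" -o tsv
```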
https://redd.it/1i49gy9
@r_devops
Deeply curated database of 400+ well-funded, Remote-friendly startups + jobs
And no, this isn't another spreadsheet or pay-to-play directory. I manually curated this database of well-funded startups working on interesting things because I got tired of sifting through the noise of LinkedIn/Twitter. This is totally open & built on Framer. And yes, I know startups aren't for everyone, but these are hopefully the better ones. Let me know what you think and hopefully it's helpful to find some interesting opportunities this year: https://startups.gallery/categories/work-type/remote
https://redd.it/1i4aes6
@r_devops
Anyone from a DevOps or Cloud-related Startup Successfully Raise $1 Million? How Did You Do It?
Hi everyone,
I’m curious if anyone here has successfully raised $1 million or more for a DevOps or cloud-related startup. If so, I’d love to hear about your journey.
- How did you approach investors?
- What were some key strategies or tactics that worked for you?
- Did you face any particular challenges, and how did you overcome them?
- What kind of milestones or proof did you use to demonstrate traction?
Any insights or advice would be greatly appreciated as I’m looking to learn from others who have been through this process!
Thanks in advance!
I'll select the best answer based on upvotes and buy the author a virtual coffee.
Honest answers only, please.
https://redd.it/1i4ksbb
@r_devops
How do DevOps technologies like Kubernetes, Terraform, Ansible, etc. actually work in a company?
I've been using these technologies in my own homelab and have been learning how they function, but I struggle to see how they work in an actual company. I can see how they are useful in certain scenarios, but how do they work in a DevOps sense? Are people in companies writing programs that are deployed with Kubernetes? Is Terraform provisioning desktops for coders? Specific examples of when these technologies are used would be great!
Thank you!
https://redd.it/1i4mcmy
@r_devops
Leveling Up as a DevOps / SRE / Infra
Hello fellow infrastructure masters, it's now 2025 and we keep on grinding as usual... I had a bit of a break from pure engineering and enjoyed a couple of years of architecting solutions.
The landscape is definitely evolving, and so must we. Long gone are the days (at least for me) when 7-8 recruiters a day were spamming me for infrastructure work, be it full-time or contracting; we all know how the market is at the moment.
Here's the thing: I see the demand for versatility in a proper DevOps engineer (I'd rather use the unified term "infrastructure") ever rising, and depth in Python and Go is highly valued.
I'm now at the crossroads of adding an upgrade to my skills, something deeply niche. Let me give you my background:
7 years in infra: NOC --> TechOps --> SRE --> senior DevOps --> head of infra --> cloud architecture and consulting --> solutions / presales
There are a few things that I care about and many that I don't.
How niche is specialised "Cloud Cost Optimisation" or FinOps, and what's the demand for it? How's the DevSecOps landscape (as a niche)? What about DataOps? MLOps - I despise statistics and anything related to math-heavy, data-science-type work.
Maybe there are other unexplored "SomethingOps" niches as well. I'd love to hear y'all's sentiment on the topic and have a healthy discussion.
https://redd.it/1i4w50e
@r_devops
Azure AZ-400 Exam Prep
Where can I find AZ-400 exam preparation questions? I've heard that dumps won't work on the real exam.
https://redd.it/1i4xgqe
@r_devops
DevOps learning portals
Hi folks,
Do you recommend any websites for learning DevOps, SRE, Linux, networking, etc., where you get tasks to solve in a "real" environment with a terminal? Paid options are fine. Thanks!
https://redd.it/1i4yn2e
@r_devops
How to run Selenium Grid with Jenkins in Docker
I want to run CI/CD from a Jenkins container (with Docker, Docker Compose, and some plugins installed), then open localhost:8080 to configure Selenium Grid. It looks like the container cannot connect to the host (the Selenium Grid URL).
I have tried but without success!
Are there any examples or recommendations?
I want everything to run in containers.
https://redd.it/1i4zy44
@r_devops
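A common cause of this (my note, not from the post): inside the Jenkins container, localhost refers to the container itself, not the host, so a Grid URL like http://localhost:4444 fails. A minimal sketch that instead puts both containers on a shared Docker network, so they reach each other by name (the container and network names here are made up for illustration):

```shell
# User-defined bridge network: containers on it can resolve each other
# by container name, so nothing needs to go through the host's localhost.
docker network create ci-net

# Standalone Selenium Grid (hub + Chrome node in a single image).
docker run -d --name selenium-grid --network ci-net -p 4444:4444 \
  selenium/standalone-chrome

# Jenkins on the same network. Inside Jenkins, configure the Grid URL
# as http://selenium-grid:4444 -- not http://localhost:4444.
docker run -d --name jenkins --network ci-net -p 8080:8080 \
  jenkins/jenkins:lts
```

The same idea carries over to Docker Compose: services defined in one compose file share a default network and resolve each other by service name.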
Understanding Redis 7.4+ and Valkey (fork from 7.2.4)
So if I understand this right, Redis 7.2.4 was the last release before their big license change; from 7.4 onward, Redis is dual-licensed under RSALv2/SSPLv1, which requires a commercial license if you offer Redis as a managed service. A fork of 7.2.4 was made, called "valkey":
https://valkey.io/
https://github.com/valkey-io/valkey
with the original BSD license, which is 100% free. And it's become so popular that the Linux Foundation and others have started contributing to the fork, making it arguably better than mainline Redis. Even AWS ElastiCache offers Valkey at roughly 30% lower cost. If this is all true, why is anyone still using normal Redis? Am I missing something?
https://redd.it/1i51uiz
@r_devops
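Worth noting (my addition, not from the post): Valkey is wire-compatible with Redis, so existing clients and tooling connect unchanged. A quick sketch, assuming Docker and the standard redis-cli are installed, and that the image tag shown is available:

```shell
# Start Valkey where a Redis server would normally run; it listens on
# the same default port (6379) and speaks the same RESP protocol.
docker run -d --name valkey -p 6379:6379 valkey/valkey:7.2

# Any existing Redis client or tool works as-is:
redis-cli -h localhost -p 6379 PING   # expect PONG once the server is up
```

This drop-in compatibility is why migrating from Redis to Valkey is usually a matter of swapping the server, not rewriting application code.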
Why is DevOps still such a fragmented, exhausting (and ofc costly) mess in 2025?
I have been thinking about this for quite some time and thought of getting your thoughts. I feel like DevOps was supposed to make life easier for developers, but honestly, it still feels like an endless headache. Every year, there’s a new tool, a new “best practice,” and a new wave of people claiming they have finally cracked the DevOps code… yet here we are, still dealing with the same mess, just with fancier buzzwords.
A few things I keep running into over the years that I have worked with different projects:
1. The never-ending toolchain puzzle – Every company I have worked with has a bloated DevOps stack. Terraform, Kubernetes, Jenkins, ArgoCD, GitHub Actions, Helm, Spinnaker—you name it. It’s like every tool fixes one thing but breaks another, and somehow, the entire setup is still fragile as hell. Instead of simplifying DevOps, we’re just stacking more complexity on top of complexity.
2. Burnout is real – I don’t know a single DevOps engineer who isn’t constantly tired. Between keeping up with cloud providers, maintaining brittle pipelines, dealing with security updates, and being on-call for random failures at 2 AM, it’s no surprise people are leaving the field. We were supposed to be automating things, not babysitting them 24/7.
3. Automation is a lie – Every new trend is supposed to “automate everything,” but in reality, we just end up automating a different kind of chaos and it becomes totally fragmented. IaC is great until Terraform state breaks and you’re in hell. GitOps is cool until you realize drift is inevitable. Pipelines are supposed to “just work,” yet half the time, debugging a failed deploy feels like solving a murder mystery with no clues.
And here’s the kicker: this mess is costing companies millions. There’s actual research backing this up:
The [2024 State of DevOps Report by Puppet](https://www.puppet.com/blog/state-devops-report-2024) talks about how DevOps is still in a weird transition phase, with more companies shifting towards platform engineering but still struggling with inefficiencies.
The **DORA 2024 Accelerate State of DevOps Report** highlights that while elite teams are getting better, the majority are still facing the same bottlenecks we’ve seen for years.
So, I gotta ask—what’s the real solution here? Has anyone actually figured out how to do DevOps without it turning into a soul-sucking nightmare? Or are we all just stuck in an infinite loop of new tools, more YAML, and never-ending on-call rotations?
Would love to hear how others are dealing with this. Maybe I’m just jaded, but damn, it feels like we should be further along by now.
https://redd.it/1i538r1
@r_devops