Sad and feeling miserable
I've been in the DevOps space for 8+ years.
Today I've just been sad and miserable. I feel like I don't know a lot of the newer technologies and feel really behind. I've been trying to catch up on learning Kubernetes and have made some progress. But there are so many other things I just don't know how they work, such as Puppet, Ansible, Terraform, Kubernetes (learning in progress), and Spinnaker. And I don't even know if there are other things that I should know.
I'm good at programming and building things, automation, etc. I can figure out some of the stuff surrounding these technologies at work. But I don't have a deep understanding and feel behind and lost at times.
I feel like the best way I've learned is managing my own version of these technologies and doing some project(s). But I don't even know where to start. And when I do start (Kubernetes has been a little nice to learn on Minikube), I don't know the cost-efficient way to do so. For example, I don't even know how to learn Terraform without a cloud provider and have it be practical.
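On the Terraform point specifically, one common answer: you don't need a cloud account to practice the core workflow, because the hashicorp/local provider manages plain files on disk. A minimal sketch (directory and file names here are just illustrative):

```shell
# Practice init/plan/apply with no cloud account: the "local" provider
# only creates and tracks files on your machine.
mkdir -p /tmp/tf-playground && cd /tmp/tf-playground
cat > main.tf <<'EOF'
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
  }
}

resource "local_file" "hello" {
  filename = "${path.module}/hello.txt"
  content  = "managed by terraform\n"
}
EOF
# With terraform installed:
#   terraform init && terraform apply -auto-approve
#   cat hello.txt   # the file is now tracked in terraform state
echo "wrote $(pwd)/main.tf"
```

The state file, plan/apply cycle, and HCL syntax all behave exactly as they would against a real cloud provider, so the muscle memory transfers.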
I don't know, my headspace is such a mess. I feel alone. I worry that if I lose my job tomorrow, I'll be homeless. I don't think anyone would hire me or that I could even get another DevOps job.
I don't know, just need some advice and help. Feel so hopeless and sad.
https://redd.it/1ehwd9s
@r_devops
Will people be interested in a super hands-on/practical data security + crypto key short course?
Hello reddit,
I'm a platform/security engineer. I do a lot of preaching on how standardized data encryption + crypto key management could work, and how it could simplify platform engineers' lives by not having to configure access/permission policies for every single data platform and then figure out how to align those policies across platforms. I was wondering whether people would find it interesting if I walked through the end-to-end process: creating a key, configuring key access, adding it to the client, encrypting the data, etc., and showed how different types of crypto keys apply in different scenarios. I thought I could create a ~30-minute course with some Terraform + data encryption code, step by step. Would this be something people are interested in?
Appreciate the feedback.
https://redd.it/1ei2owa
@r_devops
How to get ready for a junior/entry level DevOps job
Hello people, I am about to finish my thesis as an electrical engineering student and I would like to get into DevOps. Having no experience in software development, since I focused on telecommunications and robotics, I started the IBM DevOps and Software Engineering Professional Certificate on Coursera about a month ago. Can you help me lay out some goals about what skills to pursue or what certifications to get, so I can have a clear path in mind before I prepare my CV?
Thank you in advance.
https://redd.it/1ei3x48
@r_devops
Can't get Chef to play along nicely with API for certificate issuance (fine with Ansible though)
Maybe someone can explain this ... or has an idea
I have the following recipe
# Install openssl
package 'openssl' do
  action :install
end

# Install jq
package 'jq' do
  action :install
end

# Generate CSR
execute 'generate_csr' do
  command <<-EOH
    openssl req -new -newkey rsa:2048 -nodes -keyout #{key_path} -out #{csr_path} -subj "/C=#{country}/ST=#{state}/L=#{locality}/O=#{organization}/CN=#{common_name}"
  EOH
  not_if { ::File.exist?(csr_path) }
end

# Check CSR
execute 'check_csr' do
  command "cat #{csr_path}"
  action :run
  only_if { ::File.exist?(csr_path) }
end

# Send CSR request
execute 'send_csr_request' do
  command <<-EOH
    curl --location '#{url}' \
      --header 'x-api-key: #{api_key}' \
      --header 'Content-Type: application/json' \
      --data "$(jq -n --arg csr \"$(cat #{csr_path})\" '{profile: {id: \"#{profile_id}\"}, seat: {seat_id: \"#{seat_id}\"}, csr: $csr, attributes: {subject: {common_name: \"#{common_name}\"}}}')" \
      >> #{cert_path}
  EOH
  action :run
  only_if { ::File.exist?(csr_path) }
end
The certificate it creates is weirdly formatted - it basically seems to be the full JSON response, including headers - for example (gap is intentional, obviously)
{"serial_number":"78A16E498xxxxxxxxx","delivery_format":"x509","certificate":"-----BEGIN CERT
FICATE-----\nMIIEdDCCA1ygAwIBAgIUeKFuSYuyqzly34Y7vExa00frLqswDQYJKoZIhvcNAQEL\nBQAwgYsxCzAJBgNVBAYTAlVTMQswCQYDVQQIE (...)
(...) c5LCeO5lueAmuYeEPZsPMkIWEK0wMG\nnHbfpg+ICIwsB4JA3seExi5J7/orrH5L73laWcRsebU
mu+h3wDuXL1SJP3bb9VVP\nyZYUqusTWHGUq2JX8qEd3OhokExj6AiMzsKyeif5K4lRlSOP4TnGTA==\n-----END CERTIFICATE-----\n"}
Even if I use some cmd magic to remove the header and the line breaks and manually make it 'look' like a real cert, the cert is not valid... The characters are fine, so it seems to be all about formatting.
If I run the same as an Ansible playbook - for example
tasks:
  - name: Install openssl
    ansible.builtin.package:
      name: openssl
      state: present
  - name: Install jq
    ansible.builtin.package:
      name: jq
      state: present
  - name: Generate CSR
    ansible.builtin.command:
      cmd: >
        openssl req -new -newkey rsa:2048 -nodes
        -keyout {{ certificate.key_path }}
        -out {{ certificate.csr_path }}
        -subj "/C={{ certificate.country }}/ST={{ certificate.state }}/L={{ certificate.locality }}/O={{ certificate.organization }}/CN={{ certificate.common_name }}"
    args:
      creates: "{{ certificate.csr_path }}"
  - name: Check if CSR exists
    ansible.builtin.stat:
      path: "{{ certificate.csr_path }}"
    register: csr_file
  - name: Read CSR content
    ansible.builtin.slurp:
      src: "{{ certificate.csr_path }}"
    register: csr_content
    when: csr_file.stat.exists
The cert is just fine
-----BEGIN CERTIFICATE-----
MIIEdjCCA16gAwIBAgIUT8P6KVyWLnfhi8LFodI2rfV9NWswDQYJKoZIhvcNAQEL
BQAwgYsxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJHQTEQMA4GA1UEBxMHUm9zd2Vs
bDEOMAwGA1UEERMFMzAwNzUxIDAeBgNVBAkTFzE3MCBDb2NocmFuIEZhcm1zIERy
aXZlMRUwEwYDVQQKEwxSdWRsb2ZmIEluYy4xFDASBgNVBAMTC3J1ZGxvZmYuaWNh
(...)
ZdmaZwM8GSjj+CR7jZJquFK/w2DFn4vaaZWm3uik6VCwfF+VENf7G0W4F6BTIeYW
FKmrB5lEX3vD60pz+rLlTo3e+Mv7sc20sjUmOrdQrO0S7BJAZ8s7Vs+CHEgOiKIq
vOEXJ2p5MWVytZsevoXmHrV5QREKgFrVxXjpsq9N21d+KqL8nkglc4Ix
-----END CERTIFICATE-----
In fact I see the same issue with Puppet and Salt... For now I just use a bash script, run by Chef etc., to issue certificates - but it is puzzling that Ansible 'gets it right' while the rest don't...
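For what it's worth: since the file ends up containing the raw JSON response, one tool-agnostic fix is to pipe the curl output through jq and extract only the certificate field; `jq -r` emits the raw string with the `\n` escapes decoded into real line breaks. A sketch with a canned response (the field name matches the output shown above, but verify against your API's actual schema):

```shell
# Extract the PEM from the JSON response; `jq -r` decodes the "\n"
# escapes into real newlines. The response here is a stand-in.
response='{"serial_number":"78A16E49","delivery_format":"x509","certificate":"-----BEGIN CERTIFICATE-----\nMIIEdDCC\n-----END CERTIFICATE-----\n"}'
printf '%s' "$response" | jq -r '.certificate' > /tmp/demo-cert.pem
cat /tmp/demo-cert.pem   # a properly line-broken PEM
```

In the recipe, that would mean ending the curl command with `| jq -r '.certificate' > #{cert_path}` instead of `>> #{cert_path}` (the `>>` append is also risky across repeated converges, since each run adds another copy).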
Any takers lol ?
https://redd.it/1ei5zfr
@r_devops
What do you use your developer portal's for?
In your company, what is the main use case for a developer portal (like Backstage, Port, Cortex, or Roadie)?
Is it the service template?
Incident management / On-call view?
Is it feature flags? ad-hoc permissions?
Deployment?
Or even security?
Anything I'm missing? What do you think is the main use?
https://redd.it/1ei9z8d
@r_devops
Calculator for determining uptime required of dependencies in order to meet application uptime SLO
https://eason.blog/posts/2024/08/availability-dependencies/
Shows the relationship between an application's uptime and the uptime of its dependencies. The post includes an interactive calculator you can use to determine what the dependency uptime has to be in order for the application to have a hope of hitting its SLO. Curious if y'all have implemented policies that take this perspective into account, and how that works at your company?
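The core relationship the post describes: with hard (serial) dependencies, an application's best-case availability is the product of its own availability and each dependency's. A quick back-of-the-envelope check (the numbers are illustrative):

```shell
# An app with two hard dependencies at 99.9% and 99.99%, itself at
# 99.95%: the composite ceiling is the product of the three.
awk 'BEGIN { printf "%.4f\n", 0.9995 * 0.999 * 0.9999 }'   # prints 0.9984
```

So even before any incidents of its own, that app cannot credibly promise more than ~99.84% uptime, which is why dependency SLOs have to be stricter than the application's SLO.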
https://redd.it/1eicl70
@r_devops
Create a program/script that shows a pop-up message when a specific folder is opened...
Hello, friends. I'm new to the world of programming, but the boss of the company I work for gave me the following request: create a method so that when I open a folder, it displays a pop-up message (like those error messages when a program crashes), and I can edit the information in the message.
I tried using a .bat file with some commands I found via ChatGPT, but to no avail. The closest I got was using PowerShell, which was able to monitor changes in the folder, such as when files were created or deleted.
But that's not what we want. I was wondering if there is a method using any kind of programming language - if any of you know one, I'd be happy for the help! My DMs are open.
Translated with DeepL.com (free version)
https://redd.it/1eidx7o
@r_devops
How to evaluate and compare algorithm / app performance across different past datasets and code commits
Our team has implemented some algorithms that run on robots. We capture robot sensor data over several runs per day. Whenever there are major changes in the algorithm, we run it in simulation mode over previously captured sensor data. We have written Python scripts to:
1. Run algorithm over past robot runs data one after another
2. Analyse / visualize the algorithm performance by plotting various graphs in jpg files.
Now we want to automate it further. Here are new requirements:
1. Organize the past data
2. Run the latest or a selected code commit of the system/algorithm against some chosen or randomly selected data (preferably reusing the Python scripts we already have).
3. Store the performance metric of the run in the database.
4. Check if there is considerable degradation in performance.
5. Visualize the performance with different custom visualizations/graphs/plots (preferably reusing the Python scripts we already have).
6. Given old data IDs and commit IDs, fetch corresponding run results and provide analysis / visualization.
I was thinking of implementing a client-server app from scratch. For example, the server side (say a minimal Django app) could expose some REST APIs to
1. Accept a request (containing a dataset ID and a code commit ID) to check out the commit and run the simulation against the specified data
2. Persist the run result in the database and the graph images in the file system
3. Return old run performance data and graph image links given, say, a run ID, commit ID or dataset ID, for comparison across different runs/commits/datasets
And then we can have some web app, built from scratch, that consumes these REST endpoints.
But I felt there must be some existing framework to achieve this. However, a quick Google search did not lead me to anything. I have the following doubts:
Q1. Is there any tool to achieve this?
Q2. Does this use case fit somewhere in the DevOps lifecycle? If yes, where?
Q3. How is this use case implemented in industry?
My guess is the following: since we already have the simulation-run and the performance analysis/visualization scripts, we can reuse them to fit into a CI/CD pipeline. For example, we can implement points 2 and 3 with CI/CD tools like Jenkins or GitHub Actions: they can check out and build the specified commit and then run our Python scripts for the simulations, performance analysis, and visualization. Requirements 1, 4, 5 and 6 can be implemented from scratch and can work independently of the CI/CD tool used. I feel this has the advantage that we use CI/CD tools for what they are best at (checking out and building the app, on demand or on every commit) while still using our existing Python scripts, thereby not limiting our customization for analysis and visualization.
Now my question is:
Q4. Does the above make sense? Or should we do it all either in some CI/CD tool or entirely from scratch?
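On requirements 3 and 4, a flat file keyed by commit and dataset goes a long way before a database or server app is needed, and it drops straight into a CI job. A toy sketch of the idea - the file name, field order, and 5% regression threshold are all made up for illustration:

```shell
# Append one row per simulation run, then compare the newest metric for
# a dataset against the previous one to flag regressions.
db=/tmp/perf_runs.csv
: > "$db"
record()      { echo "$1,$2,$3" >> "$db"; }               # commit,dataset,metric
metrics_for() { awk -F, -v d="$1" '$2==d {print $3}' "$db"; }

record abc123 run01 0.91   # older commit
record def456 run01 0.85   # newer commit

prev=$(metrics_for run01 | tail -n2 | head -n1)
curr=$(metrics_for run01 | tail -n1)
# Flag a regression if the new metric dropped more than 5% below the old.
awk -v p="$prev" -v c="$curr" 'BEGIN { if (c < p * 0.95) print "regression: " c " vs " p }'
```

The same record/compare step can run as the final stage of a Jenkins or GitHub Actions job, with the CSV (or a real database later) as the only shared state.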
https://redd.it/1eieady
@r_devops
Resume Suggestions Needed for Entry Level DevOps
Resume Picture: https://imgur.com/a/4z8cu5n
I've been wanting to make a shift from Network Security for some time now and have been self-studying. Just started applying to jobs about a week ago and haven't heard back from anyone (probably 200 applications sent out). There must be some glaring issues here, so I was wondering if there's anything I could learn or do to improve my resume. I've been at my current position for almost 2.5 years now and I kinda just feel like I'm wasting time here, so I'm looking for something that could move me in the right direction. I've done some beginner projects on my GitHub to help out: https://github.com/devshah95/
https://redd.it/1eige6c
@r_devops
How can I reduce the oncall burden?
Hey everyone,
I'm looking for some advice on how to make on-call duties a bit more bearable. I end up being on call every month for a full week (24/7), and those nighttime pages are killing me!
Would love to hear about how you all manage the on-call burden:
Metrics: What do you track to keep on-call healthy and manageable?
Reducing Burden: Any processes or strategies that work well for you?
Tools: What tools help you monitor and improve your on-call setup?
Team Structure: Does each team handle on-call, or do you use a NOC and have escalation policies?
Thanks a bunch!
https://redd.it/1eil9up
@r_devops
What metrics do you use to track your success and influence promotions and pay?
How do you track them? Do you monitor them manually, or use in-house or OSS tools?
For example, I keep an eye on the cost savings I produce over a given period for the services I manage. When my self performance review comes up, I use this metric to quantify my performance in keeping costs down. This process needs improvement.
https://redd.it/1eilq5z
@r_devops
What do you recommend for integrated logs navigation?
We have a small microservices architecture (5-6 services) running on AWS (Lambda, EC2, S3 mostly). We mostly lean on Sentry, CloudWatch and FullStory for observability.
I'd really like to be able to aggregate, track, visualize and navigate all of these in a single place for both performance and debugging, with big picture and granularity. Before embarking on an in-house solution, is there a platform you recommend? If in-house, do you have approaches that work for you?
https://redd.it/1eilet5
@r_devops
Windows runners
Our CI is built in bash... until now, most of our jobs were running on Linux runners.
Recently, we have a huge need for Windows runners as well... we don't have the capacity to rewrite everything in PowerShell, and we're not sure of the alternatives... the codebase is huge.
Has anyone else had this problem? Can you point me in the right direction?
https://redd.it/1eiml9d
@r_devops
How do you layout your resume?
I've been in DevOps for 2 years and have been part of layoffs for the second year in a row. I have applied to over 900 jobs. I have tried resume prep services and asked friends. But I have been trying to find a job in DevOps for the past few months.
I came from help desk and running help desks and transitioned to DevOps at the end of the stay-at-home phase for the pandemic.
So any tips would be great. I have a year in AWS and one in Azure, and have worked at a startup and at an MSP. I genuinely enjoy cloud DevOps and would like not to go back to Help Desk support (even in a higher tier or managerial role), but that is seeming less possible every day.
https://redd.it/1eiqbnk
@r_devops
Observability Meetup in San Francisco
Hi /devops :-)
I'm hosting an Observability meetup in San Francisco on August 8th, so if you're in the area and want free pizza, beer, and to listen to some cool talks on Observability, stop by!
We'll have speakers from Checkly (Monitoring as code), the co-creator of Hamilton (https://www.tryhamilton.dev/) and Burr (https://github.com/DAGWorks-Inc/burr), and the CEO/Founder of Delta Stream (who is also the creator of ksqlDB).
Should be a solid time :-)
https://redd.it/1eilaii
@r_devops
Deleting binlogs on the primary database
Hey all ,
Have a bit of a problem with our current primary MariaDB database server reaching 99% disk usage.
We have quite a lot of binary logs, with replication configured to a secondary. My question is: would purging binary logs on the primary, for all logs apart from the one the secondary is currently reading, cause any issues to the integrity of the data on the primary?
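Purging binlogs the replica has already consumed does not touch the primary's own data; the binary log only matters for replication (and point-in-time recovery). The usual sequence looks like this - the file name below is purely illustrative, and `PURGE BINARY LOGS TO` deletes logs strictly before the named file, never the file itself:

```shell
# 1) On the replica, find the oldest primary binlog it still needs:
#      mysql -e "SHOW SLAVE STATUS\G"   ->  Relay_Master_Log_File
# 2) On the primary, purge everything before that file:
replica_needs='mysql-bin.000123'   # illustrative; read the real value from the replica
sql="PURGE BINARY LOGS TO '${replica_needs}';"
echo "$sql"                        # run on the primary: mysql -e "$sql"
```

Longer term, capping retention (e.g. `expire_logs_days`, or `binlog_expire_logs_seconds` on newer versions) avoids getting back to 99% disk, as long as the window comfortably exceeds your worst-case replica lag.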
Thanks for any advice
https://redd.it/1eikmft
@r_devops
Proxmoxgk: a shell tool for deploying LXC/QEMU guests, with Cloud-init
Good evening everyone, I've just released a small command-line utility for Proxmox v7 and v8 to automate the provisioning and deployment of your containers and virtual machines with Cloud-init.
**Key features:**
* Unified configuration of LXC and QEMU/KVM guests via Cloud-init.
* Flexible guest deployment:
* in single or serial mode
* fully automated or with your own presets
* Fast, personalized provisioning of your Proxmox templates
[Presentation on Proxmox forum](https://forum.proxmox.com/threads/proxmox-automator-for-deploy-lxc-and-qemu-guests-with-cloud-init.152183/)
[Github](https://github.com/asdeed/proxmoxgk)
https://redd.it/1eiwlvl
@r_devops
Deploying to cloud (beginner)
Hey 👋🏻
I am building a project with scripts that scrape prices from websites, and I also want to learn how to deploy it to the cloud.
So I have one beginner question.
I have two scripts: one in Node with Puppeteer and the other in Python with Selenium (I am learning and trying both languages).
How can I deploy these two scripts to run in the cloud automatically, one after the other, daily? And how can I check for errors, completion, etc., so I can have some logic to retry if they fail?
Do I need some central component to coordinate the tasks?
Thank you
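One simple starting point, before reaching for a full workflow orchestrator, is a small driver script that a scheduler (cron on a VM, or a managed scheduler such as a GitHub Actions cron workflow) runs once a day. A sketch in Python, with hypothetical script names standing in for the two scrapers:

```python
import subprocess
import sys
import time


def run_with_retries(cmd, retries=3, delay_s=60):
    """Run one scraper command, retrying on a non-zero exit code.

    Returns True on success, False once all attempts fail.
    """
    for attempt in range(1, retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True
        print(f"{cmd[0]} failed (attempt {attempt}/{retries}): "
              f"{result.stderr.strip()}", file=sys.stderr)
        if attempt < retries:
            time.sleep(delay_s)
    return False


def run_pipeline(jobs):
    """Run each job in order; stop if one exhausts its retries."""
    for cmd in jobs:
        if not run_with_retries(cmd):
            return False
    return True


# Hypothetical script names -- adjust to your project layout.
# Invoked once a day by cron or a cloud scheduler:
#   run_pipeline([["node", "scrape_puppeteer.js"],
#                 ["python3", "scrape_selenium.py"]])
```

Each scraper should exit non-zero on failure (an uncaught exception in Python, `process.exit(1)` in Node) so the driver can detect it. Once this outgrows a single script, managed options like GitHub Actions scheduled workflows or an orchestrator such as Airflow provide the coordination, logging, and retry logic for you.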
https://redd.it/1ej2ga8
@r_devops
Terraform came in clutch
I am currently working on a project, and I always had to deploy it manually to the Azure VM. It wasn't cumbersome, but as a programmer, repeating boring tasks is definitely not my cup of tea. I had to find some sort of automation for this. My practice with GitHub Actions wasn't enough, so what did I do? I tried all sorts of solutions that came to mind: I wrote bash scripts and set up a listener (via a PowerShell script) to check whether a new zip file had been uploaded and then run the bash script. The problem was that Python ran the bash script in a subprocess, which invited some really annoying bugs. Hell, I even tried writing a listener in C to detect when a new file was uploaded and run the installation bash script, but that couldn't do the trick either.
I had almost given up on full automation until today. I had always wanted to learn Terraform, but I never really understood its use case or its true power. Today I finally took the courage, started reading the docs, and understood what Terraform really is. After an hour of playing around, I thought: let's just use Terraform for my task. To my surprise, after just 30 minutes of small adjustments, I finally had a locally hosted CI/CD pipeline using Terraform that deploys the code to the VM.
I understand that this solution may or may not be the standard or ideal way, but it was definitely worth the effort. Any thoughts on this implementation?
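For readers wondering what such a setup can look like: the post doesn't share its config, but a common pattern for "Terraform as a deploy step" is a `null_resource` whose trigger is a hash of the artifact, with `file` and `remote-exec` provisioners pushing and installing it over SSH. A hedged sketch with placeholder host, user, and paths (not the poster's actual setup):

```hcl
# Re-runs the deploy whenever app.zip changes on the local machine.
resource "null_resource" "deploy" {
  triggers = {
    artifact_hash = filemd5("app.zip")
  }

  connection {
    type        = "ssh"
    host        = "vm.example.com"   # placeholder VM address
    user        = "azureuser"        # placeholder SSH user
    private_key = file("~/.ssh/id_rsa")
  }

  # Copy the artifact to the VM.
  provisioner "file" {
    source      = "app.zip"
    destination = "/tmp/app.zip"
  }

  # Unpack and restart the service on the VM.
  provisioner "remote-exec" {
    inline = [
      "unzip -o /tmp/app.zip -d /opt/app",
      "sudo systemctl restart app",
    ]
  }
}
```

Running `terraform apply` then deploys only when the artifact's hash has changed, which matches the "locally hosted pipeline" idea described above.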
https://redd.it/1ej44cy
@r_devops
NGINX Configuration Help: URL Cleanup Before Redirect
Hi everyone,
I'm working on cleaning up URLs in my NGINX configuration before redirecting them. Specifically, I want to replace all instances of `%2F` with `/` in the URL. I'm using a rewrite rule to achieve this, but I'm running into some issues. Here's the configuration I'm working with:

```nginx
server {
    listen 80;
    server_name cleaner.home.localhost;
    root /usr/share/nginx/html;
    location / {
        # Do not apply rewrite if it's already been redirected
        if ($request_uri ~* "%2F") {
            rewrite ^(.*)%2F(.*)$ $1/$2 last;
        }
        return 301 https://localhost$request_uri;
    }
}
```

Here are the problems I'm encountering:
1. When using the `last` argument with `rewrite`, I get a 404 error. I suspect this is due to an infinite loop, which triggers NGINX's fail-safe mechanism.
2. If I remove the `last` argument, the redirect works, but the rewrite rule doesn't seem to be applied at all. It looks like `$request_uri` is not affected by the rewrite.
My questions are:
1. How can I ensure that the rewrite rule is applied correctly and the `%2F` characters are replaced with `/`?
2. Is there a better way to implement this URL cleanup?
Thanks in advance for any help!
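Two nginx facts explain most of what the post observes: `$request_uri` always holds the raw request line and is never changed by `rewrite`, while `rewrite` and `$uri` operate on the normalized (percent-decoded) URI. So redirecting with `$uri` is usually the simpler route; a sketch reusing the server name from the post (verify how your nginx version normalizes `%2F` before relying on this):

```nginx
server {
    listen 80;
    server_name cleaner.home.localhost;

    location / {
        # $uri is the normalized, percent-decoded URI, so an encoded
        # slash typically arrives here already decoded; $request_uri
        # would still carry the raw %2F from the request line.
        return 301 https://localhost$uri$is_args$args;
    }
}
```

This also explains problem 2 above: the original config's redirect used `$request_uri`, which reflects the rewrite result by design never.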
https://redd.it/1ej5xsl
@r_devops
Open Source Platform Orchestrator Kusion v0.12.1 is Out!
What has changed?
* Storage backend enhancements, including support for `path-style` endpoints for AWS S3, and a new `kusion release unlock` command for better release management.
* Optimized display of sensitive information to reduce the risk of leakage.
* Support for importing existing cloud resources and skipping their deletion during `kusion destroy`.
* The workspace `context` now supports declaring Kubernetes cluster configs and Terraform provider credentials.
* Support for using the `Spec` file as the input for the `kusion preview` and `kusion apply` commands.
More info can be found in our [medium blog](https://medium.com/@kusionstack/kusion-v0-12-1-release-improve-comprehensive-capabilities-and-optimize-user-experience-0375075d8fde).
Please check out the new release at: [https://github.com/KusionStack/kusion/releases/tag/v0.12.1](https://github.com/KusionStack/kusion/releases/tag/v0.12.1)
Your feedback and suggestions are welcome!
https://redd.it/1ej7cdf
@r_devops