Avoiding unexpected overages
For those managing multiple APIs, how do you keep track of usage and avoid unexpected overages?
https://redd.it/1gagvwv
@r_devops
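A pattern that comes up often in answers to this kind of question is a thin client-side wrapper that counts calls per provider and flags usage before a cap is reached. A minimal sketch (provider names, limits, and thresholds are all hypothetical, not from the thread):

```python
from collections import defaultdict

class QuotaTracker:
    """Track API calls per provider and warn before hitting a monthly cap.

    Limits and the 80% warning ratio are illustrative defaults.
    """

    def __init__(self, limits, warn_ratio=0.8):
        self.limits = limits          # e.g. {"geocoder": 10_000}
        self.warn_ratio = warn_ratio  # warn at 80% of the cap
        self.counts = defaultdict(int)

    def record(self, provider):
        self.counts[provider] += 1
        limit = self.limits.get(provider)
        if limit is None:
            return "untracked"
        used = self.counts[provider]
        if used >= limit:
            return "over"
        if used >= self.warn_ratio * limit:
            return "warn"
        return "ok"

tracker = QuotaTracker({"geocoder": 10})
statuses = [tracker.record("geocoder") for _ in range(10)]
```

In practice the "warn" branch would page someone or open a ticket; the point is that the check lives next to the call site instead of in a billing dashboard you look at after the overage.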
I wrote a piece on the evolution we're witnessing in the field of automation. I'd be humbled to get feedback on it and to discuss the topic with the devops community.
hey!
some time ago a thought struck me: what if I started writing about my experiences from my day-to-day work as a data engineer? I have a knack for automating stuff, so I genuinely wanted to focus on this topic.
I enjoy discussing the topics of automation, technology, and artificial intelligence with fellow thinkers. I hope that showcasing my thought process and point of view in a longer text will allow people who find this interesting to reach out to me and/or provide some feedback, ideally to discuss the subjects I raise.
I've been recently thinking a lot about the progress we're witnessing in the field of generative AI, especially in a broader context of evolving automation—it's not just gears and gadgets anymore. I'm persuaded we're stepping into the third era of automation: intelligence, after automating physical labor and calculation. It's an exciting, inevitable, and challenging journey.
the link below will take you to the piece I've prepared to organize how I think about the automation evolution and how to find my way in the changing world (no LLM participated in the writing process :) )
🔗 https://toolongautomated.substack.com/p/automation-unbound
I dive into the following topics:
👉 the three eras of automation: physical labor, calculation, and intelligence.
👉 automation in our daily lives: whether we like it or not, automation is everywhere.
👉 lessons from history: what the past teaches us about adapting to a world increasingly shaped by machines.
I'd be humbled to hear your feedback on the piece, and I hope to have some discussion about these questions:
1. are you afraid and/or skeptical about progressing automation and AI?
2. do you enjoy discussing this subject or are you rather reluctant to do that?
3. if an artifact (a.k.a. indirect intelligence) is created by what I call direct intelligence (human) and that artifact appears to be a synthetic being, then should we call this artifact direct intelligence?
https://redd.it/1gahf5n
@r_devops
Detect and fix bugs early with AI
Just read an article about Early - an AI tool designed to catch bugs before they become a problem. I'm curious about how this could impact our daily coding practices and overall project timelines.
Do you think integrating AI like this can enhance our productivity and code quality? Have any of you had experiences with similar tools that you found beneficial or challenging?
https://redd.it/1gajuke
@r_devops
Database DevOps: schema changes
How do you guys do database schema changes in your team?
Does your DevOps team own it, or the devs?
Are your schema changes tracked using Flyway or another tool, applied first in the dev DB and then promoted to prod?
In ours, the prod DB is separate, SQL file changes are applied manually, and no schema change reaches prod without the DB team's review and approval process.
https://redd.it/1gakqi1
@r_devops
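Since Flyway came up: not the OP's setup, but a minimal sketch of what versioned-migration tools do under the hood. They apply SQL files in version order and record each one in a history table, so the same change never runs twice and dev and prod converge on the same schema. File names, the schema, and the sqlite backend are illustrative.

```python
import sqlite3

# Versioned migrations, Flyway-style: V<version>__<description>.sql.
# Contents are inlined for the sketch; normally these live in files.
MIGRATIONS = {
    "V1__create_users.sql": "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "V2__add_email.sql": "ALTER TABLE users ADD COLUMN email TEXT",
}

def migrate(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    # Lexical sort is enough for this sketch; real tools parse the version number.
    for name in sorted(MIGRATIONS):
        if name not in applied:
            conn.execute(MIGRATIONS[name])
            conn.execute("INSERT INTO schema_history VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: the second run applies nothing new
cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
```

Pointing the same runner first at dev and then at prod is what makes the manual-SQL-plus-approvals setup the OP describes reproducible.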
Doing certifications makes me feel like an idiot, does everyone experience this?
So I have been working in the industry for about 8 years total, 5 of them in my current full stack developer role (dev / testing / deployment all in one role). However, I have been told I need to complete an industry-certified exam if I want to go for promotion.
At work we have a 4-day training event on the ISTQB syllabus 4, so I thought that would be a good one to attend since I do a lot of the testing for the team and I'd say I'm fairly good at it. It's only 2 days in and I feel like an idiot: having done about 5 mock exams, I'm averaging 40-50%, which is terrible when you need 70% to pass.
I'm having real issues in two places:
1. Questions where I have no idea of the answer because the topic has never come up, and will never come up, in my job.
2. Terminology that means one thing in the exam and another at my company.
For example, we were talking about testing and "executing lines", which refers to the lines in a logic flow diagram, not executing lines of code. What our team calls unit tests are referred to as component tests in the exam, what we call smoke tests are system integration tests, and our acceptance tests would actually be called regression tests based on the syllabus.
It's just really annoying, and has honestly angered me, that I have been able to write full penetration testing plans, set up test environments with test data, been involved in full end-to-end tests across multiple services, and even built our team's first ever AWS S3 connectivity tests for connecting to cloud services, but cannot pass a foundation-level certification exam on testing.
https://redd.it/1gaiit4
@r_devops
How much should I get paid?
A friend is asking me to do some Terraform IaC for his company. However, I'm not sure how much it should cost. Could you give me advice on the price of the following work, or on what I have to consider to set a reasonable price:
- create a Terraform module for a product they made on Azure cloud
- implement an Azure DevOps pipeline to deploy infrastructure changes on Azure (CI/CD)
Thanks for your help
https://redd.it/1gaqpli
@r_devops
Pivoting into cloud engineering may be tough...
Hey DevOps folks,
After running my first workshop, *A Day in the Life of a Cloud Engineer*, it hit me just how frustrating this career path has become for many of you. The **outsourcing of entry-level cloud roles** has made it feel like no matter how many certifications you earn or skills you build, companies will still look past you. It’s disheartening, and worse, it leaves a lot of smart and capable professionals wondering if they’ll ever get a real chance to enter this space.
That’s why I’ve put together a **free workshop series** to help you overcome these challenges. We’ll focus on:
* **Key skills that employers actually care about** so you can focus your energy
* **Building your first cloud project** to prove you can solve real problems
* **Navigating interview techniques** to stand out, even in this competitive market
If this resonates with you, check the link in my profile to join. And if you’re navigating these struggles too, connect with me on LinkedIn—I’d love to chat and help however I can!
https://redd.it/1garq3g
@r_devops
Record your terminal history to create executable runbooks
I am building Savvy as a new kind of terminal recording tool that lets you edit, run, and share recordings in a way that Asciinema does not support.
It also has local redaction to avoid sharing sensitive data such as API tokens, PII, customer names, etc. Example runbook: https://app.getsavvy.so/runbook/rb_b5dd5fb97a12b144/How-To-Retrieve-and-Decode-a-Kubernetes-Secret
What are some tools y'all are using to create/store runbooks?
https://redd.it/1gasf1s
@r_devops
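Savvy's actual redaction is its own implementation; purely to illustrate the idea of local redaction, here is a regex-based scrubber that masks token-like strings before a recording leaves the machine. The patterns are hypothetical and far from complete.

```python
import re

# Illustrative patterns only; a real tool ships many more and lets teams add their own.
SECRET_PATTERNS = [
    re.compile(r"(?i)(bearer\s+)[a-z0-9._\-]+"),  # Authorization headers
    re.compile(r"(AKIA)[A-Z0-9]{16}"),            # AWS access key IDs
    re.compile(r"(--token[= ])\S+"),              # CLI token flags
]

def redact(line):
    """Replace anything matching a secret pattern, keeping the recognizable prefix."""
    for pattern in SECRET_PATTERNS:
        line = pattern.sub(lambda m: m.group(1) + "<REDACTED>", line)
    return line

history = [
    "curl -H 'Authorization: Bearer abc123.def' https://api.example.com",
    "aws sts get-caller-identity",
    "kubectl --token=eyJhbGciOi... get secrets",
]
clean = [redact(line) for line in history]
```

Doing this locally, before upload, is the design choice that matters: the secret never reaches the sharing service at all.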
How much time do you spend fixing issues?
I'm considering going for DevOps; I have a background as a backend developer. My question is: how much of your time (maybe in %) do you spend fixing issues, and how much do you spend actually deploying new infrastructure, configuring, and other typical DevOps tasks? Thanks
https://redd.it/1gavxyk
@r_devops
GitOps vs dynamic updates to K8s objects
I am a bit new to GitOps and wondering what everyone thinks about programmatic creation and updates to Kubernetes objects when the application is otherwise managed by FluxCD, for instance. Is it really an antipattern?
In detail:
We have a central team managed Kubernetes cluster, where we can deploy our applications through GitOps. Now, we are building a platform (i.e., common stuff for many similar applications) that would programmatically interact with the kube-apiserver to update ConfigMaps, fire up Jobs, for starters. This is to decouple the business applications from the target environment.
Do you think we should not do it? I know that we technically can do it, it has worked in a PoC environment, but the central team says we should not do it, because it is against the GitOps principles. What do you all think?
(We could use HPA, KEDA, sidecars so that we can avoid live kube-apiserver interactions, but should we? Especially if we can implement the functionality with basic k8s objects.)
https://redd.it/1gawqbt
@r_devops
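To make the question concrete, here is a sketch of the kind of live update being debated: building the strategic-merge patch body a platform component could send to the kube-apiserver to change a ConfigMap's data. All names are hypothetical, and this only constructs the payload rather than talking to a real cluster.

```python
import json

def configmap_patch(data_updates):
    """Build a strategic-merge patch body for a ConfigMap's .data field.

    ConfigMap data values must be strings, so everything is coerced.
    """
    return json.dumps({"data": {k: str(v) for k, v in data_updates.items()}})

# A platform component might PATCH this to
#   /api/v1/namespaces/<ns>/configmaps/<name>
# with Content-Type: application/strategic-merge-patch+json.
body = configmap_patch({"feature-flags": "beta=on", "max-workers": 8})
```

The GitOps objection is that such a patch bypasses the Git history Flux reconciles from, so the controller may later revert it as drift; a common middle ground is to restrict live patching to objects or fields the GitOps tooling is configured to ignore.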
Video | What is Crossplane + Demo 🍭 (Day 5 in 30 Days Of CNCF Projects)
Watch here: https://youtu.be/C8yUfpmnosw
https://redd.it/1gayhco
@r_devops
The biggest compliment I've ever received.
Earlier this year, I was working on a proof of concept involving the installation of an LDAP server and authentication via SSH. For that, I needed to enable SSH password authentication [I can already hear you typing. I KNOW!!] to make it work. I ran into a lot of issues with the latest Ubuntu and felt like I was banging my head against the wall until I finally found the solution. I decided to share my findings on superuser.com to help anyone else who might encounter the same problem.
Fast forward to today [I check my email once every 3-4 days; currently, I have over 2,000 unread emails], and one email in particular caught my attention. I received it 2 days ago. It reads:
> Hi! I'm not a superuser.com website user and I can't write a DM to you, but I found your mail and I just want to say thank you for your answer! I spent 2 hours troubleshooting why I can't log into the server via SSH with a password... Again, thanks and have a nice day (or night) whenever you read this xD

I'm deeply touched. I've never received an upvote via email before. Thank you, "Denis K"—you've made my day!
https://redd.it/1gaysnm
@r_devops
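For readers hitting the same wall: the linked answer is the author's own, but the usual culprit on recent Ubuntu cloud images (a general observation, not necessarily what that answer covers) is a drop-in file under /etc/ssh/sshd_config.d/, often 50-cloud-init.conf, that sets PasswordAuthentication no. OpenSSH keeps the first value it reads for each option, and the Include of that directory sits at the top of sshd_config, so the drop-in silently overrides the main file. One fix is a drop-in that sorts earlier:

```
# /etc/ssh/sshd_config.d/01-enable-passwords.conf
# Files in this directory are read in lexical order, and sshd keeps the
# first value it sees for each option, so 01-* wins over 50-cloud-init.conf.
PasswordAuthentication yes
```

Verify the effective value with `sudo sshd -T | grep -i passwordauthentication` and restart the ssh service afterwards.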
Cloud Exit Assessment: How to Evaluate the Risks of Leaving the Cloud
Dear all,
**I intend this post more as a discussion starter, but I welcome any comments, criticisms, or opposing views.**
I would like to draw your attention for a moment to the topic of 'cloud exit.' While this may seem unusual in a DevOps community, I believe most organizations lack an understanding of the vendor lock-in they encounter with a cloud-first strategy, and there are limited tools available on the market to assess these risks.
Although there are limited articles and research on this topic, you might be familiar with it from the mini-series of articles by DHH about leaving the cloud:
[https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0](https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0)
[https://world.hey.com/dhh/x-celebrates-60-savings-from-cloud-exit-7cc26895](https://world.hey.com/dhh/x-celebrates-60-savings-from-cloud-exit-7cc26895)
(a little self-promotion, but (ISC)² also found my topic suggestion to be worthy: [https://www.isc2.org/Insights/2024/04/Cloud-Exit-Strategies-Avoiding-Vendor-Lock-in](https://www.isc2.org/Insights/2024/04/Cloud-Exit-Strategies-Avoiding-Vendor-Lock-in))
It's not widely known, but in the European Union, the European Banking Authority (EBA) is responsible for establishing a uniform set of rules to regulate and supervise banking across all member states. In 2019, the EBA published the "Guidelines on Outsourcing Arrangements" technical document, which sets the baseline for financial institutions wanting to move to the cloud. This baseline includes the requirement that organizations must be prepared for a cloud exit in case of specific incidents or triggers.
Due to unfavorable market conditions as a cloud security freelancer, I've had more time over the last couple of months, which is why I started building a unified cloud exit assessment solution that helps organizations understand the risks associated with their cloud landscape and supports them in better understanding the risks, challenges and constraints of a potential cloud exit. The solution is still in its early stages (I’ve built it without VC funding or other investors), but I would be happy to share it with you for your review and feedback.
The 'assessment engine' is based on the following building blocks:
1. **Define Scope & Exit Strategy type:** For Microsoft Azure, the scope can be a resource group, while for AWS, it can be an AWS account and region.
2. **Build Resource Inventory:** List the used resources/services.
3. **Build Cost Inventory:** Identify the associated costs of the used resources/services.
4. **Perform Risk Assessment:** Apply a pre-defined rule set to examine the resources and complexity within the defined scope.
5. **Conduct Alternative Technology Analysis:** Evaluate the available alternative technologies on the market.
6. **Develop Report (Exit Strategy/Exit Plan):** Create a report based on regulatory requirements.
I've created a lightweight version of the assessment engine and you can try it on your own:
[https://exitcloud.io/](https://exitcloud.io/)
(No registration or credit card required)
Example report - EU:
[https://report.eu.exitcloud.io/737d5f09-3e54-4777-bdc1-059f5f5b2e1c/index.html](https://report.eu.exitcloud.io/737d5f09-3e54-4777-bdc1-059f5f5b2e1c/index.html)
(for users who do not want to test it on their own infrastructure, but are interested in the output report \*)
*\* the example report used the 'Migration to Alternate Cloud' exit strategy, which is why you can find only cloud-related alternative technologies.*
To avoid any misunderstandings, here are a few notes:
* The lightweight version was built on Microsoft Azure because it was the fastest and simplest way to set it up. (Yes, a bit ironic…)
* I have no preference for any particular cloud service provider; each has its own advantages and disadvantages.
* I am neither a frontend nor a hardcore backend developer, so please excuse me if the aforementioned lightweight version contains some 'hacks.'
* I’m not trying to convince anyone that the cloud is good or bad.
* Since a cloud exit depends on an enormous number of factors and there can be many dependencies for an application (especially in an enterprise environment), my goal is not to promise a solution that solves everything with just a Next/Next/Finish approach.
Many Thanks,
Bence.
https://redd.it/1gayf4t
@r_devops
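None of the following is the exitcloud.io implementation; purely to illustrate steps 2-4 of the building blocks above, here is a toy rule set that scores a resource inventory for exit risk. The resource types, rules, and weights are invented for the sketch.

```python
# Toy exit-risk scoring over a resource inventory (steps 2-4 above).
# Types and weights are invented: lower means easier to move off-cloud.
PORTABILITY_RISK = {
    "virtual_machine": 1,      # easy to rehost elsewhere
    "managed_sql": 3,          # the schema moves, but provider features may not
    "serverless_function": 5,  # provider-specific runtime and triggers
}

def assess(inventory):
    """Score each resource and return a total plus per-resource findings."""
    findings = []
    for resource in inventory:
        # Unknown types get a cautious high score rather than being skipped.
        risk = PORTABILITY_RISK.get(resource["type"], 4)
        findings.append({"name": resource["name"], "risk": risk})
    total = sum(f["risk"] for f in findings)
    return total, findings

inventory = [
    {"name": "web-vm", "type": "virtual_machine"},
    {"name": "orders-db", "type": "managed_sql"},
    {"name": "mailer", "type": "serverless_function"},
]
total, findings = assess(inventory)
```

A real engine would join this against the cost inventory and the chosen exit-strategy type, but the shape, rules applied per resource and rolled up into a report, is the same.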
Video: What is Crossplane + Demo 🍭
Watch here - https://youtu.be/C8yUfpmnosw
https://redd.it/1gayjvc
@r_devops
Using ServiceConnection env variables
Hi there,
I've been trying to wrap my head around this. I'm fairly new to DevOps; so far I've been placing variables (such as tenant ID, client ID, etc.) in the scripts themselves.
Then I figured out a way to create one variables.yaml file per tenant, which already made things a bit nicer.
Now I've run into something I can't seem to get to work.
If I understand correctly, I should be able to extract info such as the tenant ID and client ID, but also the access token, from the Service Connection I've configured for the project in DevOps, using these $env: variables:
$env:AZURE_TENANT_ID
$env:AZURE_CLIENT_ID
$env:AZURE_ACCESS_TOKEN
I've modified my main.yaml to set addSpnToEnvironment to true.
I've added them as arguments to the script line.
Yet when running the pipeline, the script still returns these variables as empty.
The App Registration has API permissions for Directory.Read.All and Application.Read.All
So I believe that should be sufficient.
Can anyone please help me along? I'm starting to chase my own tail right now, ending up in circles with things I've already tried :)
Purpose of the script: create a test script to figure out how to send emails from DevOps pipelines using the Graph API. In the end we want to use this for all sorts of automated tasks (cleaning up inactive devices, verifying specific SAML settings for enterprise apps, whatever else you can think of that can be scripted to reduce the daily workload of repetitive tasks).
Right now the PS1 is a bit of a mess after a full day of testing and modifying.
MAIN.YAML:
trigger: none

schedules:
- cron: "0 0 1 * *" # Run at midnight on the first day of every month
  displayName: Run once a month
  branches:
    include:
    - main
  always: true

pool:
  vmImage: 'windows-latest'

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'Repo-EntraID'
    scriptType: 'ps'
    addSpnToEnvironment: true
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Call the SendMailMessage script with the environment variables
      .\SendMailMessage\SendMailMessage.ps1 -AccessToken $env:AZURE_ACCESS_TOKEN -TenantId $env:AZURE_TENANT_ID -ClientId $env:AZURE_CLIENT_ID
  displayName: 'Send Email using Microsoft Graph and Service Connection'
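A note for anyone reading along: as far as I can tell, the AzureCLI task does not populate `AZURE_TENANT_ID`-style variables at all, which would explain the empty values. With `addSpnToEnvironment: true` and a secret-based service principal, the variables exposed inside the script step are `servicePrincipalId`, `servicePrincipalKey`, and `tenantId` (with workload identity federation, `idToken` replaces `servicePrincipalKey`), and a Graph access token can be fetched with `az account get-access-token`. A minimal sketch of just the inline script, reusing the pipeline above (the script's parameter names are the poster's):

```yaml
    inlineScript: |
      # Variables actually set by addSpnToEnvironment (secret-based service principal):
      #   $env:servicePrincipalId, $env:servicePrincipalKey, $env:tenantId
      $token = az account get-access-token --resource https://graph.microsoft.com --query accessToken -o tsv
      .\SendMailMessage\SendMailMessage.ps1 -AccessToken $token -TenantId $env:tenantId -ClientId $env:servicePrincipalId
```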
SendMailMessage.ps1
param (
    [string]$TenantId,
    [string]$ClientId,
    [string]$AccessToken
)

# Convert the access token to a secure string
Write-Host "Converting access token to secure string..."
$secureAccessToken = ConvertTo-SecureString $AccessToken -AsPlainText -Force

# Parameters for the email
$EmailSender = 'servicepunt@<domainname>'
$Recipient = '<my own mailaddress>'
$Subject = 'DevOps mail'
$Body = 'This is a mail from DevOps MDK'

# Show parameters
Write-Host "Starting script execution..."
Write-Host "From: $EmailSender"
Write-Host "To: $Recipient"
Write-Host "Subject: $Subject"
Write-Host "Body: $Body"
Write-Host "TenantID: $TenantId"
Write-Host "ClientID: $ClientId"
Write-Host "TenantID env: $env:AZURE_TENANT_ID"
Write-Host "ClientID env: $env:AZURE_CLIENT_ID"

# Check if AccessToken is empty
Write-Host "Checking if AccessToken is empty..."
if ([string]::IsNullOrWhiteSpace($AccessToken)) {
    Write-Error "AccessToken is empty. Please check your service connection and ensure it has the necessary permissions."
    exit 1 # Exit the script with a non-zero status code
}

Write-Host "Connecting to Microsoft Graph..."
Connect-MgGraph -AccessToken $secureAccessToken -NoWelcome

# Prepare headers for further API calls
Write-Host "Preparing headers for API calls..."
$header = @{
    'Authorization' = "Bearer $AccessToken"
}

# Verify connection to Microsoft Graph
Write-Host "Verifying connection to Microsoft Graph..."
try {
    $graphProfileUrl = "https://graph.microsoft.com/v1.0/me"
    $profileResponse = Invoke-RestMethod -Uri $graphProfileUrl -Method Get -Headers $header
    Write-Host "Successfully connected to Microsoft Graph. User profile information retrieved:"
    Write-Host "User Display Name: $($profileResponse.displayName)"
} catch {
    Write-Error "Failed to connect to Microsoft Graph with the provided AccessToken: $_"
    exit 1 # Exit the script with a non-zero status code
}
# Microsoft Graph API URL for sending mail
$mailSendUrl = "https://graph.microsoft.com/v1.0/users/$EmailSender/sendMail"
# Compose Email
Write-Host "Composing email..."
$emailBody = @{
    message = @{
        subject = $Subject
        body = @{
            contentType = "Text"
            content = $Body
        }
        toRecipients = @(
            @{
                emailAddress = @{
                    address = $Recipient
                }
            }
        )
        from = @{ # Specify the sender
            emailAddress = @{
                address = $EmailSender
            }
        }
    }
}
# Send Email using Microsoft Graph API
Write-Host "Sending email using Microsoft Graph API..."
try {
    # -Depth is needed so the nested hashtables aren't truncated at ConvertTo-Json's default depth of 2
    $response = Invoke-RestMethod -Uri $mailSendUrl -Method Post -Headers $header -Body ($emailBody | ConvertTo-Json -Depth 10) -ContentType "application/json"
    # Invoke-RestMethod throws on HTTP errors, so reaching this point means the call succeeded
    # (sendMail returns 202 Accepted with an empty body, so there is no StatusCode to inspect here)
    Write-Host "Email sent successfully."
} catch {
    Write-Error "An error occurred while sending the email: $_"
}
https://redd.it/1gb2835
@r_devops
Why should I use ArgoCD and not Terraform only?
Hey everyone,
I'm digging into the Gitops topic at the moment, just to understand the use-cases where it's useful, when not ideal etc.
Currently, I have fully Terraformed infrastructure. That includes multiple Kubernetes projects, each project with multiple environments, and each environment of each project on a dedicated AWS account.
All of it is deployed through GitHub Actions, using Terraform. My build stage pushes Docker images to the GitHub registry (or AWS ECR). Then Terraform applies modules one after the other (network config, then cluster config, then application config). The image ID is passed from the build to Terraform as an input variable, so Terraform detects the diff and applies it.
Using HPA/PDB/Karpenter, we manage to keep our environments running at all times, even when a faulty image is deployed (pods are not all rolled out): the pipeline fails, so the new image is not deployed.
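For reference, the image-ID handoff described above is, mechanically, just a Terraform input variable; a minimal illustrative sketch (variable and module names are made up, not the poster's actual code):

```hcl
# Illustrative only: the image tag produced by the build stage is passed in,
# e.g. `terraform apply -var="image_tag=sha-abc123"` from the GitHub Actions job.
variable "image_tag" {
  type = string
}

module "application" {
  source    = "./modules/application"
  image_tag = var.image_tag # Terraform sees the new tag as a diff and rolls it out
}
```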
This setup works fine, and we're happy about it.
What would ArgoCD bring to the table that I'm missing?
What are the scenarios, where our deployment wouldn't be as good as an ArgoCD one?
Thanks!
https://redd.it/1gb3rwn
@r_devops
Using zstd compression with BuildKit - decompresses 60%* faster
Last week I did a bit of a deep dive into BuildKit and Containerd to learn a little about the alternative compression methods for building images.
Each layer of an image pushed to a registry by Docker is compressed with `gzip` compression. This is also the default for `buildx build`, but we have a little more control with `buildx` and can select either `gzip`, `zstd`, or `estargz`.
I plan to do an additional deep dive into `estargz` specifically because it is a bit of a special use-case. Zstandard though, is another interesting option that I think more people need to be aware of and possibly start using.
>What is wrong with Gzip?
Gzip is an old but gold standard. It's great but it suffers from legacy choices that we don't dare change now for reliability and compatibility. The biggest issue is `gzip` is a single-threaded application.
When *building* an image with gzip, your builds can be substantially slower because `gzip` simply can't take advantage of multiple cores. This is likely not something you would have noticed without a comparison, though.
When *pulling* an image, whether locally or as part of a deployment, the image's layers need to be extracted, and this is the most critical point. Faster decompression means faster deployments.
`gzip` is single-threaded but there is a parallel implementation of `gzip` called `pigz`. Containerd will attempt to use `pigz` for *decompression* if it is available on the host system. Unlike `gzip` and `zstd` which both have native Go implementations built into Containerd, interestingly it will reach out for an external `pigz` binary.
For compatibility and legacy reasons, Docker/Containerd has not implemented `pigz` for compression. The compression of `pigz` is essentially the same as `gzip` but scales in speed with the number of cores.
There is however, another compression method `zstd` which is natively supported, multi-threaded by default, and most importantly, decompresses even faster than `pigz`.
>How do I use `zstd`?
docker buildx build . --output type=image,name=<registry>/<namespace>/<repository>:<tag>,compression=<compression method>,oci-mediatypes=true,platform=linux/amd64
When using the `docker buildx build` (or `depot build` for depot users) you can specify the `--output` flag with a `compression` value of `zstd`.
>How much better is zstd than gzip?
Really answering this question requires knowledge of your hardware, and depends on whether we are talking about the builder or the host machine. In either case, the tl;dr is: more cores == better.
I ran some synthetic benchmarks on a 16 core vm just to get an idea of the differences. You can see the fancy graphs and full writeup in the [blog post](https://depot.dev/blog/building-images-gzip-vs-zstd).
Skipping to just the [decompression comparison](https://depot.dev/blog/building-images-gzip-vs-zstd#comparison-of-decompression-times) portion, there is a roughly 50% difference in speed going from `gzip`, to `pigz`, to `zstd` at every step.
|Decompression Method|Time (ms)|
|:-|:-|
|gzip|25341|
|pigz|14259|
|zstd|6108|
Meaning, even if `pigz` is installed on your host machine now, which is not a given, you are still giving up a 50% speed increase if you haven't switched to `zstd` (on a 16 core machine, it may be more or less depending).
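For a quick sanity check, here's the arithmetic on the table's numbers (times copied from above):

```python
# Decompression times from the benchmark table, in milliseconds.
gzip_ms, pigz_ms, zstd_ms = 25341, 14259, 6108

# Relative reduction at each step: roughly "50%" each time, as described.
pigz_vs_gzip = 1 - pigz_ms / gzip_ms   # ~0.44: pigz cuts gzip's time by ~44%
zstd_vs_pigz = 1 - zstd_ms / pigz_ms   # ~0.57: zstd cuts pigz's time by ~57%
zstd_vs_gzip = 1 - zstd_ms / gzip_ms   # ~0.76: zstd cuts gzip's time by ~76% overall

print(f"pigz vs gzip: {pigz_vs_gzip:.1%}")
print(f"zstd vs pigz: {zstd_vs_pigz:.1%}")
print(f"zstd vs gzip: {zstd_vs_gzip:.1%}")
```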
Are you wondering how long it took to compress these images? Let's leave out `pigz` since it can't actually be used by Docker.
|Compression Method|Time (ms)|
|:-|:-|
|gzip|163014|
|zstd|14455|
That is 90% faster compression. 90%... Nine followed by a zero.

But you are thinking: there must be a trade-off in compression ratio. Let's check. The image we are compressing is 5.18GB uncompressed.
|Compression Method|Compressed Size (GB)|
|:-|:-|
|gzip|1.5|
|zstd|1.32|
Nope. 90% faster than gzip, smaller file, 60% faster to decompress.
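The same arithmetic for the compression side, using the two tables above:

```python
# Compression times (ms) and output sizes (GB) from the tables above.
gzip_ms, zstd_ms = 163014, 14455
gzip_gb, zstd_gb = 1.5, 1.32

time_saving = 1 - zstd_ms / gzip_ms  # ~0.91: zstd compresses ~91% faster
size_saving = 1 - zstd_gb / gzip_gb  # 0.12: zstd's output is also 12% smaller

print(f"compression time saved: {time_saving:.1%}")
print(f"compressed size saved:  {size_saving:.1%}")
```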
# Conclusion
Zstandard is nearly universally a better choice in today's world, but it's always worth running a benchmark of your own using your own data and your own hardware to ensure you are optimizing for your specific situation. In our tests, we saw a [60% decompression speed increase](https://depot.dev/blog/building-images-gzip-vs-zstd#conclusion) and that's ignoring the *massive* savings in the build stage where we are going from a single-threaded application to a multi-threaded one.
https://redd.it/1gb4e98
@r_devops
Re: Container orchestration vs. VM orchestration
Hello devops! I wanted to start a new post in the same area as:
https://www.reddit.com/r/devops/comments/1bshdqx/containerorchestrationvsvmorchestrationin/
but ask a slightly different question: does anyone have a favorite way to orchestrate VMs as if they were pods, with a kubectl-like CLI tool for them?
Things I want are:
1. No container, no Dockerfile; I want my code to run directly on the VM.
2. Just a simple bash script that goes in startup_script. Here is a Pulumi example for GCP:
jammy = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240208"
compute_instance = gcp.compute.Instance(
    "aa-aug-23-2024",
    machine_type="e2-micro",
    zone=zone,
    metadata_startup_script=startup_script,
    metadata={
        "enable-oslogin": "false",
        "ssh-keys": "thekey",
    },
    boot_disk=gcp.compute.InstanceBootDiskArgs(
        initialize_params=gcp.compute.InstanceBootDiskInitializeParamsArgs(
            image=jammy,
            size=30,
            type="pd-ssd",
        )
    ),
)
3. Be able to list all my running VMs (as if they were pods) and get logs, spin up more, spin down to fewer, etc.
Is anyone doing this, and is it catching on as a real kubectl alternative? I feel like I would have to hack together Pulumi logic or specific AWS/GCP CLI commands, and there isn't really a "back to VM" movement yet. Or is Nomad the tool for this? What tool out there is really trying to make this happen?
https://redd.it/1gb3of0
@r_devops