3 YOE, a bit quiet 4 months into my new role. What to focus on?
For reasons out of my control (and my manager knows), I don't have much work going on. My manager has outlined my eventual responsibilities, but those are some time away. You might think this is the best thing ever, but it is driving me insane. I've always been used to putting out fires constantly, albeit in interesting lines of work, but I was always severely underpaid.
Fast forward to now: I am paid double compared to my previous role, but the caveat is I don't have much to do. The standard suggestion would be to get a cert of some sort, but I have 4 AWS certs and recently did the CKA, so I am bored out of my mind and cannot face another round of multiple-choice exam preparation.
I am strong in AWS, Kubernetes, networking, and Docker, and did a lot of Python some time ago, but it dried up. What should I focus on? Should I try to learn Azure? I also did a lot of CI/CD previously, including Jenkins, GitLab, and AWS pipelines.
TL;DR - Happy with the salary, but not much work at the moment. Worried about skill atrophy and also sick of studying for certs.
https://redd.it/1dyfy9j
@r_devops
Passed KCNA. But have a few questions!
So, I passed my KCNA exam yesterday. I just received an email saying that I achieved the passing score and can download my cert from the portal; however, I can't see my score on the portal even 24 hours after finishing the exam. Also, now that I have passed this exam, could y'all guide me on which cert I should prepare for next? I have 3 weeks before my college starts.
https://redd.it/1dyi8u8
@r_devops
How much do you care about the cloud infra costs that your company incurs?
As an SRE, architect, or software developer, how much do you care about the cloud costs that the products you build and support incur? Do you think much about cloud cost when designing systems? Do you ever look at the cost once a product/feature is in production?
https://redd.it/1dyj1zf
@r_devops
Improve website performance using cache
Hello everyone
I have an ecommerce website developed with Nuxt 2 and an API using the Symfony framework (PHP), all hosted on AWS using Elastic Beanstalk, with Cloudflare in front for protection.
The performance of Nuxt SSR is not great, so my idea was to add a cache on top of it. In another (self-hosted) project I did cache invalidation using cache tags, and for that I used Varnish. I was trying to avoid Varnish and just use Cloudflare, but I just found out that tag-based cache invalidation only works on the Enterprise plan, and that's out of our budget. I also thought about dropping Cloudflare and just using CloudFront with its WAF (from what I read, not as good as Cloudflare's), but CloudFront doesn't support cache-tag invalidation out of the box either.
So now I'm a little stuck for ideas. My thoughts are:
1) Create some sort of middleware that stores the tags and the URLs where they were used (maybe in Redis?) and then does the cache invalidation of those URLs. (I believe Cloudflare only allows 30 URLs per purge request, so it could take many API requests to accomplish.)
2) Place Varnish in the AWS infrastructure? (Cloudflare -> Varnish -> ELB)
3) Ditch Cloudflare for some other CDN solution?
Any feedback would be very welcome
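A minimal sketch of option 1, using a plain dict as an in-memory stand-in for Redis (in production you'd use Redis sets keyed by tag). The 30-URL batch size comes from the limit mentioned in the post, and `purge_urls` is a hypothetical callback that would POST to Cloudflare's purge-by-URL endpoint:

```python
from itertools import islice

CF_PURGE_BATCH = 30  # Cloudflare's per-request URL purge limit, per the post

class TagIndex:
    """Maps cache tags to the URLs that used them (Redis-set semantics)."""
    def __init__(self):
        self._tags = {}  # tag -> set of URLs

    def record(self, url, tags):
        """Called by the cache middleware when a page is stored."""
        for tag in tags:
            self._tags.setdefault(tag, set()).add(url)

    def invalidate(self, tag, purge_urls):
        """Purge every URL recorded under `tag`, in CDN-sized batches.

        `purge_urls(batch)` is a callback, e.g. a POST to
        /zones/{zone}/purge_cache with {"files": batch}.
        Returns the number of URLs purged.
        """
        urls = sorted(self._tags.pop(tag, set()))
        it = iter(urls)
        while batch := list(islice(it, CF_PURGE_BATCH)):
            purge_urls(batch)
        return len(urls)
```

For example, 70 URLs recorded under one tag would be purged in three API calls (30 + 30 + 10), which is the "many API requests" cost the post anticipates.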
https://redd.it/1dyj7f3
@r_devops
Need advice on reducing the size of a monolith repository
I have a big monolithic (~75 GB) Git repository for a particular desktop application. The repo has several Gradle modules tightly coupled to each other in terms of their dependencies and exports. All the modules have unit tests which use test data (XML, properties, etc.) for their validation. The unit-test data for all tests is around 30 GB.
Apart from this, there are several automation tests in a separate folder within the repo. The automation tests also use certain test data (XML, ZIP, etc.) for their import/validation processes. The automation test data occupies another 35 GB of disk space.
I need advice on how to reduce the size of the repository. Should I split the repository into microservices? That seems challenging due to the tight coupling of code across modules, but I still don't want to rule the option out.
Extracting the automation test data and putting it into an S3 bucket/Artifactory seems like a viable way to reduce the automation test data. Extracting the unit-test data seems very challenging, as thousands of tests would need to be refactored to fetch their test data from another source.
Has anyone else faced a situation where their monorepo was this bloated? Our entire infrastructure is hosted on-prem with in-house CI/CD pipelines. The tight coupling of modules within the repo, and of the CI/CD infrastructure with the repository layout, makes this challenging to address.
Appreciate any inputs/advice on how to go about this. Thanks!
https://redd.it/1dyn8iu
@r_devops
How to upskill myself?
Hi all. I am with a Big 4 company working as a QA engineer. My project, where I was doing automation using Java, TestNG, and Selenium WebDriver, just ended. Before that, Python and Robot Framework. I have 10 years of experience doing the above. Now they are saying QA alone isn't going to fly and I have to upskill. I am looking at DevOps as one of the options. So how do I go about upskilling myself in the DevOps domain? I did the CKA certification.
https://redd.it/1dynzm0
@r_devops
GitHub Actions Docker tag strategy
I am currently setting up GitHub Actions and ran into an interesting issue.
I have two workflows:
1. pr-open.yml: Triggered on a pull request. Builds a Docker image and pushes it to the registry.
2. merge.yml: Triggered on a merge. Pulls the image and deploys it.
At first, I tagged the image with the Git commit SHA. But during a 'squash and merge,' a new commit SHA is created, and merge.yml doesn't know which image to pull.
Then I thought of using the pull request number as the Docker tag, but during the merge I can't retrieve the pull request number. What is the best practice for Docker tagging in this case?
https://redd.it/1dyp77j
@r_devops
Python Has Too Many Package Managers
Note: I wrote this article
https://redd.it/1dytcq7
@r_devops
What should I learn first?
I'm a Spring Boot dev. I don't know much about Linux, only the basics of the basics.
I work with Docker (Dockerfiles, Docker Compose, ...) and I want to start learning about DevOps
and the process of deploying the apps that I create.
What should I learn first? It is very overwhelming because of the sheer number of tools that exist.
I don't want to master one thing; I just want to grasp a little bit of everything, because I believe mastering one thing isn't the optimal way of learning.
So if you can guide me with a roadmap of what to learn, I would appreciate it.
https://redd.it/1dyu4tx
@r_devops
Can I use an M1 MacBook Air for DevOps?
I'd like to run 1 or 2 VMs
and handle all the basic DevOps tasks.
I have a budget of 60k to 75k; please let me know your suggestions.
https://redd.it/1dyvayg
@r_devops
Best Practices for Managing a Large Number of Subscriptions?
I manage around 14 Azure subscriptions, and the number is expected to keep growing. Most of them were created by developers before I joined the team, so they were built via click-ops. I'm trying to push the move to IaC.
Originally I had the idea of creating a repo for each subscription, but it proved quite tedious to configure, and most aren't being utilised anyway. I now have a new idea: a factory. A single pipeline with branches for each of our common templates. With the factory, a developer could run the factory pipeline, select the "App Service Plan" branch, enter the required parameters (subscription name, name of the project, etc.), and it would just spit out an App Service Plan into the chosen subscription.
I think this would be a great experience for the developers, as it would all be GUI based, but it means the infrastructure isn't actually recorded in code; it's just a handful of templates that are frequently used to push things out.
I was wondering what more experienced people think of this idea. Would it be considered bad practice from an auditability perspective? I am really struggling to find anything about IaC best practices in general, so anything you can share would be great.
Thank you!
https://redd.it/1dykmdb
@r_devops
Seeking Advice: Backend Services for (Flutter) Mobile Apps
Hello fellow developers,
I'm reaching out to gather some insights on backend services for Flutter mobile apps. There's an overwhelming amount of information available, and I would greatly appreciate some clarity on a few points.
Specifically, I'm interested in the differences between using Firebase and a self-hosted solution (such as using AWS).
Firebase: It offers a lot of out-of-the-box solutions, which can be very appealing. However, I've heard that it can get quite costly, especially when it comes to downloading files. Would this be a significant concern for my clients?
Self-Hosted Solutions: On the other hand, these can potentially offer more control and scalability. But I'm curious about the additional effort required to set up and maintain such a solution. Is it worth it to offer this to my clients?
For context, I'm looking to develop apps for businesses and am trying to provide them the most value possible. I'm wondering whether it's worth offering self-hosted solutions or sticking with Firebase. I am not concerned about anything but value for them.
I'd love to hear your experiences and recommendations regarding these backend options. Does it really matter which one I choose, given the specifics of my situation? Any feedback or advice would be greatly appreciated!
Thank you in advance for your help!
https://redd.it/1dytmdj
@r_devops
I want to treat my IIS logging like my container logging, with a log collector?
Hi,
I'm thinking that instead of my web app logging directly to Elasticsearch (ES), it ought to write to stdout, and then I'd have a log collector (Logstash/Fluentd?) for each of my sites (100+) shipping those logs to ES. I've got something like this working by shipping to the event log and forwarding with Winlogbeat, but it doesn't feel right, not least because my Windows event logs/discs are spinning trying to keep up with the events per second I want to ship. Is this the right approach, or should I write to stdout/stderr and have a different collector do the shipping for me without my discs spinning so much? Thanks.
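If the app writes one JSON object per line to stdout, any tailing collector (Fluentd, Filebeat, etc.) can ship the lines to ES without the Windows event log in the path. A minimal stdlib-only sketch of such a formatter; the field names are illustrative, not anything ES requires:

```python
import json
import logging
import sys

class JsonLineFormatter(logging.Formatter):
    """Render each log record as a single JSON line for a collector to tail."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),  # applies % args, if any
        })

def stdout_logger(name: str) -> logging.Logger:
    """A logger whose only destination is stdout (one JSON line per event)."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonLineFormatter())
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

The app stays ignorant of where logs end up; swapping Logstash for Fluentd (or pointing at a different ES cluster) becomes purely a collector-config change.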
https://redd.it/1dyyj4u
@r_devops
Azure Container App deployed from GHCR with Github Actions does not make a new revision
I am trying to build a container and then deploy it to ACA via GitHub Actions. I use a SHA tag, not latest, and the run says "Your container app hyde has been created and deployed! Congrats!" with all of the correct image names and tags, but in Azure I see no evidence of a new revision being made. Please see my run here: https://github.com/r-Techsupport/hyde/actions/runs/9850292558/job/27195481063
https://redd.it/1dyznki
@r_devops
Flyway for database migration
I am currently learning how to adopt Flyway with our current technology stack. As of now we use a locally developed tool for database migration, but since it was developed 10 years ago, it is showing its limitations and no one wants to touch the original source code, so we opted to look for a tool that does the same job; so far Flyway is our first option. Just to clarify: are we able to save versions of schemas and access those older versions for testing? As of now, all I am seeing is that you have to do it manually, since Flyway does versioned migrations, and accessing an older version is harder than when done manually. Is that correct?
https://redd.it/1dyygv3
@r_devops
Should I keep my CCNP current or let it go?
I'm wanting to spend my final years in tech (like my last 20 or so years, lol) working with cloud and DevOps. I like the environment better than the network teams I've been on over the years; I like working with code more, and honestly, I like the cloud more. That being said, it's come time to renew my CCNP, and I'm running out of time to do it with CEs. I'm honestly thinking of just letting it go. I'm starting to really hate Cisco and the money-grabbing thing they've become anyway. Is it important to keep if I want to make this transition? I'm thinking of focusing more on AWS certs if I want to show potential employers certifications.
https://redd.it/1dz1iuk
@r_devops
Going from 30 to 30 Million SLOs: Observability Meetup
Hey everyone!
If you're in London (UK) this week and are interested in Observability, make sure you drop by the Observability Engineering Meetup on Thursday, July 11th.
Alex, Senior Site Reliability Engineer at Google, will present the evolution of Service Level Objectives (SLOs) for the GCE Compute API over the past eight years. He'll start with the initial 30 SLOs, move through a phase with around a thousand, and end with millions of per-customer SLOs. He'll share anecdotes, techniques for handling low-QPS (continuous over discrete metrics), and strategies for aggregating data to enhance leadership visibility. He'll also give practical tips for running and improving this system in production.
You can RSVP here: https://www.meetup.com/observability_engineering/events/301637095/
See you there :D
https://redd.it/1dz17p7
@r_devops
Got charged for DB2 after free trial credits ended - no invoice was sent before.
In April this year, I started a DB2 instance purely for exploration as I was looking for jobs, and DB2 was one of the required elements for that job. Soon after, I forgot about it, and the instance kept running.
Fast forward to this month, and I see a charge of $140 on my Amex from IBM Canada. I instantly realized something was left running, so I logged into IBM Cloud, hopped on chat support, and got the instance deleted. I inquired about a refund as it was a genuine mistake, and the agent asked me to create a ticket. While logged in, I noticed that there were no invoices issued for April and May. They billed me for June and issued me an invoice in July after I got charged.
The ticket I created was declined for a refund, as the analyst said she couldn't do anything about it since I had entered my card details and upgraded to a pay-as-you-go account. I argued with her about why I was not issued an invoice for April and May. I have $0.00 invoices from GCP; aren't they legally obligated to issue an invoice for services being used? I checked my audit logs and showed her that I hadn't logged in since the day after I created my account, except now to create the ticket. I insisted on talking to a senior agent, but it doesn't look like she is going to comply, and I have another $40 charge coming next month for 8 days of usage in July. The support staff seems to be outsourced, and from the conversation it doesn't appear they are going to issue a refund or credit.
Is there an escalation system at IBM support, or am I left for dead? I am considering disputing the charge with Amex. What would be a strong reason to win this dispute? I don't care if my IBM account gets banned; I just want to limit my losses. It doesn't look like a lot, but for a person searching for a job, it hurts.
https://redd.it/1dz5587
@r_devops
Load balancing Airbyte workloads across multiple Kubernetes clusters
How do you load balance long-running Kubernetes workloads across multiple clusters?
At Airbyte, as part of supporting multiple geographic regions for data replication workloads, we adopted a control-plane/data-plane architecture. A control-plane orchestrates data movement workloads across multiple data-planes. Technically speaking, each plane is a Kubernetes cluster.
Our solution to load-balancing workloads across multiple data-planes is to push the responsibility down to the data-plane itself. We enqueue workloads in a single job queue and let the data-planes compete for jobs to process when they have capacity to do so. This treats capacity as a problem local to each cluster, removes the complexity of planning ahead for available resources, and keeps operations simple when facing cluster downtime.
https://redd.it/1dz4tih
@r_devops
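The pull-based model described above can be sketched in a few lines of Python. This is an illustrative toy, not Airbyte's actual implementation: each "data plane" is a thread that atomically claims jobs from a shared queue only while it has spare capacity, so no central scheduler needs to know per-cluster headroom.

```python
import queue
import threading

class DataPlane:
    """Toy stand-in for a data-plane cluster competing for work."""
    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity       # max jobs this plane will accept
        self.processed: list[str] = []

    def run(self, jobs: queue.Queue) -> None:
        while len(self.processed) < self.capacity:
            try:
                # get_nowait() is an atomic claim: each job goes to exactly
                # one plane, which is what makes competing consumers safe.
                job = jobs.get_nowait()
            except queue.Empty:
                break                  # queue drained; stop competing
            self.processed.append(job)
            jobs.task_done()

# Single shared job queue, as in the control-plane/data-plane design.
jobs = queue.Queue()
for i in range(10):
    jobs.put(f"sync-{i}")

planes = [DataPlane("us", capacity=6), DataPlane("eu", capacity=6)]
threads = [threading.Thread(target=p.run, args=(jobs,)) for p in planes]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every job is claimed exactly once across planes; the split between
# planes depends on timing, mirroring capacity-driven load balancing.
all_jobs = [j for p in planes for j in p.processed]
```

The key property is that capacity stays a local decision: a plane that is full (or down) simply stops pulling, and the remaining planes absorb the work without any coordinator replanning.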
Gitlab - Syft/grype: Are there any GOOD resources to learn how to set up?
I'm new to DevOps. I, along with a coworker, am tasked with setting up container vulnerability scanning and SBOM generation. I've been looking for a decent video or webpage that goes over the implementation of syft and grype but have failed to find one. Even the one posted in the documentation section refers to a video that I don't think helped me much. It could be that I just don't understand what exactly I am supposed to take from our AWS EKS images/containers to put into the .gitlab-ci.yml file. Does anyone have any tips and/or sites they can refer me to so I can get a better understanding of the steps involved? And before you ask, no, we don't have the option of using an alternative. This is what THEY want and paid for (GitLab Ultimate).
https://redd.it/1dz2vwc
@r_devops
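For orientation, a minimal `.gitlab-ci.yml` sketch of the syft-then-grype flow being asked about. Assumptions: the image has already been built and pushed to the project registry, and the `$CI_REGISTRY_IMAGE`/`$CI_COMMIT_SHORT_SHA` variables (standard GitLab CI) identify it; stage names and tags are illustrative, and you'd pin tool versions in practice.

```yaml
# Job 1: generate an SBOM from the pushed image with syft.
sbom:
  stage: test
  image:
    name: anchore/syft:latest     # official syft image; pin a version
    entrypoint: [""]
  script:
    - syft "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" -o cyclonedx-json=sbom.json
  artifacts:
    paths: [sbom.json]

# Job 2: scan the SBOM with grype instead of re-pulling the image.
scan:
  stage: test
  needs: [sbom]
  image:
    name: anchore/grype:latest
    entrypoint: [""]
  script:
    - grype sbom:sbom.json --fail-on high   # fail on high-severity findings
```

The point of splitting the jobs is that the SBOM is generated once, kept as an artifact, and can be re-scanned later as new CVEs are published without rebuilding anything.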
Cloud Deploy Skaffold overwriting Terraform
Hello, does anyone have experience using Cloud Deploy / Skaffold in conjunction with Terraform?
I'm setting up a Cloud Deploy pipeline for the first time (previously had a simple Cloud Build setup for deployments). However, I noticed that my server configuration defined in Terraform (e.g. scaling, service account, etc.) is being overwritten by new Cloud Deploy releases.
Question: Is there a way for Cloud Deploy / Skaffold to only update the container's image while leaving all other parts of the configuration alone, to be managed by Terraform?
skaffold.yaml:
apiVersion: skaffold/v3alpha1
kind: Config
deploy:
  cloudrun: {}
profiles:
  - name: development
    manifests:
      rawYaml:
        - run-development.yaml
run-development.yaml:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    spec:
      containers:
        - image: image
I can migrate all the config to skaffold, but I'd prefer to keep it in Terraform.
https://redd.it/1dzafii
@r_devops
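One commonly used pattern for this kind of split ownership (an assumption here, not something from the post) is Terraform's `lifecycle.ignore_changes`, so Terraform stops reconciling the fields that Cloud Deploy rewrites on each release. A sketch for a `google_cloud_run_service` resource; names, region, and image are hypothetical:

```hcl
resource "google_cloud_run_service" "my_service" {
  name     = "my-service"
  location = "us-central1"                 # illustrative region

  template {
    spec {
      containers {
        image = "gcr.io/my-project/app:initial"  # Cloud Deploy replaces this
      }
      service_account_name = "runtime@my-project.iam.gserviceaccount.com"
    }
  }

  lifecycle {
    # Let Cloud Deploy own the revision template without Terraform
    # reverting it on the next apply.
    ignore_changes = [template]
  }
}
```

The trade-off is that ignoring the whole `template` also stops Terraform from managing the service account and scaling settings inside it; `ignore_changes` accepts narrower attribute paths if you only want to cede the image field.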