Too many pipelines in Synapse; can't create ARM template as it reaches max size
Too many pipelines in Synapse, and we can't create an ARM template because it reaches the maximum size.
We created PowerShell scripts to parametrize the templates, but this is causing many issues between environments.
We also looked at a Synapse extension which may help.
Any suggestions on best practices or clean solutions?
https://redd.it/13yclrg
@r_devops
Posted by u/degzs - No votes and no comments
Running tests against a different repository (CI/CD)
So I'll be upfront: my CI/CD/DevOps knowledge is minimal and my weakest area. I'm an automation architect working on implementing a test framework for a set of microservices.
Right now I have around 60-70 API tests that run (via Playwright). However, because of how the codebases are structured (they are all over the place), I have opted to keep all the tests together in a separate codebase specifically for automated testing.
However, I want the ability to run the tests in this repo when another repo (let's say Service A) does a commit or merge request (or maybe nightly).
Obviously this would be easier if the test code lived in the service repos themselves, but then I would be managing 8+ different test repos because of all the microservices; it just doesn't make sense.
The code lives on GitLab and I think we use Azure or TeamCity for our CI/CD pipelines. Is this even possible?
I imagine I could containerize the test repo with Docker and pull that down, or something?
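For what it's worth, GitLab supports exactly this through pipeline trigger tokens: a job in Service A's pipeline can POST to the test repo's trigger endpoint. A minimal Python sketch of building that request (the host, project ID, and token below are placeholders, and `urllib` is used to keep it dependency-free):

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_trigger_request(base_url: str, project_id: int, ref: str, token: str) -> Request:
    """Build the POST request that triggers a pipeline in another project.

    Mirrors GitLab's pipeline-trigger endpoint:
    POST /projects/:id/trigger/pipeline
    """
    url = f"{base_url}/api/v4/projects/{project_id}/trigger/pipeline"
    data = urlencode({"token": token, "ref": ref}).encode()
    return Request(url, data=data, method="POST")

# Example: a job in "Service A" could send this after a merge to main.
req = build_trigger_request("https://gitlab.example.com", 1234, "main", "glptt-placeholder")
print(req.full_url)
```

In practice you'd fire this (or an equivalent `curl`) from a job in each service's pipeline or from a nightly schedule; if both projects live on the same GitLab instance, the built-in multi-project `trigger:` keyword in `.gitlab-ci.yml` does the same thing without a token.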
https://redd.it/13ydfvi
@r_devops
Posted by u/mercfh85 - No votes and no comments
Tool to natively query cloud resources
Hey folks,
I wanted to share a tool I built. It lets you effortlessly query your cloud resources using natural language. No more dealing with complex syntax or clunky interfaces like the AWS console or kubectl commands.
Here's what it can do:
Ask questions like "Which users lack MFA on AWS?" or "How many publicly accessible APIs are there?"
Get quick answers without writing code or navigating complicated interfaces.
Seamlessly integrate with various cloud providers and K8s for comprehensive coverage.
It's perfect for non-technical team members, too.
Here is a preview: https://youtube.com/shorts/FN4UXhegOXE
DM me if you want to give it a try or have feedback.
https://redd.it/13yek07
@r_devops
Database mirroring.
Hi everyone,
I have two MSSQL Server instances (Docker instances) on two different machines, and I have an ASP.NET Core Web API that currently uses only one DB. The thing is, I want to keep both DBs synchronized, and I don't know if this can be done in the DB settings or from my ASP.NET Core app.
Thanks in advance.
https://redd.it/13yfmg1
@r_devops
Posted by u/Mysterious_Low9967 - No votes and 4 comments
DevSecCon24 FREE DevSecOps Conference
***FREE VIRTUAL CONFERENCE FOR DEVSECOPS***
📢 Calling all developers! 🚀
DevSecCon24 is just around the corner, and you don't want to miss these incredible sessions that will revolutionize your approach to secure coding and DevSecOps. Check out these must-attend sessions:
🔑 Keynote: "Human vs AI: How to ship secure code" by Joseph Katsioloudes (This topic is 🔥 hot 🔥 right now!)
🎤 "Container Security - Strengthening the Heart of Your Operations" by Siddhant Khisty & Kunal Verma
🎤 "SciFi to Reality: Use of AI in DevSecOps" by Sandip Dholakia
⚡ Lightning talk: "Security Testing During Ideation: A Hackathon Perspective" by Keith McDuffee
🎤 "Defending Your Cloud Native Apps Against the Serverless Top 10" by Raz Probstein
🎤 "Securing GitOps Pipelines: Open Source, Vendors, and Getting Things Done" by James Berthoty
🎤 "Tales from the real-world: Building cloud security programs that can actually shift left" by Jiong Liu & Sriya Potham
These sessions will equip you with cutting-edge insights, practical strategies, and innovative approaches to strengthen your code security and enhance your DevSecOps practices.
Don't miss out on this incredible opportunity to learn from industry experts and connect with fellow developers. Grab your FREE ticket now.
Got any questions? Feel free to DM us, check out our website, and follow us on social media! Register now!
https://redd.it/13yigir
@r_devops
Apprenticeships
I started taking a full stack developer course online last year and I’m starting to job search in this industry. Does anyone know of companies with apprenticeship programs or maybe companies that train new hires? I feel pretty comfortable with HTML, CSS and JavaScript but I’m no expert and still can learn a lot more.
https://redd.it/13yjgj2
@r_devops
Posted by u/Emendozav10 - No votes and 1 comment
Querying Kubernetes Pods with Non-Empty Host Paths using Selefra GPT
## Introduction:
In the world of container orchestration, Kubernetes has become the de facto standard for managing containerized applications at scale. As organizations increasingly adopt Kubernetes, ensuring the security and proper configuration of their clusters is crucial. In this article, we will demonstrate how to use Selefra GPT, a powerful policy-as-code tool, to query Kubernetes pods with non-empty host paths.
**Understanding Selefra GPT:**
Selefra GPT is an open-source policy-as-code software that leverages the power of GPT models for infrastructure analysis in multi-cloud and SaaS environments, as well as Kubernetes clusters. By using Selefra GPT, organizations can gain valuable insights into their infrastructure's security posture and make informed decisions to enhance their overall security.
**Querying Kubernetes Pods with Non-Empty Host Paths:**
A common requirement in managing Kubernetes clusters is to identify pods with specific configurations, such as those with non-empty host paths. Selefra GPT enables users to define policies using SQL and YAML syntax, making it easier to express complex rules and perform targeted queries. By utilizing Selefra GPT, you can efficiently query pods with non-empty host paths and gain insights into your cluster's configuration.
**Customizing Policies for Kubernetes:**
One of the key benefits of Selefra GPT is the flexibility to customize policies according to your organization's specific requirements and compliance standards. You can create policies for various aspects of your Kubernetes environment, such as ensuring proper resource utilization, implementing access controls, or monitoring container configurations, and manage those policies to align with your security objectives.
**Continuous Monitoring of Kubernetes Clusters:**
Kubernetes environments are dynamic, with resources being created, updated, and deleted frequently. Selefra GPT enables continuous monitoring by regularly analyzing your Kubernetes clusters and detecting any deviations from defined policies. This proactive approach ensures that configuration issues are promptly identified and addressed, reducing the window of vulnerability.
**Remediation and Compliance:**
Once configuration issues are identified, Selefra GPT provides actionable insights and recommendations to remediate them. You can prioritize your efforts based on the severity of the issues and follow the recommended steps to mitigate risks. Furthermore, Selefra GPT helps maintain compliance with industry standards and regulations by continuously evaluating your Kubernetes environment against defined policies.
## Install
First, you need to install Selefra by executing the following command:
brew tap selefra/tap
brew install selefra/tap/selefra
mkdir selefra-demo && cd selefra-demo && selefra init
## Choose provider
Next, choose the Kubernetes provider in the shell:
[Use arrows to move, Space to select, and enter to complete the selection]
[ ] AWS
[ ] azure
[ ] GCP
[✔] k8s # We choose Kubernetes installation
## Configuration
**Configure Kubernetes:**
Please refer to the [document](https://www.selefra.io/docs/providers-connector/kubernetes) to configure your Kubernetes connection in advance.
**Configure Selefra:**
After initialization, you will get a `selefra.yaml` file. Configure this file to use the GPT functionality:
selefra:
  name: selefra-demo
  cli_version: latest
  openai_api_key: <Your Openai Api Key>
  openai_mode: gpt-3.5
  openai_limit: 10
providers:
  - name: k8s
    source: k8s
    version: latest
## Running
You can use environment variables to store the `openai_api_key`, `openai_mode`, and `openai_limit` parameters. Then, execute the following command to start the GPT analysis:
selefra gpt "Help me to query the host path is not null pods."
Finally, you will receive results indicating the pods with non-empty host paths.
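For context, the policy this prompt generates boils down to checking each pod's volume list for a non-empty `hostPath` entry. A self-contained Python sketch of that check (the sample pod dicts below are made up, and the field names follow the Kubernetes pod spec, not Selefra's table schema):

```python
def pods_with_host_paths(pods: list[dict]) -> list[str]:
    """Return names of pods that mount at least one non-empty hostPath volume."""
    flagged = []
    for pod in pods:
        volumes = pod.get("spec", {}).get("volumes", []) or []
        for vol in volumes:
            host_path = vol.get("hostPath", {}) or {}
            if host_path.get("path"):  # non-empty host path
                flagged.append(pod["metadata"]["name"])
                break
    return flagged

# Made-up example pods: one with a hostPath mount, one without.
pods = [
    {"metadata": {"name": "log-agent"},
     "spec": {"volumes": [{"name": "varlog", "hostPath": {"path": "/var/log"}}]}},
    {"metadata": {"name": "web"},
     "spec": {"volumes": [{"name": "cache", "emptyDir": {}}]}},
]
print(pods_with_host_paths(pods))  # → ['log-agent']
```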
## Conclusion
Managing and securing Kubernetes environments is vital for organizations that rely on containerized applications. Selefra GPT offers advanced analytics and policy-as-code capabilities to analyze, identify, and remediate configuration issues in Kubernetes clusters. By leveraging the power of machine learning and policy automation, Selefra GPT enables organizations to enhance their infrastructure security and build robust defenses against potential threats.
## Thanks for reading
We encourage you to try Selefra and experience a faster, more efficient security analysis and resolution process. For more information about Selefra, please visit our official channels:
* Website: [**https://www.selefra.io/**](https://www.selefra.io/)
* GitHub: [**https://github.com/selefra/selefra**](https://github.com/selefra/selefra)
* Twitter: [**https://twitter.com/SelefraCorp**](https://twitter.com/SelefraCorp)
https://redd.it/13y7u6t
@r_devops
Agile in ADO: Who should own the feature?
Assuming the object structure is "Epic > Feature > Story" and features can contain stories owned by different teams, who should the feature be assigned to? I think the majority would say a product owner should own the feature, but if there are multiple teams involved, which PO should it be assigned to? Whichever PO's team(s) have the most stories in the feature? Technically this question applies to any solution; who should own the object that contains the engineers'/developers' stories? How do you make this decision within your org?
https://redd.it/13xknup
@r_devops
Posted by u/geekyadam - No votes and 2 comments
What are your thoughts on QCon?
I just learned about QCon today. I've always attended AWS conferences. Do you think QCon has more technical presentations? Sometimes I feel the sessions at AWS conferences are more marketing. Also, I've never managed to attend the great technical sessions at AWS events because they are always full; the simple ones always have plenty of space. So what do you think about QCon?
https://redd.it/13ypfux
@r_devops
Posted by u/Oxffff0000 - No votes and no comments
Kubernetes, angular frontend serving by nginx, nginx.conf proxypass to spring boot backend api
Hello everyone!
The title of the topic speaks for itself. So the question is: is it possible to implement the following, and is it a good solution or not?
Environment:
1. GCP/GKE
2. Two namespaces: frontend-ns and backend-ns - not my decision and it cannot be changed. If both apps were deployed in the same namespace, I believe the problem could easily be eliminated with an Ingress object. But since you can't reference a backend Service from another namespace in an Ingress object, it's not an option;
3. In the frontend namespace we have a deployment with an Angular app served by Nginx;
4. In the backend namespace we have a Spring Boot app with multiple API endpoints;
5. Here is the relevant part of my nginx.conf:
location /api/test {
    proxy_pass https://mybackendsvc.backend-ns.svc.cluster.local:8080;
}
6. For the Angular frontend app I have an Ingress object with the following host: https://myfrontendsvc.com
7. With the proxy_pass setting mentioned above, will it be possible to request the API like this?
curl https://myfrontendsvc.com/api/test
8. Should I have something specific in the Ingress object of my frontend, besides the frontend settings below, to make the solution mentioned above work?
- host: myfrontendsvc.com
  http:
    paths:
    - backend:
        service:
          name: myfrontendsvc
          port:
            number: 8080
      path: /
      pathType: ImplementationSpecific
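For what it's worth, here is a fuller sketch of how that nginx server block could look, assuming the service names from the question (untested; the cross-namespace FQDN only resolves if no NetworkPolicy blocks traffic from frontend-ns to backend-ns, and the backend must actually serve HTTPS on 8080):

```nginx
server {
    listen 8080;

    # Angular static files, with SPA fallback to index.html
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # Proxy API calls to the backend Service in the other namespace
    location /api/ {
        proxy_pass https://mybackendsvc.backend-ns.svc.cluster.local:8080;
    }
}
```

With something like this, `curl https://myfrontendsvc.com/api/test` should reach the backend through the frontend pod, since cross-namespace Service DNS (`<svc>.<namespace>.svc.cluster.local`) works inside the cluster even though Ingress backends cannot span namespaces.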
https://redd.it/13ytd32
@r_devops
Does GitHub/GitLab/Azure DevOps/etc use their own product to develop/deploy?
Does GitHub/GitLab/Azure DevOps/etc use their own product to develop/deploy?
e.g. GitHub has its own organization on github.com and uses that to develop & deploy
https://redd.it/13yyjtw
@r_devops
Should I buy a MacBook Air M1 8gb + 256 gb?
Hi, I have a fixed budget and within it I'm getting a few options: a MacBook Air M1 (8 GB + 256 GB), a Dell Inspiron i7 (16 GB + 512 GB) and an Asus Vivobook S14 OLED i7 (16 GB + 512 GB).
I'm a college student and I will be using it for casual stuff and programming. While programming I use VS Code, a few browser tabs and Docker. I currently have an 8 GB device, and when I run an open source project locally that uses Docker, my device lags a lot. I plan on using more such projects and doing DevOps in the future. Will the 8 GB RAM and 256 GB SSD MacBook not be enough, since I can't purchase the 16 GB + 512 GB version?
Should I go with the other options because of the lack of RAM?
https://redd.it/13z1q5s
@r_devops
Posted by u/Master-Ooooogway - No votes and 3 comments
Friendly Reminder: Do not trust Oracle Cloud. If it seems too good to be true, it probably is.
I was very amazed by their always-free services; they looked very shiny to me. A1 Flex is 4 OCPUs and 24 GB of RAM, for free, and you let me choose which region to host it in..? Oh my god Oracle, you are too generous! Cheap Google only offers 1 poor CPU, 768 MB of RAM, and forces your VM to be in the US. Screw Google, you are my new best bud forever!
But... there is a catch, and that is: you won't be charged, but your account will be cancelled randomly, without any reason. It sounds weird, but this happened to me. In fact, it has happened to a lot of people:
https://armin.su/oracle-cloud-and-loss-of-data-in-kubernetes-cluster-198d88181829?gi=d475a8d827a1
Too bad that I didn't read about these termination issues beforehand. Oracle is a big name in the industry for me, and even though this was my first interaction with their services, I didn't imagine they could be such a c*nt for no reason. Dumb me hosted 2 test websites on their cloud but didn't bother to keep a local backup for them because... it's OrAcLe dude.
My account had 18 days left in trial. I wake up in the morning, and I find this email:
> Your Oracle Cloud Free Trial has expired
>
> DEAR CUSTOMER,
>
> Your Oracle Cloud Free Trial promotion ended on Saturday, June 3, 2023 12:38 a.m. Coordinated Universal Time (UTC).
>
> The data and cloud account content that you created during the Free Trial period can be retrieved until Sunday, July 02, 2023. For instructions, visit Information Center for Administrators on My Oracle Support and scroll to the bottom of the page to view "Additional Termination Instructions for your Cloud Service".
>
> Your access is limited to Always Free Services only. Your Always Free resources will remain available to you as long as you actively use your account. Your other resources will be reclaimed unless you upgrade to a paid account.
>
> Upgrade to a paid account to have access to all Oracle Cloud Services, customer support and other benefits of paid services. Oracle Cloud offers Pay As You Go billing.
They gave me zero reason why this happened. When I visited their "Information Center for Administrators" and tried to log in, they refused my credentials, which I'm 100% sure are correct. When I logged in to my OCI, all my VMs were gone, and I cannot create anything new, including the "always-free" ones.
I contacted their support, and oh boy, brace yourself for this rudeness:
https://imgur.com/gallery/jLLcU1u
The agent (more precisely, a bot) just pasted an automated response that does not help at all and closed the session.
When I checked other people who had this issue before, I saw that their problems date back to 2021. That's 2 years ago, and this issue is still happening. What does that mean? It means it is not a bug in the system. This is a systematic process done by Oracle for some internal corporate BS we are yet to learn about.
The bottom line is:
Don't repeat my mistake and go to Oracle blindly. They offer so much good stuff for free, and you won't be charged for it, but you also won't get to keep it, because you are going to get cancelled. And when you do, don't expect understanding support to handle your case. When it's gone, it's really gone.
https://redd.it/13z8rlb
@r_devops
I was very amazed by their always-free services and they looked very shiny to me. A1 Flex is 4 OCPUs and 24 GB of RAM, for free, and you let me choose which region to host this..? oh my god Oracle you are too generous! Cheap Google only offers 1 poor CPU, 768 RAM, and forces your VM to be in the US. Screw Google, you are my new best bud forever!
But.. There is a catch, and that is: You won't indeed be charged by that, but your account will be cancelled randomly without any reason. It sounds weird, but this happened to me. In fact, it happened to a lot of people too:
https://armin.su/oracle-cloud-and-loss-of-data-in-kubernetes-cluster-198d88181829?gi=d475a8d827a1
Too sad that I didn't really read about these termination issues. Oracle is a big name in the industry for me, and even though this was my first interaction with their services, I didn't have in mind they could be such a c*nt for no reason. dumb me hosted 2 test websites on their cloud but didn't bother to have a local backup for them because... it's OrAcLe dude.
My account had 18 days left in trial. I wake up in the morning, and I find this email:
>Your Oracle Cloud Free Trial has expired
DEAR CUSTOMER,
>
>
Your Oracle Cloud Free Trial promotion ended on Saturday, June 3, 2023 12:38 a.m. Coordinated Universal Time (UTC).
The data and cloud account content that you created during the Free Trial period can be retrieved until Sunday, July 02, 2023. For instructions, visit Information Center for Administrators on My Oracle Support and scroll to the bottom of the page to view "Additional Termination Instructions for your Cloud Service".
Your access is limited to Always Free Services only. Your Always Free resources will remain available to you as long as you actively use your account. Your other resources will be reclaimed unless you upgrade to a paid account.
Upgrade to a paid account to have access to all Oracle Cloud Services, customer support and other benefits of paid services. Oracle Cloud offers Pay As You Go billing.
They gave me 0 reason why this happened. When I visited their " Information Center for Administrators " and tried to log in, they refused my credentials which I'm sure 100% is correct. When I logged in to my OCI, all my VMs are gone, and I cannot create anything new, including the "always-free" ones.
I contacted their support, and oh boy, brace yourself for this rudeness:
https://imgur.com/gallery/jLLcU1u
The agent (precisely, a bot) just pasted an automated response that didn't help at all and closed the session.
When I checked other people who had this issue before, I saw that their problems dated back to 2021. That's 2 years ago, and the issue is still happening. What does that mean? It means it is not a bug in the system. This is a systematic process done by Oracle for some internal corporate BS we are yet to learn about.
The bottom line is:
Don't repeat my mistake and go to Oracle blindly. They offer so much good stuff for free, and you won't be charged for it, but you also won't get to keep it, because you are going to get cancelled. And when you do, don't expect understanding support to handle your case. When it's gone, it's really gone.
https://redd.it/13z8rlb
@r_devops
API Gateway + Lambdas vs standard containers
Hey guys. I started working at this company recently, and they have this habit of deploying everything into AWS as Lambdas, behind API Gateways, for everything. They say it's because of cost.
I come from a world of developing your own solution, putting it into a container, and deploying it to something like EKS, with Kubernetes managing the pods, etc...
I'm not a DevOps expert, but I would like to understand their approach and whether it's common at other companies as well. I understand that a service that runs only on demand is cheaper as a Lambda than running 24/7, but they do this for literally everything.
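The on-demand vs. always-on trade-off is easy to put rough numbers on. Here is a back-of-envelope sketch; all of the prices are illustrative assumptions, not current AWS rates:

```python
# Back-of-envelope Lambda vs. always-on container comparison.
# All prices below are illustrative assumptions, NOT current AWS rates.

LAMBDA_GB_SECOND = 0.0000166667        # assumed $ per GB-second of compute
LAMBDA_PER_REQUEST = 0.20 / 1_000_000  # assumed $ per request
CONTAINER_HOURLY = 0.04                # assumed $ per hour for a small node

def lambda_monthly_cost(requests: int, avg_ms: int = 200,
                        memory_gb: float = 0.5) -> float:
    """Rough monthly Lambda bill for a given request volume."""
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    return gb_seconds * LAMBDA_GB_SECOND + requests * LAMBDA_PER_REQUEST

def container_monthly_cost(hours: float = 730) -> float:
    """An always-on container costs the same regardless of traffic."""
    return hours * CONTAINER_HOURLY

for reqs in (100_000, 10_000_000, 100_000_000):
    print(f"{reqs:>12,} req/mo: lambda=${lambda_monthly_cost(reqs):9.2f}  "
          f"container=${container_monthly_cost():6.2f}")
```

The point the sketch makes: at low traffic, Lambda is dramatically cheaper, but at high sustained volume the per-request billing eventually crosses over the flat cost of an always-on container, so "Lambda for everything" isn't automatically the cheap option.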
https://redd.it/13zbl07
@r_devops
LF a Tutorial on various DB migrations between Accounts, Cloud Providers, Major Versions etc
Hey,
I seem to be currently lacking the knowledge to perform no-downtime migrations of databases between accounts / providers / major versions.
I'm interested in real-example tutorials I could follow for Postgres and MySQL that look like practice exams for the CKA.
Is anyone familiar with such a thing?
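For Postgres specifically, one common no-downtime approach is built-in logical replication. A minimal sketch that just assembles the SQL for each side (the publication/subscription names and the DSN are placeholders, not anything standard):

```python
# Sketch of the core SQL for a near-zero-downtime Postgres migration
# using built-in logical replication (source must be PG 10+).
# "app_pub", "app_sub", and the DSN below are placeholder names.

def replication_ddl(source_dsn: str, tables: list) -> dict:
    """Return the SQL to run on each side; the actual cutover (briefly
    pausing writes, verifying lag is zero, repointing the app) is still
    something you have to orchestrate yourself."""
    table_list = ", ".join(tables)
    return {
        # Run on the SOURCE: publish the tables being migrated.
        "source": f"CREATE PUBLICATION app_pub FOR TABLE {table_list};",
        # Run on the TARGET: subscribing triggers an initial table copy,
        # then streams changes continuously until you cut over.
        "target": (f"CREATE SUBSCRIPTION app_sub "
                   f"CONNECTION '{source_dsn}' PUBLICATION app_pub;"),
    }

ddl = replication_ddl("host=old-db dbname=app user=repl", ["orders", "users"])
for side, sql in ddl.items():
    print(f"-- {side}:\n{sql}")
```

Note that logical replication doesn't copy schema, sequences, or DDL changes, so the target schema has to be created up front; MySQL has a roughly analogous story with its own replication channels.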
https://redd.it/13zecwc
@r_devops
New job alert 🚨
After being unceremoniously fired from my previous job of SysAdmin, I am happy to announce that I have been offered a new job by the CEO.
It is a CSP for Google Cloud and I will be working as a Cloud Consultant mostly helping clients adopt DevOps practices and ensure their software development goes well.
I have a number of cloud certs (21 in total) and I have also worked as a DevOps Engineer before, so I do understand the whole concept.
My question is: how is the work-life balance of a consultant? What's something I should know before I accept this job offer?
I'll really take your advice and honesty to heart.
https://redd.it/13zewth
@r_devops
is it even possible to dockerize window build environments?
I'm currently working in a shop that's mostly Linux, but we've started to support Windows builds.
Having to get all devs to install their dependencies on each workstation has been painful, so I started looking into Docker. Docker initially sounded like a solution to all of our problems, but I started to encounter issues such as not being able to do builds that use multiple CPUs.
I'm slowly getting the hang of it, and for the most part Linux Docker environments are easy to set up, especially on the CI.
But what about Windows build environments?
Is it even possible to run a Windows Docker build environment on a Mac/Linux laptop?
We've started setting up runners on our CI. I thought I could use our 10 Linux computers to run containerized Windows build environments, but apparently that's not possible and it requires Windows machines..
https://redd.it/13zod9t
@r_devops
Starting my first mid-level role, moving on from junior - how should I prepare and what should I expect?
Recently interviewed for a mid-level Site Reliability Engineer role; it all went well and I'll hopefully be starting in 2 months.
I’m currently a Cloud Engineer, specifically on the security team of the platform - so I’m very well versed with things such as GuardDuty, Config, CloudTrail etc - but lack skills and experience in core services such as EC2, ECS, Lambda etc.
I did mention this to them in the interview and they seemed very chilled out and accommodating about it, reassuring me that they’ll be hiring DevOps engineers of all ranges of skills, and don’t expect us all to know and use the same services, and that they’re comfortable in my ability to learn these other services as I already know how to use the security-based ones.
However - I am still nervous as I’m moving from a junior role into a mid level role, so I know the support I’ll have will be much less and I’ll need to be able to stand on my own two feet more.
I should also mention: my Python is pretty average, and my Terraform is probably below par. This is because where I currently work is huge, with 90% of our environments and infrastructure already built, meaning rather than building new things all we really do is maintain what we have by making small changes and additions. Meanwhile, the place I'll be joining is much smaller and will involve greenfield work (which is why I think this will be much more beneficial for me and my development, although I am still anxious).
What should I expect when moving from junior to mid-level? And how can I prepare/upskill myself?
https://redd.it/13zn9v0
@r_devops
GitHub Actions Boilerplate generator
gabo, short for GitHub Actions Boilerplate, makes it easy to generate GitHub Actions boilerplate with good timeouts, path filters, and concurrency prevention.
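For context, here is a minimal sketch (not gabo's actual output) of the three features it bakes in: a job timeout, path filters, and a concurrency group that cancels superseded runs on the same ref. The workflow name and paths are made-up examples:

```python
# Sketch of a tiny GitHub Actions boilerplate generator (NOT gabo's
# real output) showing: timeout-minutes, on.push.paths filters, and a
# concurrency group with cancel-in-progress.

def workflow(name: str, paths: list) -> str:
    path_lines = "\n".join(f"      - '{p}'" for p in paths)
    return f"""\
name: {name}
on:
  push:
    paths:
{path_lines}
concurrency:
  group: {name}-${{{{ github.ref }}}}
  cancel-in-progress: true
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v4
"""

print(workflow("go-lint", ["**/*.go", "go.mod"]))
```

The concurrency group keyed on `github.ref` means a new push to the same branch cancels the still-running workflow for the previous push, and the timeout stops a hung job from burning runner minutes.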
https://redd.it/13zsf83
@r_devops
How do I become a truly good DevOps/Cloud/etc. engineer?
A while ago I posted on this sub talking about my struggles as a junior at work: I wasn't learning much, documentation sucked, I couldn't get a hold of seniors/mids for help, and my team and managers have had a lot of turnover.
Since then I've been assigned a new manager, and while he has his flaws, in my view he has been a massive help in developing me further. A lot of this is also because he is newer to our tech stack, but obviously his 20 YoE means he picks it up much, much faster than someone my age, so it's like a learning experience for both of us.
Before my last post I was at best fulfilling smaller tickets like creating new projects, namespaces, etc. that our customers requested. Since then, though, I've been involved in diagnostic work on our systems, built multiple clusters including production ones, improved documentation, demoed our systems for customers, integrated other tools with our systems involving teams in other countries, and have also been asked by the mid-level developers for help finding solutions, and did so successfully.
I'm glad I've stuck it out because I'm enjoying the work and learning, but I really want to take the next leap into becoming a good or great engineer. How can I do that? I'm already ahead of schedule on my manager's objectives for promotion to a mid-level role this year.
One thing I'm worried about is that while I've learned a lot about some things, there are other tools or stacks that I barely use even though they're integral to our systems. Basically, I'm worried that my knowledge is very surface level?
Metrics:
2 YoE (B.S. in CS)
Tech stack/tools:
Openshift
Kubernetes
Docker
Jenkins
Ansible
Python?
https://redd.it/13ztxn8
@r_devops