AI-based recommendation engine
Hi guys,
maybe this is a dumb question, but: how difficult is it to write an (AI-based) recommendation engine that recommends a handful of content pieces to a user after the user has made a few entries? (It should adapt based on how the user consumes said content pieces.)
Is it possible to do this with a small team of 1-3 devs in weeks/months, or is this a completely impossible task unless you have millions of dollars and a big company?
(Also, it needs to have an AI element.)
Thank you !
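For scope: a small team can ship a first version of this, because the classic starting point, collaborative filtering, needs no ML infrastructure at all. A minimal sketch in plain Python (toy data shapes and invented ratings, not a production design) that recommends items similar to what a user has already consumed:

```python
from collections import defaultdict
from math import sqrt

def item_similarity(ratings):
    """Cosine similarity between items, from a {user: {item: score}} dict."""
    sums = defaultdict(float)   # item -> sum of squared scores
    dots = defaultdict(float)   # (item_a, item_b) -> dot product
    for user_scores in ratings.values():
        items = list(user_scores.items())
        for i, (a, sa) in enumerate(items):
            sums[a] += sa * sa
            for b, sb in items[i + 1:]:
                dots[tuple(sorted((a, b)))] += sa * sb
    sim = {}
    for (a, b), dot in dots.items():
        sim[(a, b)] = sim[(b, a)] = dot / (sqrt(sums[a]) * sqrt(sums[b]))
    return sim

def recommend(ratings, user, k=3):
    """Score unseen items by their similarity to items the user consumed."""
    sim = item_similarity(ratings)
    seen = ratings[user]
    scores = defaultdict(float)
    all_items = {i for r in ratings.values() for i in r}
    for candidate in all_items - set(seen):
        for item, score in seen.items():
            scores[candidate] += sim.get((item, candidate), 0.0) * score
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

The "adapts to consumption" requirement falls out naturally: each view or click updates the ratings dict and the next scoring pass picks it up. The "AI element" usually arrives later as learned embeddings or a ranking model, which is where the cost curve actually steepens.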
https://redd.it/11mrevu
@r_devops
Posted by u/ph0g_ - No votes and 4 comments
Update: Datadog Outage
https://status.datadoghq.com/
Well everyone, the nightmare is nearing an end, as DD Eng worked tirelessly through the day and night, close to a full day straight, on what has been an anxiety-inducing day for everyone involved. A full post-mortem will be coming later, but the main gist is below...
"At 06:00 UTC on March 8th, 2023 the Datadog platform started experiencing widespread issues across multiple products and regions. The web application was unavailable or intermittently loading, and data ingestion & monitor evaluation were delayed.
We will share a more detailed analysis post-recovery, but at a very high level:
A system update on a number of hosts controlling our compute clusters caused a subset of these hosts to lose network connectivity.
As a result a number of the corresponding clusters entered unhealthy states and caused failures in a number of the internal services, datastores and applications hosted on these clusters."
Data is being backfilled as we speak and we're back to fully operational. All things considered, this was a disaster, but we got through it. I know everyone (sorta rightfully) likes to shit on us for our AEs/CSMs and the price, but I know eng is doing their best because goddamn it was a long night for them trying to get us back to our usual flavor of "just working". And yes, for everyone who asks, we do in fact use our own software and it did in fact help us figure out what was going on.
Signed, a sales engineer who has to give a demo today and pray not too many hard questions get asked.
https://redd.it/11mt2eg
@r_devops
How the hell do you reference an artifact to download from another pipeline in GitHub Actions?
I've got two pipelines, one is called **Build.yml**
    - name: Archive WebAppContent
      run: Compress-Archive -Path '${{ env.RUNNER_TEMP }}\WebAppContent' -DestinationPath './drop/drop.zip'
    - name: Upload artifact
      uses: actions/upload-artifact@v2
      with:
        name: drop
        path: './drop/drop.zip'
another is called **Deploy.yml**
    - name: Download Drop
      uses: actions/download-artifact@v3
      with:
        name: drop
        path: './drop/drop.zip'
    - name: Deploy to staging # deploys to uat-01-staging
      id: deploy-to-staging
      uses: azure/webapps-deploy@v2
      with:
        app-name: 'webapp'
        slot-name: 'staging'
        azure-tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        azure-subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
        azure-client-id: ${{ secrets.AZURE_CLIENT_ID }}
        azure-client-secret: ${{ secrets.AZURE_CLIENT_SECRET }}
        package: ${{ github.workspace }}/drop/drop.zip
How the hell do I get the second pipeline to find the artifact created in the **Build.yml** pipeline and use it for the Azure deployment? I've scoured the internet and can't find any clear answer about why my artifact isn't going to the correct place, or why the deploy pipeline can't find it.
Note that both pipelines are within the same repository.
Thank you for your help.
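For context: artifacts are scoped to the workflow run that uploaded them, which is why `download-artifact` in a separate workflow can't see them by default. One common pattern (a sketch, not a drop-in fix; the workflow name "Build" and the job layout are assumptions) is to trigger the deploy with `workflow_run` and hand the triggering run's ID to `actions/download-artifact@v4`, which accepts `run-id` and `github-token` for cross-run downloads:

```yaml
# Deploy.yml -- runs automatically after the build workflow finishes.
# "Build" must match the `name:` at the top of Build.yml.
on:
  workflow_run:
    workflows: ["Build"]
    types: [completed]

jobs:
  deploy:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - name: Download Drop
        uses: actions/download-artifact@v4
        with:
          name: drop
          # `path` is a destination *directory*; the file lands at ./drop/drop.zip
          path: ./drop
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

Two things worth noting against the snippets in the question: `path` on the download side is a directory, so `./drop/drop.zip` there would actually extract into a folder named `drop.zip`; and the v2/v3 artifact actions have since been deprecated, so pinning both upload and download to v4 is advisable.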
https://redd.it/11mv3o9
@r_devops
Posted by u/Bill_Smoke - No votes and no comments
How do people organize their repos?
Our dev team are wondering what the best practice is for organizing GitHub repos around VS projects. I am responsible for all the DB stuff (i.e. SQL Server, SSIS, SSAS, SSRS etc). Is it best practice to create one repo for all these DB related VS solutions or create a separate repo for each one?
https://redd.it/11mvqxv
@r_devops
Posted by u/alloowishus - No votes and 1 comment
A 0.6 release of UI for Apache Kafka w/ cluster configuration wizard & ODD Platform integration is out!
Hi redditors!
Today I'm delighted to bring you the latest 0.6 release of UI for Apache Kafka, packed with new features and enhancements!
This version offers:
- A configuration wizard that simplifies cluster setup (right in the web UI!). Now you can launch the app via an AWS AMI image and set up a cluster on the go
- Integration with OpenDataDiscovery Platform to gain deeper insight into your metadata changes
- Support for protobuf imports & file references
Other minor, yet significant, enhancements include:
- Embedded Avro serde plugin
- Improved ISR display on Topic overview (now you can view it per partition!)
And a cherry on top? Now we’re able to work around kafka ACL errors so you won’t need to confront pesky permission issues when using the app.
Don’t wait, the update is already available on github & @ AWS Marketplace!
Full changelog: https://github.com/provectus/kafka-ui/releases/tag/v0.6.0
Thanks to everyone who just started and continued to contribute!
In the next release, we'll focus a bit on expanding our RBAC possibilities (support for LDAP and universal OAuth providers) and some Wizard features!
https://redd.it/11mxpbj
@r_devops
GitHub
Release 0.6.0 · provectus/kafka-ui
Within this release, we introduced new features like a cluster configuration wizard and integration with the OpenDataDiscovery Platform.
New Features
Cluster configuration wizard
With DYNAMIC_CONFI...
RMM/UEM
Good morning everyone,
I've done quite a bit of Googling regarding this but haven't gotten very far. Short of taking advantage of all the free trials, which I will soon, it's hard to tell the difference from app to app.
With CMMC compliance on the horizon I need to remotely manage around 10 linux machines and 10 macs spread out across the states. Ideally I will be able to self host the central server but most of the options I have come across are cloud based.
Any suggestions or guidance is deeply appreciated.
Requirements:
Open source (TacticalRMM was all I found, but there were some glaring concerns)
Can manage both Mac and Linux machines
Hosted on site
CIS/NIST configuration templates are a major plus
https://redd.it/11mvso9
@r_devops
Posted by u/tasteydad - 1 vote and no comments
SUSE Elemental Toolkit
Has anybody used Elemental Toolkit? Seems to provide a good tool set for k8s cluster lifecycle management, including OS build and maintenance
https://redd.it/11mtxom
@r_devops
Who uses Signoz in production
Just want to see how your experience has been so far. Things like upgrading. Resource consumption. Disk space. All that other stuff. Ease of operations ( for context, I’m looking for something that doesn’t require a whole lot of operations as I’d rather just pay for cloud at that point )
https://redd.it/11n272q
@r_devops
Posted by u/adelowo - No votes and no comments
Is HashiCorp Certified: Terraform Associate (002) Worth It?
I have an upcoming internship this Summer in a DevOps role. I have never used Terraform first hand, but I do know it will be a tool I'll be using on the job. Is it worth pursuing an associate certification in order to prepare? Does anyone have any experience with this cert? How does it stack up time wise to prepare for?
https://redd.it/11n2k0z
@r_devops
Posted by u/alienboy19 - No votes and 4 comments
Save $ on public S3 buckets using VPC endpoints via SQL
The cost savings of routing the traffic of public S3 buckets through VPC endpoints instead of NAT gateways in AWS can be quite large. NAT gateways are the default. We wrote a guide on how to do this with r/iasql using a couple of queries: https://iasql.com/blog/save-s3-vpc/
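For context, the underlying change is small even outside IaSQL; with the plain AWS CLI it's roughly one call (all IDs below are placeholders). Gateway endpoints for S3 attach to route tables rather than subnets, so S3 traffic from those routes bypasses the NAT gateway and its per-GB processing fee:

```shell
# Hypothetical VPC and route-table IDs; adjust the region in the service name.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0
```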
https://redd.it/11n5d6u
@r_devops
What's your development process for GitHub Actions, and how are you testing them?
So, I have been getting in deep with GitHub Actions: Terraform with a remote backend, automated testing, linting, automated building, etc.
And I am finding the development process to be slow: 3-4 minutes per iteration, and I am iterating a lot because I am learning and small changes are more likely to succeed. Waiting for the push, waiting for it to get picked up, waiting for the entire workflow to run is slow when I am making incremental changes. Plus it's eating into my GitHub Actions budget.
I know once my pipelines are all set I shouldn't touch them much, but I'd love a more responsive, local environment for testing these workflows.
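Two tools that may shorten that loop (the job name "build" below is a placeholder): rhysd/actionlint statically checks workflow files without running anything, and nektos/act executes jobs locally inside Docker:

```shell
# Catch most workflow syntax/typing mistakes before pushing:
actionlint .github/workflows/*.yml

# Run jobs locally in Docker:
act -l                 # list the jobs act detects in the repo
act push -j build      # simulate a push event, running only the "build" job
```

Neither is a perfect replica of the hosted runners (act's images differ, and some contexts/secrets behave differently), but they tend to catch the cheap mistakes before they cost a 3-4 minute round trip.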
https://redd.it/11n4cn1
@r_devops
Posted by u/thegainsfairy - 1 vote and 2 comments
when companies provide you with a laptop as an employee and it comes with pre installed software, how does that software get installed?
I might be in the wrong subreddit.. but I'm curious:
My company recently got acquired by a much bigger company, and during that process, the parent company provided all new employees with a laptop, so they shipped 100+ laptops to employees, and you go through a setup process with the IT team, to assign the laptop to oneself.
Usually, there is some software already installed on the laptop after setup. I'm curious how the parent company creates these identical laptop setups for 100+ people...
Is it manual? Do they use a snapshot of an existing setup and then apply that to all laptops? Is there a company that provides this as a service?
Any info would be great, or directions to the right subreddit.
Thank you
https://redd.it/11n7zdb
@r_devops
Posted by u/SimonFOOTBALL - No votes and 4 comments
Deploying CLIs to developer machines
We have some internal tools for interfacing with our Kubernetes clusters and other internal systems. They're all CLIs, some Bash scripts and Rust binaries, and we're looking to have them regularly built and deployed onto developers' machines (Linux and OSX).
Is there an existing solution for this ?
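One low-tech option is a versioned pull-based install script that developers (or a config-management agent) run periodically; everything below, including the URL scheme and the tool name "mycli", is a hypothetical sketch, not a real endpoint:

```shell
#!/usr/bin/env sh
# Hypothetical installer: fetch the latest internal release for this platform
# and drop it into PATH.
set -eu
TOOL=mycli
OS=$(uname -s | tr '[:upper:]' '[:lower:]')     # "linux" or "darwin"
ARCH=$(uname -m)                                # e.g. "x86_64" or "arm64"
VERSION=$(curl -fsSL "https://tools.example.com/${TOOL}/latest")
curl -fsSL "https://tools.example.com/${TOOL}/${VERSION}/${TOOL}-${OS}-${ARCH}" \
  -o "/usr/local/bin/${TOOL}"
chmod +x "/usr/local/bin/${TOOL}"
```

A private Homebrew tap can cover the macOS side with the same release artifacts, and for true push-based rollout a config-management or MDM agent (Ansible, Chef, Jamf) can run the equivalent on a schedule.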
https://redd.it/11n39ie
@r_devops
Posted by u/sionescu - No votes and 19 comments
Proxy Basic Auth Replacement Best Practice for Cloud Native / OIDC / Vault
What would be the up-to-date, cloud native, best practice for replacement of e.g. haProxy with ACLs and Basic Auth, with something like Envoy (it has RBAC) + JWT + Hashi Vault and/or OIDC provider like Okta/AD?
I want to secure web endpoints, which don't support auth natively. Current solution is haProxy with network ACLs and Basic Auth, but I want actual identity check (not network-based), ideally tied to an identity provider (in my case AD) with either rotating token or at least password stored in Vault (and I do realize that I might be mixing stuff here - AD and pwd/token being mutually exclusive, so either is fine, but I want to be able to auth with another software as well, not just human - not sure how to go about that with AD).
I've seen a solution with Envoy+something (I don't remember, maybe traefik?)+OpenPolicyAgent+Okta in K8s env. It was ugly :-D. I want something independent of k8s, so I can place it in front of a historical service running on a VM, and secure it while it's being migrated and ideally doesn't require 3 containers to implement :-D.
Thanks for any suggestions and pointers!
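One widely used building block for this shape of problem is oauth2-proxy: a single binary that sits in front of the legacy service, performs the OIDC flow against your identity provider, and only forwards authenticated traffic. No Kubernetes required. A rough sketch, with placeholder issuer, client, and port values:

```shell
# oauth2-proxy as a standalone reverse proxy in front of a legacy VM service.
# Issuer URL, client ID, secrets, and addresses are placeholders.
oauth2-proxy \
  --provider=oidc \
  --oidc-issuer-url="https://login.example.com/" \
  --client-id="legacy-app" \
  --client-secret="$OIDC_CLIENT_SECRET" \
  --cookie-secret="$COOKIE_SECRET" \
  --email-domain='*' \
  --http-address="0.0.0.0:4180" \
  --upstream="http://127.0.0.1:8080"
```

For non-human callers, this browser-redirect flow doesn't fit; the usual pattern is for the calling service to present a bearer token obtained from the same IdP (client-credentials grant), validated either by the proxy or by something like Envoy's jwt_authn filter.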
https://redd.it/11nas9j
@r_devops
Posted by u/divide777 - No votes and 1 comment
How to change all links across a 200 page site, automatically?
An affiliate program needs me to change all links to their new landing page URL.
It is thousands of links across 200 pages. What is the best way?
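If the pages are static files in a repo, a small script (or a one-line `sed`/`grep -rl` pipeline) is usually enough; if it's a CMS, do it in the database instead. A minimal sketch, assuming the old URL appears verbatim in the files:

```python
import pathlib

def rewrite_links(root, old_url, new_url, exts=(".html", ".md")):
    """Replace every occurrence of old_url with new_url under root.

    Plain string replacement over matching files; returns how many
    files were changed. Commit first so the diff is reviewable.
    """
    changed = 0
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        text = path.read_text(encoding="utf-8")
        if old_url in text:
            path.write_text(text.replace(old_url, new_url), encoding="utf-8")
            changed += 1
    return changed
```

Thousands of links across 200 pages is seconds of runtime; the real work is checking the diff for links that should not have changed (e.g. the old URL embedded in tracking parameters).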
https://redd.it/11n3127
@r_devops
Posted by u/SunnyRepository83 - No votes and 27 comments
Any way to automate CVS version control?
Company refuses to switch to git.. any way to automate CVS or create some type of pipeline with it? Right now I have to run cvsq on all dev files and then sign off on it.
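The standard CVS client scripts reasonably well, so a CI job can at least wrap the update/check/commit cycle; a minimal sketch (the checkout path, `make test` gate, and commit message are placeholders):

```shell
#!/bin/sh
# Scripted CVS step for a cron/CI job. `cvs -q update -d` and `cvs commit -m`
# are standard CVS client invocations; adapt paths and module names.
set -e
cd /path/to/checkout
cvs -q update -d                 # sync the working copy, creating new directories
make test                        # whatever checks you currently sign off on
cvs commit -m "automated pipeline commit"
```

If the blocker is tooling rather than policy, `git cvsimport` can also maintain a read-only Git mirror for CI and code review while CVS stays the canonical repository.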
https://redd.it/11n2nhy
@r_devops
Posted by u/Real_Voice_7166 - 1 vote and 22 comments
DevOps with background in computer science
Hi. I'm currently facing a bit of a dilemma. I recently started a position as a DevOps trainee (I have a background in computer science and I actually enjoy programming). However, my company tends to be very "Ops" oriented: there's almost no work or projects where we collaborate directly with developers; it's mostly "services" where they provide or maintain infrastructure. There's little to no code involved (except for Terraform used for IaC, which is mostly scripting), and I find working on that really boring. The most exciting task I came up with was developing a Lambda (which I suggested, since I was one of the few who knew how to code), where I could implement it, create unit and integration tests, and deploy it in a pipeline. That's very similar to the SWE types of tasks I learned in college, so I'm more familiar with them and find them more exciting to do.
And I'm really debating now whether I'm completely in the wrong job position (should I become a developer?), or whether the company's perception of DevOps is just off (but I'm very much a newbie and don't know how I can help improve its culture).
https://redd.it/11mwirj
@r_devops
Posted by u/unknown529284 - No votes and 8 comments
Top 10 DevOps Tips for Cloud and backend Applications (Presented in the Arabic Language)
Top 10 DevOps Tips for Cloud and backend Applications (Presented in the Arabic Language)
أهم 10 نصائح DevOps للتطبيقات السحابية ("The top 10 DevOps tips for cloud applications")
https://www.youtube.com/watch?v=c_ay2xZDRUw
https://redd.it/11mzx6e
@r_devops
YouTube: Top 10 DevOps Tips for Cloud and backend Applications, presented by شريف المتولي (Sherif El-Metwally) at MENA Digital Days 2023 (#MENADigitalDays2023); all sessions are recorded. Website: https://bit.ly/mena-dd
How do you Bootstrap an Organization in Google Cloud Platform?
I found this process very intense from a team interaction point of view, especially when the conversation goes down a rabbit hole trying to solve the chicken and egg problem.
I try to optimise based on principles while still knowing that we are in a state when we cannot adhere to them 100%. I proceed in a three phases approach:
* Inception Phase (Ring 0)
* Pre-operational Phase (Ring 1)
* Operational Phase (Ring 2)
You can imagine these 3 phases like the protection rings in an operating system, where you gradually tighten adherence to principles and policies. I explain this in more detail in this video: [https://youtu.be/RDF4Yf5JhPI](https://youtu.be/RDF4Yf5JhPI)
Would appreciate any feedback.
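As a concrete anchor for the inception phase (Ring 0), one common first move is a seed project plus a bootstrap service account, created with a human identity before any automation exists. The org ID, project ID, and role choice below are placeholders, not a recommendation for your setup:

```shell
# Ring 0: seed project and a dedicated bootstrap service account.
gcloud projects create seed-tf-admin --organization=123456789012
gcloud config set project seed-tf-admin
gcloud iam service-accounts create terraform-bootstrap \
  --display-name="Terraform bootstrap"
# Grant only what the next phase needs, tightening later.
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="serviceAccount:terraform-bootstrap@seed-tf-admin.iam.gserviceaccount.com" \
  --role="roles/resourcemanager.folderAdmin"
```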
https://redd.it/11njbmz
@r_devops
YouTube
How I Bootstrap a GCP Org
How I Bootstrap an Organization in Google Cloud Platform
What is org bootstrapping?
When you create an account in GCP there is nothing, no folders, no projects, no resources.
You only have your user able to create these resources.
✨ Org bootstrapping is…
FeatureProbe: Streamline your DevOps workflow and achieve faster and safe feature releases with seamless feature flag open-source integration.
https://github.com/FeatureProbe/FeatureProbe
https://redd.it/11nk9na
@r_devops
GitHub
GitHub - FeatureProbe/FeatureProbe: FeatureProbe is an open source feature management service, providing feature toggles, gradual (canary) releases, and full A/B testing (translated from the Chinese description).
How do you handle CSP Headers for a multi tenant application?
Right now it's just one CSP for all of our tenants, and we keep adding domains whenever we see a block. As you can imagine, our CSP is huge.
Do you think using a * would not be a security issue? (My heart says it is.. lol)
The dev team doesn't think it's a priority to build per-tenant CSPs into the application.
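If the app ever does move to per-tenant policies, the header itself is cheap to assemble in middleware or at the edge from a small allowlist table; a hypothetical sketch (tenant names and domains are invented):

```python
# Per-tenant allowlist instead of one giant shared policy.
TENANT_SOURCES = {
    "acme": ["https://cdn.acme.example", "https://analytics.acme.example"],
    "globex": ["https://assets.globex.example"],
}

def csp_for(tenant):
    """Return a Content-Security-Policy header value for one tenant."""
    extra = " ".join(TENANT_SOURCES.get(tenant, []))
    base = "default-src 'self'"
    return f"{base} {extra}".rstrip() + "; frame-ancestors 'none'"
```

On the * question: a wildcard in `default-src` or `script-src` allows loading from any origin, which largely defeats the injection protection CSP exists to provide, so your instinct matches common guidance.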
https://redd.it/11nlice
@r_devops
Posted by u/linux_n00by - No votes and no comments