* initializes a git repository, sets the initial branch to main, sets the remote to a new GitLab repository, commits, and pushes to GitLab
* gets the runner token for the new repository from GitLab
* copies ezinnit.config to the server
* runs the server initialization script on the remote server, which does the following:
* creates new SSH keys on the server
* uploads the server's SSH keys to the GitLab repository
* downloads and installs [dokku](https://dokku.com/) on the server (this takes a few minutes)
* creates the dokku app on the server
* sets the domain for the dokku app
* sets the app's port mapping to 80:5000
* downloads and creates a GitLab runner on the server
* registers the GitLab runner
* downloads and installs [dokku-letsencrypt](https://github.com/dokku/dokku-letsencrypt) on the server
* enables encryption for the app with a TLS certificate from [letsencrypt](https://letsencrypt.org/)
* adds a cron job on the server to automatically renew TLS certificates
* for Django, Flask, and FastAPI, creates and runs a script, ezrun, to find an open port and run the app locally in a development environment
* when ezinnit completes, GitLab will automatically begin deploying your app to your server. ezinnit will give you a link to your new repository where you can check on the deployment status.
To find an open port and run Django, Flask, or FastAPI ezinnit template apps locally in a development environment:
`bash ezrun`
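The core of what a helper like ezrun does, picking a free local port before launching the dev server, can be sketched in Python (illustrative only; not necessarily ezinnit's actual implementation):

```python
import socket

def find_open_port() -> int:
    # Bind to port 0 and let the OS hand back a free ephemeral port.
    # The socket closes on exiting the `with` block, so the port is
    # immediately reusable by the dev server started next.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = find_open_port()
print(port)
```

A wrapper script would then hand the port to the framework, e.g. `python manage.py runserver $port` for Django.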
Deploy Now and Forever
Use ezinnit whenever you start a new webapp project. At the push of a button, your project will begin with a gitlab repository that automatically deploys main commits to a container on the server of your choice, where your app is running and available at the domain of your choice.
You can now develop for the true environment your app is intended for with instant feedback about how changes will impact real world usability. You know instantly if your app will build in a container and how it will behave on a live server.
The secure production environment is the default, and development mode is the exception - making development safe.
When you start a project with ezinnit, you're really doing CI/CD. From day one, you hit the ground running with a live app on your own server and your own domain, so you can focus on what only you can do.
To start a Django project from scratch:
`mkdir ezinnit && wget https://raw.githubusercontent.com/johnsyncs/ezinnit/main/ezinnit%20template%20scripts/django.innit -P ezinnit && bash ezinnit/django.innit`
https://redd.it/z6y9rn
@r_devops
Triggering email and db write/reads.
Preface: marketing makes research difficult, more so when using the terms 'email' and 'service'.
I am developing a web app that will integrate with email and SMS. The web app is built using SvelteKit and hosted on Vercel. I'm using MongoDB as my db. Mongo has a watch feature that triggers when a change is made to whatever you've configured it to watch. My thinking thus far is to build an Express app that will handle this watch behavior and the email/SMS handling.
When I start my googling-around-to-see-what-I-can-copy-paste, I come across a lot of services that provide 'triggering' services.
Hosting/setting up servers is not something I have experience with, though I am confident with node.js.
Should I go the triggering-service route or should I build/host my own service? Or is there another path that I am unaware of?
https://redd.it/z70r5x
@r_devops
Job title not aligned with Job Description
TL;DR: I do the same tasks as the DevOps Engineers on my team, the team is made up of DevOps Engineers (more inclined towards ops), but my title is not DevOps Engineer (it's Cloud Infra Dev).
Is it something to be concerned about?
https://redd.it/z72g8g
@r_devops
Overwhelmed by AWS
I have a basic understanding of lots of the core services and what they do, like IAM, security groups, EC2, and ELB. But combining it all together is hard for me to wrap my head around. My company requires that all resources created in AWS are done through a CloudFormation template deployed via our CI/CD pipeline. I'm overwhelmed by the amount of knowledge required to create a simple EC2 instance that has a public IP. Looking at some internal example templates, we have EC2 instances that have network interfaces attached, and those interfaces have SGs attached to them (I probably have it wrong; I'm AFK). Combining everything together in a CFT is overwhelming. Any recommendations on resources I can use to put it all together? Whenever I look at documentation it seems focused on one thing, like "making an EC2 instance"; I never see "making an EC2 instance with an interface, connected to an ELB, with appropriate security groups".
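For what it's worth, the combination described can stay small. A minimal illustrative CloudFormation fragment (not the company's template; the AMI ID and parameter names are placeholders) wiring an instance, a network interface with a public IP, and a security group together:

```yaml
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  SubnetId:
    Type: AWS::EC2::Subnet::Id
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
  WebInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-00000000000000000   # placeholder AMI
      InstanceType: t3.micro
      # SGs attach to the interface, and the interface carries the public IP.
      NetworkInterfaces:
        - DeviceIndex: 0
          AssociatePublicIpAddress: true
          SubnetId: !Ref SubnetId
          GroupSet:
            - !Ref WebSecurityGroup
```

Attaching an ELB adds a target group and listener on top, but the SG-to-interface wiring follows the same pattern.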
https://redd.it/z72x9s
@r_devops
AWS Cloudfront -> Cognito to Google suite
Hi all,
I've been trying to get my head around what I presumed to be a very simple setup, but the whole thing is turning into a nightmare, and I just want to touch base and either confirm I'm on the right path or find out I've gone off trail.
Currently I have IAM Identity Center set up so that people who want to access the console have to auth through our Google Workspace. That all works as expected and is fine for any technical user of the platform.
However, my needs are growing beyond having just technical users perform operations. My idea was simple: I have a bunch of Lambda applications, and I wanted to provide a simple HTML website hosted on S3 where users can enter some details, hit submit, and have the Lambdas run, without having to teach them any CLI or how to hit API endpoints.
To get this working, though, I'm overwhelmed by all the different pieces I need to have in place. What I currently have:
- Suite of Lambdas
- S3 private bucket for the front-end pages
- ACM certificate provisioned
- Route53 domain set up
- CloudFront set up pointing to the bucket
Now what I'd like is that when a user hits my Route53 domain, they're asked to auth (similar to when they hit the AWS console itself and auth through Google).
However, when I google what I'm trying to do, I see a lot of comments about setting up Cognito and Lambda@Edge, and to be blunt I'm not understanding their purpose or how they solve the problem, since I didn't need any of that for the earlier SSO integration (IAM Identity Center). I find myself getting lost in the AWS docs, never getting the answers I want, or finding tutorials that only cover public CloudFront distributions.
Does anyone have any good guides or advice on what path I should be following?
Like I say, in my mind the use case is simple (user --> hits website --> auths --> fills out form --> triggers Lambda), but I'm finding it very hard to implement.
https://redd.it/z72aky
@r_devops
Windows Container use as Market Share
Hello,
Does anyone know of a study or dataset that shows adoption of Windows containers across industries compared to adoption of Linux containers, or of no containers (on Windows)?
I would love to see some actual data with separate buckets for Windows and Linux.
I'm not talking about the host OS being Windows and running Docker with Linux containers. I would really like to see some research on how many people are actually running production workloads in Windows containers compared to production workloads in Linux containers.
Anyone?
https://redd.it/z71zbq
@r_devops
Azure DevOps generate NSIS Setup using Pipelines
Hi there
I would like to generate an NSIS executable every time changes are pushed to main. I am now able to pull the NSIS setup and install it in a job, but where can I "export" the built executable to? To my understanding, Artifacts only support packages like nupkg. Maybe push the exe to a Git repo?
https://redd.it/z7087p
@r_devops
Some tool like drone.io for CD
I'm really embarrassed to say that I love docker-compose over K8s for its simplicity & effectiveness.
But the tooling is really lacking. drone.io is like a docker-compose.yml: simple, effective & beautiful.
I'm wondering, is there any drone.io-like tool for CD?
https://redd.it/z79o6k
@r_devops
Best implementation to spin up k8s clusters on demand?
I need to spin up multiples of the same cluster at different clients.
FastAPI, PostgreSQL, Elasticsearch.
Been thinking Jenkins, Helm, k8s.
Storage on OpenEBS?
https://redd.it/z703bh
@r_devops
GitFlow Branching Strategy and Alignment to Best Practices
Good evening everyone. First, let me start off by stating that we are a publicly traded company that falls under SOX controls & audit requirements.
For code branching strategies, we generally have followed the GitFlow strategy since our environments match up to the GitFlow branches (feature, develop, release, & main).
Our branches and how it maps to our environments
================================================================================
feature branch - developer's local instance for unit testing
develop branch is deployed to our DEV env.
the release branch is deployed to our QA env
main = PROD env.
================================================================================
Here is our typical workflow:
Developers create a feature branch off the "develop" branch and make their code changes. They will then perform unit testing of their changes.
The developer then requests a PR to the "develop" branch, which is then reviewed and approved by a lead dev. The code is now in the "develop" branch after approval. When all the features for all developers are in the "develop" branch, there may be end-to-end integration testing the team might perform if there are a lot of features that need to be tested together.
When the dev team is ready for formal QA testing by the QA individual/team, a release branch is cut from the develop branch, and the build is deployed to the QA environment. QA validates the features in this environment, and an automated regression suite is run against the entire build. If QA finds a bug, the feature is sent back to develop to repeat the cycle from the first bullet onwards. When we are audited, this is the environment that is noted in each backlog ticket for each feature.
When the release has passed all testing, the deployment in QA is released into the next environment - PROD.
We have a consulting group who prefers to change it up so that:
Developers' unit testing and formal QA by the QA team/individual are performed off the feature branch before the developer opens a PR to get the code changes merged into the "develop" branch. They said this avoids having to do a ton of PR merge requests for each break-fix cycle of a feature.
In this workflow, all the code, by the time it makes it to the QA environment, has been fully tested already. There is nothing more to test in QA besides maybe running the automated regression suite against that new set of changes.
I wanted to support a more efficient workflow for getting code into production, but also need to address SOX change control and stay within best practices at the same time. I am curious to hear if others are following the above process by our internal team or do you agree with the consulting group on having formal QA performed before the feature branch is merged into the "develop" branch.
Thank you ahead of time.
https://redd.it/z7eirb
@r_devops
How do you update a MongoDB image to the latest version without losing the volume data?
How do you update a MongoDB image to the latest version without losing the volume data? Is there a tutorial for doing this? I wanted to update my MongoDB version locally, but then I realized I would wipe out the data in my local machine. Need to go from v4 to v6.
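As background to the question: the data lives in the named volume and survives image swaps, but MongoDB only supports upgrading one major release at a time, setting featureCompatibilityVersion between hops. A small sketch of the hop sequence, assuming the "v4" install is 4.0 (the release list is hard-coded; verify against MongoDB's own upgrade notes):

```python
# Supported MongoDB major releases, oldest to newest.
RELEASES = ["4.0", "4.2", "4.4", "5.0", "6.0"]

def upgrade_path(current: str, target: str) -> list[str]:
    # Each hop means: stop the container, start the next image against
    # the SAME named volume, then run
    #   db.adminCommand({setFeatureCompatibilityVersion: "<version>"})
    # before moving on to the next image.
    i, j = RELEASES.index(current), RELEASES.index(target)
    return RELEASES[i + 1 : j + 1]

print(upgrade_path("4.0", "6.0"))   # -> ['4.2', '4.4', '5.0', '6.0']
```

Taking a `mongodump` backup before the first hop is still the safe move in case any intermediate upgrade fails.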
https://redd.it/z7eu4z
@r_devops
What OS is your Desktop/Laptop?
What OS do you use for your main work system? Windows? Linux? Mac?
https://redd.it/z7i3io
@r_devops
Can you create a Postgres Deployment with multiple replicas consuming to the same PV?
I am trying to set up HA PostgreSQL, but I have very minimal knowledge about this.
The PV of the cluster is being managed using Longhorn (or some other service; another team is working on this). Since the storage is already being made highly available, can I simply create two Postgres services that use the same data directory in that storage?
This might create deadlocks when two or more Postgres services are accessing the PV and one of them is trying to write to it, right? What if I develop a retry mechanism at the application level to handle these deadlocks?
Does this approach make sense and is actually implementable?
Thanks.
https://redd.it/z7jsa2
@r_devops
What is a good alternative to Heroku for free-tier usage?
I have had an app running on Heroku's free tier since 2018. Now that Heroku is turning off the free tier, I need a new place to host it, one that provides some sort of usable pseudo-CNAME like Heroku does. Can you guys suggest some alternatives?
https://redd.it/z7n83n
@r_devops
Does "managed Nomad" exist?
Hi all, I've been working in a Nomad/Consul stack, and I wonder why there are a lot of 'managed Kubernetes' providers but I can't seem to find any 'managed Nomad' providers. As far as I know, they both support the same use cases. How come this doesn't exist? Has anyone tried it? Am I missing something?
Nomad has proven to be rock-solid and really easy to use, so having this in a 'managed' form, where you don't have to think about managing the infrastructure, might be valuable?
https://redd.it/z7q7l0
@r_devops
Observability and logs body requests
Hi,
Do you save request bodies too, or only the endpoints?
We are thinking of saving request bodies as well, but I don't think it's necessary for all the data, and we also have some sensitive data.
I don't understand why we would save login/register request bodies rather than only what we need.
What info do you log and save? Thanks.
https://redd.it/z7s56l
@r_devops
Docker Compose Deploy project to server
I am learning to use Docker and docker compose; the part I am not understanding is how to integrate this into a CI/CD pipeline. I am looking to use GitHub Actions and deploy to a DigitalOcean droplet.
In the usual (non-Docker) way, I would commit and push to GitHub and trigger a GitHub Action responsible for building and generating my build artefacts, which I would then deploy onto my server.
What I don't understand is: do I need to create artefacts to deploy onto my server with docker compose? If not, what is the process for deploying the latest changes?
In experimenting, I SSHed onto my server, cloned my repo, and got the project working by running docker compose manually on the box, but clearly this doesn't feel right.
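One common shape for this: the image itself is the artefact. CI builds and pushes it to a registry, then SSHes to the droplet (which only needs the compose file, not the source) and has compose pull the new tag. A sketch with assumed secret names, registry, and paths; `appleboy/ssh-action` is a third-party action:

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the image and push it to a registry (GHCR here).
      - run: |
          echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker build -t ghcr.io/${{ github.repository }}:latest .
          docker push ghcr.io/${{ github.repository }}:latest
      # SSH to the droplet and refresh the running stack.
      - uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.DROPLET_HOST }}
          username: ${{ secrets.DROPLET_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd /srv/app
            docker compose pull
            docker compose up -d
```

The compose file on the droplet then references `ghcr.io/<owner>/<repo>:latest` as its `image:` instead of a `build:` section.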
https://redd.it/z7rnyz
@r_devops
Malwarebytes recently announced a collaboration with Stellar Cyber, thoughts?
"Malwarebytes recently announced a collaboration with Stellar Cyber, with the goal of helping resource-constrained security teams produce better security outcomes, whether on-premises or in the cloud."
What does everyone think? Malwarebytes has a pretty solid EDR system, and with the new addition of Stellar Cyber's XDR platform, do you think they'll be making a greater impact in the security space? Also if anyone has experience using either vendor that also helps!
Source here
https://redd.it/z7wuok
@r_devops
We're trying to revolutionize how you E2E test microservices with open source...
If you're an automation engineer, QA, or anything in between, I'd love some feedback on this open-source tool we're building called Tracetest: https://github.com/kubeshop/tracetest
You can see how it works here: https://tracetest.io/blog/v08-release-notes
https://redd.it/z803zq
@r_devops
I am standing at the crossroads.
Hey guys,
I am currently in a position to choose whether I want to learn Azure cloud tech and DevOps or stay in my current role as a Software Engineer. I have about 15 years of experience in coding.
I have also done some work on build pipelines and software deployment with various tools, so I know how to automate things.
My company would pay for Azure certification, and the end goal would be the Azure DevOps Expert certificate. I don't know if I am up to it, though.
Especially the whole networking and sysadmin part, because I do not have much experience in this field. I am pretty intimidated by this part alone.
Maybe someone has some tips or advice to take away my fear of the new? I don't know how to describe it ;)
https://redd.it/z7wowu
@r_devops
Feature Flags: How to implement "sticky" variants?
I'm trying to wrap my head around how to best solve a problem my team is running into with regards to feature flags. We are using [LaunchDarkly](https://launchdarkly.com/), but I don't think the problem we're dealing with is platform-specific.
My team is responsible for conducting a variety of A/B/N experiments for our company's SaaS product. Very often, this involves the creation of feature flag variants that should only ever be enabled for new, prospective customers... In other words, anonymous users who do not yet have an account with us. For example, consider the following scenario:
We offer five different plans that potential customers can choose from. For 10% of prospective customers, we want to instead offer them a single, streamlined plan (with the hope of increasing conversions).
Accomplishing this goal is simple enough. Within our feature flag platform, we create a flag with a rule that says:
* Only enable for anonymous users.
* Only enable for 10% of visitors. Show the other 90% the five plans that we normally show everyone.
There's a problem, though. Once a prospective user has been assigned the experimental variant, we need that variant to stick with them after they have registered. That does not currently happen, because according to our rule, the experimental variant should only be made available to anonymous users, and our user is no longer anonymous: they have an account and are signed in.
After having wrestled with this for a few days, I'm left thinking that there is no simple solution to this problem that does not involve us having to maintain additional state on our end. Has anyone else dealt with this before?
See also:
* [How to Implement "Sticky" Treatments for an Experiment](https://help.split.io/hc/en-us/articles/360051389331-How-to-Implement-Sticky-Treatments-for-an-Experiment)
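One workaround worth sketching (a minimal illustration, not LaunchDarkly's API; the function name, flag key, and rollout percentage are all hypothetical) is to make bucketing deterministic on a key the application controls. Stickiness then reduces to reusing the pre-signup anonymous key, or persisting the assignment on the account at registration time:

```python
import hashlib

def assign_variant(user_key: str, flag: str = "streamlined-plan",
                   rollout_pct: int = 10) -> str:
    """Deterministically bucket a key into a variant.

    Hashing flag + key means the same key always lands in the same
    bucket, so the variant is stable across sessions and rules.
    """
    digest = hashlib.sha256(f"{flag}:{user_key}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # uniform 0-99 bucket
    return "streamlined" if bucket < rollout_pct else "control"

# At signup, either keep evaluating the flag with the original
# anonymous key, or copy the assigned variant onto the new account
# record - either way, the treatment sticks after registration.
anon_key = "anon-3f2a"  # hypothetical pre-signup visitor id
assert assign_variant(anon_key) == assign_variant(anon_key)
```

This does mean maintaining a small amount of state (the anonymous key or the stored assignment), which matches the poster's conclusion; the deterministic hash just keeps that state minimal.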
https://redd.it/z7zgee
@r_devops