How is success measured in your DevOps team? Is anyone using these 4 key metrics?
Through six years of research, the DevOps Research and Assessment (DORA) team has identified four key metrics that indicate the performance of a software development team:
Deployment Frequency—How often an organization successfully releases to production
Lead Time for Changes—The amount of time it takes a commit to get into production
Change Failure Rate—The percentage of deployments causing a failure in production
Time to Restore Service—How long it takes an organization to recover from a failure in production
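To give these definitions a concrete feel, here is a minimal sketch that computes all four metrics from a hypothetical deployment log. The record format is invented for illustration; a real pipeline would pull this data from CI/CD and incident tooling:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log; field names are illustrative, not from the report.
deploys = [
    {"commit": datetime(2021, 9, 1, 9),  "deploy": datetime(2021, 9, 1, 15),
     "failed": False, "restored": None},
    {"commit": datetime(2021, 9, 2, 10), "deploy": datetime(2021, 9, 3, 11),
     "failed": True,  "restored": datetime(2021, 9, 3, 12)},
    {"commit": datetime(2021, 9, 6, 8),  "deploy": datetime(2021, 9, 6, 9),
     "failed": False, "restored": None},
]

days_observed = 7

# Deployment Frequency: successful releases per day over the observation window
deployment_frequency = len(deploys) / days_observed

# Lead Time for Changes: commit-to-production time, averaged
lead_times = [d["deploy"] - d["commit"] for d in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change Failure Rate: share of deployments that caused a production failure
failures = [d for d in deploys if d["failed"]]
change_failure_rate = len(failures) / len(deploys)

# Time to Restore Service: failure-to-recovery time, averaged over failures
restore_times = [d["restored"] - d["deploy"] for d in failures]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)
```

With this toy log the team deploys about 0.43 times per day, a third of changes fail, and recovery takes an hour; the point is only that all four metrics fall out of timestamps your pipeline already has.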
https://cloud.google.com/blog/products/devops-sre/using-the-four-keys-to-measure-your-devops-performance
https://redd.it/poczlz
@r_devops
Management of Change
I did not know if our organization had an effective or ineffective change management process.
I did know that we needed to remain relevant and resilient, with all-hazards security risk management, in fluid risk and organizational change scenarios.
We had to organize and use that information to compare the effects of budget and policy alternatives and make better choices.
So, we had to work with change management in security projects and convince others in our organization that data quality is important.
And make hard choices, like assessing whether the leadership shown during the process has been sufficient.
To visualize the Management of Change work and manage it, I made a Management of Change Kanban board that is broken down into 1282 Work Items that are prioritized into their Workflows.
It worked for me, and it's a good place to get started on your current or impending Management of Change journey.
If you want to check it out and give me feedback go here:
https://theartofservice.com/Management-of-Change-Kanban
https://redd.it/podhr2
@r_devops
how do we get arguments here
https://cloud.google.com/build/docs/building/build-containers
How do we get arguments here? Like here:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PROJECT_ID/IMAGE_NAME', '.']
# Install dependencies
- name: python
  entrypoint: pip
  args: ["install", "-r", "requirements.txt", "--user"]
- name: google/cloud-sdk
  args: ['gcloud', 'run', 'deploy', 'helloworld',
         '--image=us-central1-docker.pkg.dev/$PROJECT_ID/$_REPO_NAME/myimage:$SHORT_SHA',
         '--region', 'us-central1', '--platform', 'managed',
         '--allow-unauthenticated']
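If the question is where values like $_REPO_NAME come from: $PROJECT_ID and $SHORT_SHA are substitutions Cloud Build provides automatically, while underscore-prefixed ones are user-defined. A minimal sketch (the name and default value here are illustrative):

```yaml
substitutions:
  _REPO_NAME: 'my-repo'  # default; can be overridden per build or per trigger
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t',
         'us-central1-docker.pkg.dev/$PROJECT_ID/$_REPO_NAME/myimage:$SHORT_SHA', '.']
```

The default can be overridden at submit time, e.g. gcloud builds submit --substitutions=_REPO_NAME=other-repo, or in a build trigger's substitution settings.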
https://redd.it/pocxsp
@r_devops
Cache | Caching | Create Redis in Azure and Integrate in API and check performance | E2E Demo | Beginner Series
https://www.youtube.com/watch?v=npBGXYuf1JA
https://redd.it/pocodl
@r_devops
Anyone have an AWS to GCP guide?
I'm probably going to be working in a GCP environment. I'm pretty well versed in AWS land. Is there a quick terminology guide with some substance as an option?
Thanks.
https://redd.it/pogodh
@r_devops
Best query functionality for logging/observability products?
I'm wondering if anyone can compare products like plain SQL, Datadog, ELK, Splunk, and others, and how strong their actual query languages are.
I keep seeing people bring up ease of deployment and cost in discussions of these products, but not how useful their data querying and transforming abilities actually are - be it tables, dashboards, or multiple layers of logic, transforming, and joining.
I've used enough of them to know that straight SQL just isn't strong or flexible enough when working on huge dumps of data that aren't locked into a schema and are constantly changing and adding more sources.
Opinions? Things to consider when choosing?
https://redd.it/poj0s8
@r_devops
Terraform routine use of -target
Our team routinely uses -target regardless of terraform's recommendation to not do that. We have a repo with TF code for our whole infra split in many "product"-based modules which we apply in a few TF environments (production, staging and a couple supporting ones). Generally when someone works on a part of the infra (usually some product/app/api/etc) they apply the relevant module(s) with -target while others working on other parts apply on other targets at the same time (not absolutely simultaneously, we do respect the lock mechanism). The only problem that arises from this kind of use is when applying stuff in a couple core/common modules, where other people need to pause their work, wait for a full apply & merge of the changes and then continue.
So I'm not sure why Terraform "looks down" on this method of doing things (it prints five lines of warning text on each plan/apply). The alternatives would be either to split our stack into multiple sub-environments, which would require a ton of boilerplate code setting up all the providers and such, or to always push changes to the repo first, merge, and then apply automatically through CI, which sounds very cumbersome and bureaucratic.
Maybe the organisation & infrastructure size is an important variable in deciding the proper approach. In our case we're a company of a few hundred people overall, our devops/infra team that develops our TF code are 5 devs and our AWS bill is in the low 5 figures.
I'd appreciate your opinions, recommendations and examples of other approaches.
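For concreteness, the targeted applies described above look like this (the module name is hypothetical):

```shell
terraform plan  -target=module.payments_api
terraform apply -target=module.payments_api
```

-target accepts module or resource addresses; the warning exists because resources excluded from a targeted apply can drift from what the configuration declares until a full apply runs.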
https://redd.it/poo7un
@r_devops
Ansible - get all servers with same variable value
My Ansible hosts file looks something like this:
[KNM] ## (fake provider name)
prod1 srvloc="JPN1"
prod2 srvloc="JPN1"
prod3 srvloc="JPN1"
[PNA]
prod4 srvloc="JPN2"
prod5 srvloc="JPN2"
prod6 srvloc="JPN2"
[JAPAN:children]
KNM
PNA
In my Ansible playbook, I'm trying to loop through all the servers that have the same value of srvloc. Is it possible to do in Ansible? How can I list all the servers that have the same value of srvloc as the current target server?
This is a small example; in reality I have lots of variables for servers, which is why simply getting the current group name is not enough (not to mention that, as seen in the example, each server is a member of multiple groups).
Huge thanks ahead!
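One approach that may work here, sketched against the inventory above as an untested example: pull every host's variables out of hostvars and filter on srvloc with Jinja2 filters.

```yaml
- name: List hosts whose srvloc matches this host's srvloc
  debug:
    msg: "{{ same_loc_hosts }}"
  vars:
    same_loc_hosts: >-
      {{ groups['all']
         | map('extract', hostvars)
         | selectattr('srvloc', 'defined')
         | selectattr('srvloc', 'equalto', srvloc)
         | map(attribute='inventory_hostname')
         | list }}
```

Because it filters on the variable's value rather than on group membership, this keeps working however many groups each server belongs to.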
https://redd.it/pooqdt
@r_devops
Incident management system for NOC, DevOps, SRE on shift
Hi
I realized that most existing incident management systems are oriented toward automatic notifications/escalation and don't provide a good enough UI or features for operators who work with alerts 24/7.
They work well if:
you don't need to do additional troubleshooting
you don't have false-positive/flapping alerts
Otherwise, you need some people (NOC, Support, DevOps, etc.) behind the screen managing the alerts.
I decided to create an incident management system that collects alerts from different monitoring systems and provides a simple way for multiple teams to manage them in one place.
Please take a look and share your feedback:
Playground - https://playground.harpia.io/#/login-?demo=true
High-level comparison - https://medium.com/@the.harpia.io/incident-management-systems-harp-vs-pagerduty-92adf6c025ce
There are a lot of ideas to extend it - for example, correlating alerts, showing root causes, or enriching alerts with additional info. But first I need to understand whether it makes sense for the people who would work with it.
https://redd.it/pop819
@r_devops
It took me over two years to create this (Free) Docker, DevOps, Kubernetes, Ansible Courses. Excited to introduce Thetips4you. Learn how your app is deployed using docker, kubernetes, with automation, and a CI pipeline.
Hello from Thetips4you. There are two widely used mediums for self-directed learning: books and lectures. I have been working on creating a new learning medium designed specifically for self-learners. Thetips4you is rather a better book - you can think of it as a book that talks. I like to think of it as your personal tutor.
You can visit Thetips4you (https://www.youtube.com/channel/UCoOq-DtESvayx5yJE5H6-qQ/videos) right away to take a look and share feedback. You don't need to sign up or make any payment; it is completely free.
The DevOps course is actually around 100+ lectures; on average you can spend 15-20 minutes a day to complete it. The content ranges from introductory to advanced and is better than books and paid lectures (in many cases).
All of this is free - yes, free - no need to buy a course from a random dude on the internet.
The other courses - Docker, K8s, Ansible, Prometheus, Grafana, GitLab, Splunk & more - are also available for free. While other paid learning websites may provide you with certificates of completion, I believe personalized and practical examples will be much more useful for you in the long run.
I am excited to hear from you, folks.
If you face any issues, please feel free to hit me up on Facebook or in the Thetips4you channel comment section.
https://redd.it/poq30p
@r_devops
How to run integrationtest with docker-compose and jenkins
Hello.
I have a Spring Boot project in which I'm using a tag to run my integration tests.
My compose file is straightforward, with a network and three services, one of which depends on the other two. Works brilliantly.
I'm at my integration-test stage, but I can't get it to work.
This is my stage:
steps {
    sh 'docker-compose -f docker-compose.dev.yml up'
    sh './gradlew integrationtest'
}
and while it builds all my services and I see the health checks on both non-third-party services, it never returns and never runs ./gradlew integrationtest.
How do I move on to the next CLI command once all three services have come up successfully?
This is my cleaned up version of docker compose:
version: "2.4"
services:
  rabbitMQ:
    image: rabbitmq:management-alpine
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - backend
  subscribe:
    image: image from personal registry
    ports:
      - '8080:8080'
    networks:
      - backend
  push:
    build:
      dockerfile: Dockerfile
      context: .
    networks:
      - backend
    ports:
      - '8181:8181'
    volumes:
      - ./auth:/auth/:ro
    depends_on:
      - rabbitMQ
      - subscribe
networks:
  backend:
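One likely culprit, sketched here as a hedged suggestion rather than a verified fix: docker-compose up without -d runs in the foreground, so that sh step never returns and the Gradle step never starts. Running detached lets the pipeline continue:

```groovy
steps {
    // -d detaches so this step returns; --build rebuilds local images first
    sh 'docker-compose -f docker-compose.dev.yml up -d --build'
    // crude readiness wait - polling the services' health endpoints would be more robust
    sh 'sleep 15'
    sh './gradlew integrationtest'
    // tear the stack down afterwards (a post { always { ... } } block is the cleaner home for this)
    sh 'docker-compose -f docker-compose.dev.yml down'
}
```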
https://redd.it/pos7d7
@r_devops
A client wants in two weeks as a deliverable a document of how they can implement DevOps internally in their company. They are a traditional business in the sense Devs and Ops are definitely in separated silos. What are some ideas / thoughts that I could insert in this document?
I know that DevOps implementation is unique per environment, organization, client, etc.
But I was thinking... are there any "general rules" I should follow when thinking about how I can set up some guidelines so my client can implement DevOps in their company?
I did some research and found the following recommendations from Google Cloud:
1. Form a Site Reliability Engineering team: Site Reliability Engineers (SREs) are a mix of application SMEs and operations SMEs who are focused on operational reliability through procedural automation. Provide these SREs a mandate to automate every aspect of operations. The SRE team will continue to be responsible for the deployment of all infrastructure, but will do so via repeatable declarative configurations.
2. Only permit automated access to production resources: Taking developer access away from production resources can be a psychological challenge for Agile software developers and should be avoided. Instead of removing access, insist that all production access be scripted and tested in a staging environment. Software engineers will begin to work with SREs to develop reusable libraries and tools to probe and reconfigure production infrastructure. These tools can be used, maintained, and further enhanced by the SRE team.
3. Separate your environments: Google recommends creating at least three environments:
1. Dev: Environments where software engineers have full access to play, test, and do their work. While the SRE team will probably not need regular access to these development environments, the SRE team's automation capabilities can be used in these projects to ease the creation of desired infrastructure.
2. Stage: A single staging environment where the SRE team and software engineers collaborate on implementing and testing automation capabilities.
3. Prod: A single production environment where the SRE team uses scripts and automation developed and successfully tested in the staging environment to affect change.
4. Fully implement Continuous Integration: To free the SRE team from having to deal with production issues stemming from minor development errors:
1. All code paths need associated unit tests.
2. All checked-in code must pass a peer review, which includes checking for unit tests.
3. All checked-in code must pass an automated build process that implements the full battery of unit tests.
It is recommended to set up a code repository structure that is capable of containing an entire ACME enterprise architecture and integrate with the CI/CD process.
5. Fully implement Continuous Delivery: Once a successful build has been created, automate its rollout to the staging environment. Despite the ability to automate the process, most organizations elect to have a manual review of the new build in a staging environment and have someone "press the button" to initiate the phases of the production deployment.
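A CI gate along the lines of recommendation 4 might be sketched like this (GitHub Actions syntax; the file name and test command are illustrative, and any CI system with per-merge checks works equally well):

```yaml
# .github/workflows/ci.yml - every pull request must pass the full unit-test battery
name: ci
on: [pull_request]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and run the full battery of unit tests
        run: ./gradlew build test
```

Combined with branch protection requiring this check plus a peer review before merge, this covers points 4.1-4.3 above.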
I don't know if there are other ideas / guidelines I could use for this document. Do you have any ideas from your current / past experience with DevOps?
Thanks in advance!
https://redd.it/pot6gu
@r_devops
Is there a way to monitor per-pod disk i/o for pods using Rook/Ceph including per-osd pod usage?
I am trying to find a way to monitor per-pod/node and per-OSD/pod disk usage and I/O operations, in order to debug some I/O problems on certain nodes.
Right now I am using the default Prometheus monitoring instance and kube-prometheus-stack, but none of the metrics seem to show any connection between Ceph/OSD usage and pods/PVCs, including the default exporter for Rook/Ceph:
https://github.com/rook/rook/blob/master/Documentation/ceph-monitoring.md
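For the per-pod half, the cAdvisor metrics that kube-prometheus-stack already scrapes expose container filesystem I/O. These queries are a starting sketch only - they give pod-level throughput, not the Ceph/OSD-to-PVC mapping the question also asks about:

```
# bytes written per second, summed per pod
sum by (namespace, pod) (rate(container_fs_writes_bytes_total[5m]))

# bytes read per second, summed per pod
sum by (namespace, pod) (rate(container_fs_reads_bytes_total[5m]))
```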
https://redd.it/poqdcn
@r_devops
What are your hours? 9-5?
I was wondering how many hours a day people in devops usually work. With my new job starting soon, I wanted to set realistic expectations for myself for what is standard in the industry. What is your daily schedule?
https://redd.it/povfwj
@r_devops
Aws data api or just expose the database network to the public?
I'm trying to set up a database to connect to an AWS Lambda, and there doesn't really seem to be any straightforward way to do it. I always have to set up a bunch of other services for access that each cost another $20 a month (expensive for just a hobby project).
Is AWS Aurora with the Data API enabled accessible from a non-VPC Lambda (without a ton of setup and costly services)? Or should I just use a regular MariaDB RDS instance, set a really long password, and expose the network to the public?
I gotta say, the Lambdas were WAY easier to set up than this. Am I missing something obvious, or is database hosting just like that for now?
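On the Data API question: it runs over HTTPS, so a Lambda outside any VPC can reach it through boto3's rds-data client with no VPC networking. A minimal sketch - the ARNs, database name, and query are placeholders, and the actual AWS call is kept behind the __main__ guard:

```python
def data_api_params(sql: str, cluster_arn: str, secret_arn: str,
                    database: str = "mydb") -> dict:
    """Build the keyword arguments for rds-data's execute_statement call."""
    return {
        "resourceArn": cluster_arn,   # the Aurora cluster ARN
        "secretArn": secret_arn,      # Secrets Manager secret holding the DB credentials
        "database": database,
        "sql": sql,
    }

if __name__ == "__main__":
    import boto3  # only needed when actually calling AWS

    # Placeholder ARNs - substitute your own cluster and secret.
    cluster = "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster"
    secret = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret"

    client = boto3.client("rds-data")
    response = client.execute_statement(**data_api_params("SELECT 1", cluster, secret))
    print(response["records"])
```

Billing for the secret and for Aurora capacity still applies, so this removes the VPC plumbing rather than all of the cost.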
https://redd.it/pouul4
@r_devops
Infrastructure as SQL
After having worked with numerous engineering teams of different sizes, from a few people up to several hundred, we have seen first-hand how difficult it is to safely make and revert changes to infrastructure in a microservices architecture. This is a consequence of current Infrastructure as Code solutions not having truly good ways to define dependencies between the different pieces of infrastructure.
What software you have deployed on what services, and the interactions between them, is not a program; it is information about your infrastructure. Changing your infrastructure is a set of operations to perform - a program. A SQL database is a set of information, and SQL queries read or change that data. We are humbly putting forward our ideal way to describe cloud infrastructure, one that is familiar and makes the relations between pieces of your infrastructure first-class citizens. There is more information about Infrastructure as SQL in this post.
https://redd.it/pomj83
@r_devops
Istio Install/Upgrade Strategies
Does anyone have experiences (good or bad) with Istio upgrades in production, or suggestions on deployment strategies (i.e. canary, blue/green, etc.)?
Also interested to hear of experiences with the various install options (i.e. Helm vs. operator vs. istioctl, etc.).
https://redd.it/poygit
@r_devops
Key areas in first DevOps position.
I recently started a new DevOps position at my job. I took over from a guy who left the company. I don’t have much actual DevOps experience, so I’ve been trying to pick things up as I go.
I spend a lot of my time working on Linux virtual machines, so I’ve been trying to improve my Linux knowledge and my scripting.
I also plan on getting hands-on with Jenkins and other automation tools. I’m also working on a project using AWS and Azure.
Any seasoned DevOps pros, please advise me on how to succeed in this role. I don’t mind doing the hard work but want to make sure I’m on the right track.
https://redd.it/poyeuw
@r_devops
https://firehydrant.io/blog/firehydrant-plugin-for-backstage/
FireHydrant has launched the first incident management plugin for Backstage. With the FireHydrant plugin, we bring FireHydrant’s incident management and analytics into Backstage, so you can quickly and efficiently manage your incidents from within Backstage. Teams can stay organized and easily surface information about services such as recent active incidents, incident analytics, service healthiness, time impacted, and MTT (Mean Time To ) data. The FireHydrant plugin will soon be available on the SaaS Backstage provider, Roadie.
https://redd.it/poy15k
@r_devops
Now available: FireHydrant plugin for Backstage
What is the most complicated CI/CD system you have had to deal with or build?
There being many tools and ways of doing this, what is the most complicated setup you have had to deal with?
https://redd.it/poo8p0
@r_devops
k3d - k3s in Docker
Is it recommended to use k3d for production?
I have just started exploring k3d, and it's a really amazing tool to work with, especially for local Kubernetes development, but I am not sure whether we should use it in production or not.
So far in local development I haven't faced any issues that would stop me from using it in prod, but being a beginner I am not very comfortable with the production use cases,
so can anyone help me with this?
https://redd.it/pp8ol4
@r_devops