How did devops work before the onset of cloud computing?
Sometimes it seems that many people rely a ton on cloud technologies and these few questions came to mind.
1. How did devops work before the onset of cloud computing?
2. What technologies would you use today to achieve this?
3. When would you use the old approach?
4. What have you learnt from it?
https://redd.it/lpnnkq
@r_devops
A (over)simplified comparison of DevOps, SecOps and DevSecOps
Mild entertainment purposes only.
**DevOps**
"Launch new code daily!"
Priority - rapid delivery of value
Bring devs and ops on the same page
More likely to use public cloud
Example - social media app

**SecOps**
"Protect this fortress!"
Priority - security above all else
Integrate security practices into ops
More likely to run on-prem
Example - human clinical trial tool

**DevSecOps**
"Mission-critical and pronto!"
Priority - scale-up securely while delivering value
Bring dev, security and ops on the same page
More likely to use hybrid infrastructure
Example - fast-growing fintech
https://redd.it/lpl3ky
@r_devops
Automate baseline deployment
I am seeking some suggestions from the greatest community ever.
So my environment consists of Linux and Windows nodes. They are connected to each other in a localised network
Baselines are sent to me regularly, i.e. on a weekly basis, by a team based overseas. I download the patches from an internal server and deploy them manually, since my environment is not connected to the internet or an intranet, and making it internet- or intranet-facing is not possible at all.
The challenge is that not all baselines are the same: some replace drivers, some edit configuration, and some install updated software. Each set of patches comes with a document that tells me exactly what to do, so I just blindly follow the document.
This process is very tedious and annoying - imagine doing it every week.
Is there any way to automate this?
Edit: There is no way of knowing what changes a baseline makes; I only know after I have deployed it.
https://redd.it/lpo6ag
@r_devops
Need some help deciding the best DevOps strategy (AWS hell)
Hello! Long story short, I'm forcing myself to learn AWS, and to practice, I'm trying to deploy a side project, but it's fighting me every step of the way. Here's what I want to end up with:
Frontend (mysite.com) <-- HTTPS --> Backend (api.mysite.com) <---> Database (RDS/Postgres)
Nothing groundbreaking: a React frontend that talks to a basic CRUD server (likely Express.js) hosted under the `api` subdomain, with a Postgres database. Ideally, I also want pipelines for the frontend and backend that propagate from GitHub.
It doesn't make sense to me to deploy to an EC2 instance as I'll have to pay for all the uptime. I tried setting up the backend API with AWS API Gateway and AWS Lambda in a serverless way, but connecting this to the RDS database was a nightmare.
I feel like this should be super simple, but it's been days of stress. Can anyone please point me in the right direction?
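In case it helps others hitting the same wall: the usual cause of the Lambda-to-RDS pain is networking - the function has to be placed in the same VPC/subnets as the database, with its security group allowed through the DB's security group. A hypothetical AWS SAM fragment sketching that (all IDs, paths, and names are placeholders, and this is one option among several):

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./backend              # assumed code location
      Handler: index.handler
      Runtime: nodejs14.x
      VpcConfig:                      # put the function next to RDS
        SecurityGroupIds: [sg-0123456789abcdef0]       # placeholder
        SubnetIds: [subnet-aaaa1111, subnet-bbbb2222]  # placeholders
```

With the function in the VPC, the RDS security group then needs an inbound rule on port 5432 from the function's security group.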
https://redd.it/lpmi91
@r_devops
Argo CD Vault Replacer Plugin
I recently collaborated on an Argo CD plugin called ArgoCD-Vault-Replacer. It allows you to merge your code in Git with your secrets in Hashicorp Vault to deploy into your Kubernetes cluster(s). It supports ‘normal’ Kubernetes yaml (or yml) manifests (of any type) as well as argocd-managed Kustomize and Helm charts.
The plugin came about because I'm currently pushing my company deeper and deeper into GitOps, and the thorny topic of secrets management came up. We already have Vault in place, so we went looking at the existing options available to us. For one reason or another, they weren't quite right for us, so this plugin was born. Of course, there's no guarantee that this is right for you; there are other great solutions out there.
It works by you first authenticating the Argo CD pods with Vault using Vault's Kubernetes Auth Method. Then you simply modify your yaml (or yml, or Helm, or Kustomize scripts) to point it at the relevant path(s) and key(s) in vault that you wish to add to your code.
In the following example, we populate a Kubernetes Secret with the key secretkey on the path path/to/your/secret. As we are using a Vault kv2 store, we must include ../data/.. in our path. Kubernetes secrets are base64 encoded, so we add the modifier |base64 and the plugin handles the rest.
apiVersion: v1
kind: Secret
metadata:
  name: argocd-vault-replacer-secret
data:
  sample-secret: <vault:path/data/to/your/secret~secretkey|base64>
type: Opaque
When Argo CD runs, it will pull your yaml from Git, find the secret at the given path and will merge the two together inside your cluster. The result is exactly what you’d expect, a nicely populated Kubernetes Secret.
If you’re already using Argo CD and Vault, then this is really simple to set up and start using. Please do try it out, and issues, comments and PRs are more than welcome: github.com/crumbhole/argocd-vault-replacer
https://redd.it/lphgti
@r_devops
a curated collection of resources on how orgs around the world practice Site Reliability Engineering
How They SRE is a curated knowledge repository of best practices, tools, techniques, and culture of SRE adopted by the leading technology or tech-savvy organizations.
Many organizations regularly come forward and share their best practices, tools, techniques and offer an insight into engineering culture on various public platforms like engineering blogs, conferences & meetups. The content is curated from these avenues and shared in this repository.
https://github.com/upgundecha/howtheysre
https://redd.it/lpdfcg
@r_devops
Success Stories of running Autopilot on Autopilot
OpsMx Autopilot is an AI/ML-powered continuous verification platform that verifies software updates across different deployment stages in CI/CD pipelines, ensuring their safety and reliability in live/production environments. It automates new-release verification, reducing time-consuming and error-prone manual verification. Autopilot uses AI and ML to assess the risk of a new release, find the root cause of issues and abnormalities for instantaneous diagnosis, and provide real-time visibility and insight into the performance and quality of new deployments to avoid business disruption.
In this blog, we discuss how Autopilot can be used to analyze and improve the release and deployment of a product. To understand Autopilot's efficiency, we at OpsMx decided to use it on our own product: Autopilot itself. This is how we came up with the name "Autopilot on Autopilot".
https://redd.it/lpj3qr
@r_devops
Resource Limits in Kubernetes
Hello Community,
I am trying to understand CPU utilization by pods. I have a single-node k3s cluster set up on an 8-core bare-metal server, and I am running some benchmarks against a MySQL pod with a NodePort service using sysbench. I've set the CPU limit to '500m', but during the tests I noticed in the top utility that the pod's process seems to use all the CPUs. Do pods use all cores irrespective of the limits defined? I need some help understanding this in more detail, as I couldn't find it in the official Kubernetes docs.
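For reference, a minimal sketch of the kind of spec described (the pod name and image are assumptions). A CPU limit is enforced as a CFS quota on total CPU time per scheduling period, not as a core count, so the container's threads can appear on all 8 cores in top while their combined usage is throttled to half a core:

```yaml
# Hypothetical pod spec illustrating the limit from the post.
# cpu: "500m" caps aggregate CPU time at 0.5 cores per CFS period;
# it does not pin the pod to a single core, so `top` can still show
# the process's threads scheduled across every core.
apiVersion: v1
kind: Pod
metadata:
  name: mysql-bench     # assumed name
spec:
  containers:
  - name: mysql
    image: mysql:8.0    # assumed image/tag
    resources:
      limits:
        cpu: "500m"
```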
Thanks
https://redd.it/lpj1sb
@r_devops
Dynamically launch multiple auto-scaling groups with Terraform?
I want to dynamically launch & tear-down auto-scaling groups for batch processing jobs.
The thing about Terraform is, I know I can use it to launch 1/2/3/etc. auto-scaling groups. But I don't know if I can use it to dynamically launch a new auto-scaling group and then tear it down later.
Is there a good solution for this?
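A sketch of one possible approach, not a definitive answer: drive `for_each` from a variable, so adding a job name to the set creates an ASG on the next `terraform apply` and removing it tears that ASG down. All names, sizes, the AZ, and the launch template reference below are assumptions:

```hcl
variable "batch_jobs" {
  type    = set(string)
  default = ["job-a", "job-b"]   # edit (or pass via -var) per batch run
}

resource "aws_autoscaling_group" "batch" {
  for_each           = var.batch_jobs
  name               = "batch-${each.key}"
  min_size           = 0
  max_size           = 10
  desired_capacity   = 1
  availability_zones = ["us-east-1a"]        # placeholder AZ

  launch_template {
    id      = aws_launch_template.batch.id   # hypothetical launch template
    version = "$Latest"
  }
}
```

The trade-off is that whatever runs `terraform apply` (a job scheduler, a pipeline) becomes part of the launch/tear-down path.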
https://redd.it/lpaq4s
@r_devops
Level-Up Your Gitconfig: Platform-specific Configurations
https://medium.com/doing-things-right/platform-specific-gitconfigs-and-the-wonderful-includeif-7376cd44994d
I wrote this article a while back but finally got around to publishing it. It's been a really handy concept, since now I can finally share my dotfiles across OSes without having to do anything to accommodate Windows, Linux, or macOS.
There are a few other helpful examples in there as well to save time when navigating different platforms.
I am very curious how others do this as well.
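For context, the core trick looks roughly like this in `~/.gitconfig` (the matched path prefixes and per-OS file names are assumptions); `includeIf` pulls in a platform-specific file only when the repository's path matches the pattern:

```ini
# Sketch: load a platform-specific config based on where repos live.
[includeIf "gitdir:C:/"]        # Windows-style paths
    path = ~/.gitconfig-windows
[includeIf "gitdir:/Users/"]    # macOS home directories
    path = ~/.gitconfig-macos
[includeIf "gitdir:/home/"]     # typical Linux home directories
    path = ~/.gitconfig-linux
```

Because a non-matching include is simply skipped, the same top-level .gitconfig can be shared verbatim across machines.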
https://redd.it/lq360b
@r_devops
I made a desktop app where you can monitor apps that you depend on (right from your menu bar) and get notified when they're down
I created a free & open-source app where you can:
* Select services you depend on
* Check their status in your menu bar
* Get notified when they change their status
Could you try it out and let me know your feedback?!
Website: [https://instatus.com/out](https://instatus.com/out)
Github: [https://github.com/instatushq/out](https://github.com/instatushq/out)
https://redd.it/lpvvp1
@r_devops
GitHub Flow, CI/CD pipelines and UAT deployments
I haven't really thought about my Git workflow in a long time. I'm currently working on Azure DevOps Pipelines, got stuck on something and posed the question to SO. One of the comments was essentially: "Your branch-per-environment workflow is antiquated and doesn't work well with modern Git and continuous delivery thinking... you should reconsider your workflow and take a look at Git Flow or Github Flow."
I'm trying to be more DevOps minded, and I'm always looking for a way to "do things better," so I took that as an opportunity to reconsider how I've been using Git for several years now.
My workflow has pretty much been:
* Feature branch
* PR to merge feature branch to staging
* Approval triggers CI/CD pipeline that deploys to UAT site ([staging.example.com](https://staging.example.com))
* If everything looks good and is approved, staging is merged to the production branch
* Merge to production branch triggers CI/CD pipeline (and basically same tests are run again that were run for staging) to push to production ([example.com](https://example.com))
Thus, staging branch is tied to the staging pipeline and production branch is tied to the production pipeline (hence "branch-per-environment").
I spent the past few hours reading up on various strategies, and GitHub Flow sounded like a good fit, even though what I'm doing now seems closer to Git Flow. We are a really small team, and even the author/creator/whatever of Git Flow recommends using GitHub Flow instead.
But it left me with a few questions:
1. How does a UAT site fit into the Github Flow model?
2. If it doesn't fit into the model, is UAT just supposed to be done on the production site?
3. Obviously, a change that breaks production is a concern and the staging layer gives a peace of mind that won't happen, so is the end goal to have your tests so rock solid that breaking changes to production don't slip through the tests?
4. How isn't Git Flow a "branch-per-environment" scenario?
I guess I'm just looking for an elaboration of CI/CD pipelines in a GitHub Flow model.
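For what it's worth, here is a hedged sketch of how GitHub Flow can map onto a single Azure Pipelines definition: every merge to main deploys to the staging/UAT site, and an approval check configured on a "production" environment in the Azure DevOps UI gates the final stage, so no second branch is needed. Script names and environment names are assumptions:

```yaml
trigger:
  branches:
    include: [main]

stages:
- stage: Test
  jobs:
  - job: RunTests
    steps:
    - script: ./run-tests.sh          # assumed test entry point

- stage: DeployStaging                # staging.example.com
  dependsOn: Test
  jobs:
  - deployment: Staging
    environment: staging
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh staging

- stage: DeployProduction             # example.com; gated by an approval
  dependsOn: DeployStaging            # check on the environment
  jobs:
  - deployment: Production
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
          - script: ./deploy.sh production
```

In this shape the environments, not branches, are tied to the pipeline stages, which is what distinguishes it from a branch-per-environment setup.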
https://redd.it/lq3an8
@r_devops
Reverse Proxy - Programmable with provisioning of TLS Certs?
I'm trying to author a SaaS/PaaS solution in my basement, and I'm running into a barrier with scalability. I'd like to allow clients to sign up on my website and ask them to point an A record to my IP (they will tell my site the FQDN). While they're working on the A record, my site has already instructed the reverse proxy to forward incoming HTTPS requests for that FQDN to my web application, and has already begun provisioning a cert with Let's Encrypt. Obviously, Let's Encrypt won't be able to issue the cert until the A record propagates.
That said, my point is that I'd like to figure out a way that omits me and my hands from the equation: I don't want to sit at the ready to hand-enter configuration for the new FQDN in nginx. I also don't want to pay handsomely for NGINX Plus for the privilege of using their API. If it came to that, I'd be willing to write a microservice that sits on top of an nginx instance, listens for calls indicating a configuration change, writes out the new config, and sends a SIGHUP to nginx. None of that is desirable, though.
I looked into Traefik, which auto-provisions let's encrypt certs, but the Traefik API seems to be read-only. I also looked into Fabio, which does allow for hot configuration changes through Consul (among others) but it doesn't seem to have any facility for getting certs issued without outside intervention.
Does anyone have any ideas for me to look into? Thanks.
https://redd.it/lq3d0y
@r_devops
An essential library for functional programming lovers in Golang
Go does not provide many essential built-in functions for working with data structures such as slices and maps. This library provides the most frequently needed utility functions, inspired by Lodash (a JavaScript utility library).
https://github.com/rbrahul/gofp
Your feedback would be much appreciated.
https://redd.it/lq75qj
@r_devops
Linux Foundation Certified IT Associate (LFCA)
Colleagues, the Linux Foundation Certified IT Associate (LFCA) exam demonstrates a user's expertise and skills in fundamental information technology functions, especially in cloud computing. The LFCA is a pre-professional certification intended for those new to the industry or considering starting an IT career as an administrator or engineer, and for users interested in advancing to the professional level through a demonstrated understanding of critical concepts for modern IT systems, including cloud computing.
LFCA tests candidates' knowledge of fundamental IT concepts, including operating systems, software application installation and management, hardware installation, use of the command line and basic programming, basic networking functions, security best practices, and other related topics, to validate their capability and preparedness for an entry-level IT position.
Domains and competencies:
1. Linux Fundamentals (20%)
2. System Administration Fundamentals (20%)
3. Cloud Computing Fundamentals (20%)
4. Security Fundamentals (16%)
5. DevOps Fundamentals (16%)
6. Supporting Applications and Developers (8%)
This program includes LFCA certification valid for 3 years, 12-month exam eligibility, a free retake, and a multiple-choice certification exam.
Enroll today (individuals & teams welcome): https://fxo.co/BOhH
Much career success, Lawrence E. Wilson - Online Learning Central (https://tinyurl.com/2re6558z)
https://redd.it/lq3wv9
@r_devops
Octopus Deploy Email Notifications
Curious if anyone has come up with a decent Octopus Deploy email notification template that only lists the deployed/completed steps, excluding the skipped ones. The template that Octopus provides in their email-notification how-to lists every step in a deployment even if it was excluded. I've put this together, and it outputs way too much. Thoughts?
Current code used in body:
<h2>Deployment of #{Octopus.Project.Name} #{Octopus.Release.Number} to #{Octopus.Environment.Name}</h2>
<p><em>Initiated by #{unless Octopus.Deployment.CreatedBy.DisplayName}#{Octopus.Deployment.CreatedBy.Username}#{/unless}#{if Octopus.Deployment.CreatedBy.DisplayName}#{Octopus.Deployment.CreatedBy.DisplayName}#{/if}#{if Octopus.Deployment.CreatedBy.EmailAddress} (<a href="mailto:#{Octopus.Deployment.CreatedBy.EmailAddress}">#{Octopus.Deployment.CreatedBy.EmailAddress}</a>)#{/if} at #{Octopus.Deployment.Created}</em></p>
<h3>Deployment process</h3>
<p>The deployment included the following actions:</p>
<ul>
#{each action in Octopus.Action}
  <li><strong>#{action.Name}</strong> #{if action.Package.NuGetPackageId}— #{action.Package.NuGetPackageId} <em>version #{action.Package.NuGetPackageVersion}</em>#{/if}</li>
#{/each}
</ul>
<h4>Task summary</h4>
<ol>
#{each step in Octopus.Step}
#{if step.Status.Code}
  <li>#{step | HtmlEscape} — <strong>#{step.Status.Code}</strong>
  #{if step.Status.Error}<pre>#{step.Status.Error | HtmlEscape}</pre><pre>#{step.Status.ErrorDetail | HtmlEscape}</pre>#{/if}</li>
#{/if}
#{/each}
</ol>
https://redd.it/lq1psj
@r_devops
Building a Home Cloud with Proxmox: DNS + Terraform
This is part of my series on setting up a Kubernetes cluster at home using Proxmox and Terraform.
https://blog.sunshower.io/2021/02/22/building-a-home-cloud-with-proxmox-dns-terraform/
https://redd.it/lq1gbm
@r_devops
Thinking about creating a web app to keep track of upgrades
Hi friends!
So a problem I noticed working in the field is making sure that upgrading components of a platform doesn't break the platform itself. I handle this by "researching", i.e. reading through release notes and noting down possible conflicts.
I was thinking of creating a web app where we can track upgrades, dependencies, and potential conflicts, and mark each upgrade as "Do it", "Skip", or "On latest version". I'm also thinking of an anonymous sharing feature where we can share our research, so that someone else doing the same upgrade has a reference (or, if they're really lazy, they can just rely on the researcher's findings). Maybe once the app gets traction, I can invite the companies responsible for the components to contribute as well.
What do you guys think? Is this a viable app idea? Any suggestions?
Thanks!
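The tracking model described above could be sketched in a few lines of Python; the class, field, and enum names here are purely illustrative (not from the post), and the conflict note is a made-up example:

```python
from dataclasses import dataclass, field
from enum import Enum

# The three decisions the post describes for each upgrade.
class Decision(Enum):
    DO_IT = "Do it"
    SKIP = "Skip"
    ON_LATEST = "On latest version"

@dataclass
class UpgradeRecord:
    component: str            # e.g. "nginx"
    current_version: str
    target_version: str
    decision: Decision = Decision.DO_IT
    conflicts: list = field(default_factory=list)  # notes from release-note research

    def add_conflict(self, note: str) -> None:
        """Record a potential conflict found while reading release notes."""
        self.conflicts.append(note)

# Example: tracking a single component upgrade
rec = UpgradeRecord("nginx", "1.18.0", "1.20.1")
rec.add_conflict("a config directive was renamed between these versions")  # hypothetical note
rec.decision = Decision.SKIP
```

The anonymous-sharing idea would then amount to publishing these records keyed by (component, current_version, target_version) so others upgrading along the same path can reuse the research.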
https://redd.it/lptni0
@r_devops
Flask app inside docker container
So I deployed 2 Flask apps in 2 separate Docker containers.
Each app has 2 endpoints:
/testHealth - this endpoint hits the same container you call it from and returns a JSON response saying "flask running"
/testComms - this endpoint hits the other container's /testHealth endpoint
Turns out /testHealth works but /testComms isn't working; it fails with a server error (HTTP 500).
App1 runs on port 5000 and app2 on port 6000.
localhost:5000/testHealth works, while localhost:6000/testComms fails with a 500 error.
Upon inspection, with the newest Docker update it seems you need to replace localhost with the IP address of your Docker container. In my case it was 172.XX.X.X, so 172.XX.X.X:5000/testHealth returns the correct response.
PS: my Docker Desktop is updated to the latest version, I have forwarded the ports with the -p flag, and my Flask apps bind to host 0.0.0.0. I am using a 2019 MacBook Pro with Big Sur.
Is this something Docker hasn't documented yet?
https://redd.it/lptmjf
@r_devops
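For context on the symptom above: inside a container, `localhost` refers to that container itself, not to the host or to sibling containers, so container-to-container calls usually go through a user-defined Docker network and the peer's container name. A minimal sketch of how /testComms might build the peer URL, assuming a hypothetical PEER_HOST/PEER_PORT convention (the names app2, PEER_HOST, and PEER_PORT are illustrative, not from the post):

```python
import os

def peer_url(path: str) -> str:
    """Build the URL of the sibling container.

    Containers on a shared user-defined bridge network can resolve
    each other by container name, e.g. after
        docker network create appnet
        docker run --network appnet --name app2 -p 6000:6000 ...
    """
    host = os.environ.get("PEER_HOST", "app2")   # sibling container name
    port = os.environ.get("PEER_PORT", "6000")   # port the sibling listens on
    return f"http://{host}:{port}{path}"

# /testComms would then request peer_url("/testHealth") instead of
# http://localhost:6000/testHealth.
```

With that in place, the 172.XX.X.X address workaround becomes unnecessary, since container IPs can change between runs while container names stay stable.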
https://redd.it/lptmjf
@r_devops
A lost devops
Hello guys,
I'm a relatively young DevOps engineer (3 years of experience) looking for an interesting company to work at. I'm planning to leave my current position and relocate to Dublin.
I really love my job, but our team is small, and I end up being interrupted by level 1 & 2 support tasks way too often. After two years at this company, it feels like I need to move on if I want to improve my skills.
I've gotten different advice: "You should try to work for Google, AWS, ...; those are big companies with the most interesting positions." But also: "Why don't you apply to a small consulting company? In too big a structure you will be stuck in a box, whereas in a small one you will have more room to learn."
Now I don't know where I should start looking. Learning is extremely important to me, and so is being able to work on different projects. On the other hand, I cannot find any consulting company in Dublin that seems to display this kind of mindset. Maybe I'm heading the wrong way, or maybe I don't know how to search for what I'm looking for...
So if anyone has advice for a confused DevOps engineer, I would really appreciate it!
https://redd.it/lpsysh
@r_devops
You are on an island, and can only have Terraform or Ansible for IaC. Which do you choose and why?
Trying to decide which path to go down. We are using LocalStack, AWS, and mostly what they call serverless tools. It seems that both have a lot of pluses and minuses.
https://redd.it/lqk92n
@r_devops