Tech newsletters.
Hey community! ✌
I was wondering: what are your go-to sources for industry news? Any newsletters you're subscribed to?
I'm particularly interested in AI/ML/AIOps, Cloud, Open Source, IT Culture, DevOps tools, and IoT Security.
Thank you. 🙏
https://redd.it/o9i8g6
@r_devops
Transform legacy apps to Microservices using the DevOps approach
Read this blog post to learn how DevOps can help you transform your old apps into microservices.
https://redd.it/o9k3fq
@r_devops
Is DevOps an entry-level friendly job?
Can somebody without experience get hired for a DevOps job?
Thank you
https://redd.it/o9l8tq
@r_devops
Career Question - SysOps -> DevOps
Hi,
I'm coming from a 3-year systems engineer background and want to move into DevOps & Cloud engineering. I've got work experience with Linux, Cisco, Python, Ansible, and a bit of Azure. I built a self-hosted Kubernetes cluster on a few Pis and hosted my self-developed JS application on it. I've applied to various DevOps positions ranging from intern to junior to mid, but I always get rejected immediately.
What can I improve? More projects? Maybe some certs?
https://redd.it/o9k0s0
@r_devops
Ideas and Topics for DevOps/DevSecOps Speaking Sessions?
Hi all -
Trying to brainstorm some potential topics around DevOps/DevSecOps for speaking (30 min topics) at events like DevOps Days, etc.
What are some ideas/topics that you all would love to hear more about? Automation? Getting a foot in the door? Career transitions from Ops to DevOps? Culture?
I'd love to get some ideas from others on what topics you think might be missing in tech talks.
Yes, I'm polling the audience to help my brainstorm. :)
https://redd.it/o9jvr9
@r_devops
A GitHub Action that automatically generates & updates markdown content (like your README.md) from external or remote files.
Hi everyone! I just released markdown-autodocs, a GitHub Action that helps auto-document your markdown files. Please star the repo if you find it useful.
GitHub repo: https://github.com/dineshsonachalam/markdown-autodocs
Hacker News: https://news.ycombinator.com/item?id=27662736
https://redd.it/o9oo4j
@r_devops
Fork or Copy an Entire DevOps Organization or Project
I have an organization and 2 projects I would like to keep in sync across entirely different accounts and completely different environments. Is this possible? Having it completely automated would be great, but is there any way to export and import when needed, or something similar? I want to reuse all of my work from one DevOps tenant/environment in another.
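If the goal is mainly keeping the repositories themselves in sync (pipelines, boards, and other org-level settings would need a separate export path), one common low-tech option is a periodic mirror push. A minimal sketch, assuming `src_url` and `dest_url` are placeholder remote URLs with credentials already configured:

```python
import subprocess

def mirror_commands(src_url, dest_url, workdir):
    """Build the git invocations for a full mirror sync: clone all refs
    from the source remote, then force-push them to the destination."""
    return [
        ["git", "clone", "--mirror", src_url, workdir],
        ["git", "-C", workdir, "push", "--mirror", dest_url],
    ]

def mirror_repo(src_url, dest_url, workdir):
    # Run each git command, failing loudly if any step breaks.
    for cmd in mirror_commands(src_url, dest_url, workdir):
        subprocess.run(cmd, check=True)
```

Run on a schedule (cron, a pipeline job, etc.) this keeps every branch and tag identical on both sides; note that `push --mirror` also deletes refs on the destination that no longer exist at the source.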
https://redd.it/o9qljx
@r_devops
Debugging on AWS infrastructure
Situation:
There are 3 environments: prod, qa, dev.
All 3 are deployed using a CloudFormation template generated by the same Serverless Framework template.
All 3 are deployed using the same source code.
All 3 have fully working configurations.
Tech involved: AWS ECS + Fargate, AWS ALB, AWS Lambda, AWS API Gateway, AWS CloudFront.
Issue:
This started happening after the dev environment was redeployed from scratch, using the same Serverless Framework template. It wasn't happening before.
https://example.com/service/some-service/ returns HTTP 200 on qa and prod but fails with HTTP 403 on dev.
Everything else works as expected.
Questions:
1. How would you go about debugging this?
2. What questions would you ask?
3. What is your best educated guess on what is the issue?
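For question 1, a quick way to narrow things down is to work out which layer is actually emitting the 403, since CloudFront, API Gateway, and ALB each tend to stamp distinctive response headers. A rough sketch; the header heuristics are assumptions based on typical AWS behavior, not an exhaustive mapping:

```python
def guess_403_source(headers: dict) -> str:
    """Heuristically guess which AWS layer produced an error response,
    based on headers each service typically adds."""
    h = {k.lower(): v for k, v in headers.items()}
    if "cloudfront" in h.get("x-cache", "").lower():
        return "cloudfront"      # e.g. "Error from cloudfront"
    if "x-amzn-errortype" in h or "x-amzn-requestid" in h:
        return "api-gateway"     # API Gateway stamps x-amzn-* headers
    if h.get("server", "").lower() == "awselb/2.0":
        return "alb"             # ALB fixed-response / auth errors
    return "unknown"

# Probe all three environments and compare where the 403 originates:
# for env in ("prod", "qa", "dev"):
#     resp = requests.get(f"https://{env}.example.com/service/some-service/")
#     print(env, resp.status_code, guess_403_source(resp.headers))
```

If dev's 403 turns out to come from CloudFront, the usual suspects after a from-scratch redeploy are origin access / WAF settings or a stale origin pointing at the wrong target; if it comes from API Gateway, check resource policies, authorizers, and API keys.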
https://redd.it/o9qgxo
@r_devops
DevOps Days Amsterdam Online - JUNE 29, 2021
If you want to join https://devopsdays.org/events/2021-amsterdam/welcome/
https://redd.it/o9tom8
@r_devops
I built a reference architecture for global package logistics and learned a bunch about Terraform in the process + scaled up to 400k packages delivered per second!
I recently joined a new team at my company focused on innovation. Part of my new job is developing reference architectures with various technologies. For June, I decided to experiment with [SingleStore](https://www.singlestore.com) (a scale-out relational database) and [Redpanda](https://vectorized.io/redpanda) (a high-performance alternative to Apache Kafka).
While I am quite excited to share my [blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/), for the purposes of discussion in this subreddit I'd rather talk about how I used Terraform to deploy a reference architecture quickly.
tldr; [step-by-step instructions](https://github.com/singlestore-labs/singlestore-logistics-sim#deploying-into-google-cloud) and [Terraform module](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp)
My goal for this simulation was to eventually get it running on a cloud platform in order to show off its ability to scale. But I also wanted to start developing quickly, so with that in mind I started with docker-compose ([docker-compose.yml](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/docker-compose.yml)).
After getting the simulation working locally and building out a couple of dashboards in Grafana, I was ready to embark on deployment. I evaluated a couple of different approaches:
First, I thought about deploying into managed Kubernetes. On the plus side, since I had already gotten everything working with docker-compose, it seemed like it should be straightforward to lift and shift over. Unfortunately, in order to squeeze maximum performance out of this architecture I needed to make extensive use of [ephemeral storage](https://cloud.google.com/compute/docs/disks#localssds). While this is certainly possible in Kube, it's not easy to configure and tends to result in fragile clusters. On top of that, I wanted to control every variable in the simulation (again, performance first), and the additional layers of abstraction were worrying.
I pivoted from Kube to Terraform + Ansible. I had used Terraform before and had heard good things about Ansible, so it seemed like a great place to start. After setting up the resources I needed, I started to figure out how to hydrate my Ansible inventory from my Terraform state. It turns out this is something a lot of people want, and there are some solutions floating around, but none of them are easy or simple. My goal was a single command to bring up the whole infrastructure along with all of the software from a bare GitHub checkout. While it looked like Ansible could do it, it required more of an investment than I wanted for this short-term project (2 weeks).
Thus, I found myself with a bare Terraform module and a large number of hand-written setup scripts. Honestly, this ended up working very nicely. I [organized the scripts here](https://github.com/singlestore-labs/singlestore-logistics-sim/tree/main/deploy/terraform-gcp/scripts) and kept each one focused on a specific task, which ended up working somewhat like Ansible modules. Then, for each of my different host types, [I dynamically compiled a single script and uploaded it to Google Cloud Storage](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/scripts.tf#L60). Finally, I leveraged the [startup-script-url metadata option](https://github.com/singlestore-labs/singlestore-logistics-sim/blob/main/deploy/terraform-gcp/singlestore.tf#L24) on Google Cloud instances to make each instance download its script at startup. You will also see script variables injected via metadata as well.
So, how would I review my overall experience? I would have preferred to use something like Ansible, but the overhead of initializing inventory from dynamic state turned me off that option. Kube would have also been great, but ephemeral resources
like local disks aren't quite there yet. Perhaps something to revisit in the future. For now I am reasonably happy with my all-Terraform solution. The downsides of this approach include:
* Don't store sensitive data in metadata (like the license I am storing there) - anyone able to issue an HTTP request from one of the machines can easily exfiltrate it.
* A ton of bash scripts is not easy to debug and even harder to make idempotent - in this case I just blew everything away whenever there was an issue, but obviously don't do this in production.
* Not to mention there is no easy way to upgrade this solution - I would only trust something like a full backup + offline rebuild.
Well... this ended up being a lot longer than expected, but I figured this community might enjoy hearing some of the background behind the various decisions I made while quickly putting this system together. Check out [the blog post](https://www.singlestore.com/blog/scaling-worldwide-parcel-logistics-with-singlestore-and-vectorized/) if you want to learn more about this project's background, and be sure to leave comments if you have any questions about my solution!
Thanks for reading! :)
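The "compile one startup script per host type from task-focused pieces" idea described above can be sketched roughly as follows. The real project does this inside Terraform (scripts.tf); this is just the concatenation idea in Python, with hypothetical filenames:

```python
from pathlib import Path

def compile_startup_script(script_dir: str, parts: list) -> str:
    """Concatenate small task-focused shell scripts into a single
    startup script, keeping only one shebang at the top."""
    out = ["#!/usr/bin/env bash", "set -euo pipefail"]
    for name in parts:
        body = Path(script_dir, name).read_text()
        # Drop any per-script shebang; only the combined header keeps one.
        lines = [l for l in body.splitlines() if not l.startswith("#!")]
        out.append(f"# --- {name} ---")
        out.extend(lines)
    return "\n".join(out) + "\n"

# e.g. compile_startup_script("scripts", ["install-deps.sh", "start-agent.sh"])
# would then be uploaded to a bucket and referenced via startup-script-url.
```

Keeping each piece single-purpose gives you roughly the composability of Ansible roles while staying in plain bash, at the cost of having to handle idempotency yourself.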
https://redd.it/o9toig
@r_devops
Anyone here hold a security clearance and work with a modern tech stack (k8s, Terraform, AWS/GCP, Python/Go, etc)? How much more do you make with the clearance vs without? Is it worth getting one?
I'm considering a gig that requires obtaining a security clearance, but I'm worried about some things from my past regarding drug use (a drug that's legal at the state level, not federal, and I have no criminal record for it). Is it worth it to just go through with the process and try to obtain one? Passing a drug test and staying clean while holding a clearance wouldn't be an issue; my previous history with it might be.
https://redd.it/o9uvge
@r_devops
Direktiv: Docker development environment, VSCode plugin & Infrastructure-as-a-Chatbot
G'day DevOps,
Another update to our Direktiv event-driven serverless workflow engine - but this one focused on development. Release v0.3.1 included some bug fixes, improved stability and security enhancements, but more notably:
A Docker development environment (A Direktiv instance on your laptop or desktop!)
VSCode integration for workflow management and development
The update builds on the features released for the GitHub marketplace and hopefully makes it easier for developers (and non-developers alike) to create, verify, and deploy Direktiv workflows and plugins as containers!
The latest blog article is available at:
https://blog.direktiv.io/direktiv-new-dev-environment-vscode-plugin-ab047b7a8266
and the implementation docs at:
https://docs.direktiv.io/docs/development.html
We also created (as a PoC) an Infrastructure-as-a-Chat integration with Google Dialogflow (it provisions to AWS and GCP):
https://blog.direktiv.io/direktiv-cloud-provisioning-chatbot-part-1-f482bb9ea943
The second article is the first in a three part series - but gives a good overview of what was done :)
Finally, we added some new plugins to the direktiv-apps repository, one of which lets you run Terraform scripts (without having a Terraform environment):
https://github.com/vorteil/direktiv-apps/tree/master/terraform
As always - feedback is welcomed!!!
https://redd.it/o9wwlp
@r_devops
DevOps Beginner Guide
I'm a beginner and I want to start learning DevOps and practically apply the DevOps lifecycle (Plan, Code, Build, Test, CI/CD, etc.) using the tools and software.
Can anyone point me to books, courses, or tutorials where I can learn to use all the needed tools the way it's done in this field?
Ideally, the same project would be used through the entire DevOps lifecycle.
Thanks in advance!
https://redd.it/o9r1gb
@r_devops
Is it hard to find DevOps jobs that involve software dev?
I know "Dev" is in the name; however, from what I've seen, DevOps engineering positions often skew more towards "Ops". This is the case with my job. I am more of a Cloud engineer who can also write IaC and automation code. Any other "DevOps" things I do, like CI/CD pipelines and containers, revolve around cloud infrastructure.
I like this stuff, but where my heart truly belongs is building applications through the SDLC, full-stack development, automated testing, the works. I had hoped my job would include these things as well as the operational work like infrastructure, hosting, and pipelines. After all, that's a big part of the idea of DevOps. But it hasn't turned out that way.
My question is: how common is it to find positions that truly include Dev and Ops in a single position/team? I'm not talking about just the big tech companies either; I mean across the entire "DevOps" landscape.
https://redd.it/o9z25l
@r_devops
Logical grouping of resources created using for_each with a conditional statement
Consider the following scenario: I am trying to create multiple resources from multiple modules using for_each.

My main.tf file reads:

// postgres
module "postgres" {
  source                    = "./postgres"
  for_each                  = var.app
  name                      = each.key
  region                    = each.value.postgres.region
  postgres_database_version = lookup(each.value.postgres, "postgres_database_version", "")
}
// mysql
module "mysql" {
  source                 = "./mysql"
  for_each               = var.app
  name                   = each.key
  region                 = each.value.mysql.region
  mysql_database_version = lookup(each.value.mysql, "mysql_database_version", "")
}

// mssql
module "mssql" {
  source                 = "./mssql"
  for_each               = var.app
  name                   = each.key
  region                 = each.value.mssql.region
  mssql_database_version = lookup(each.value.mssql, "mssql_database_version", "")
}
variable.tf:

variable "app" {}

terraform.tfvars:
app = {
  app1 = {
    mssql    = { region = "us-east1" }
    mysql    = { region = "us-east1" }
    postgres = { region = "us-east1" }
  }
  app2 = {
    mssql    = { region = "us-east1" }
    mysql    = { region = "us-east1" }
    postgres = { region = "us-east1" }
  }
  app3 = {
    mssql    = { region = "us-east1" }
    mysql    = { region = "us-east1" }
    postgres = { region = "us-east1" }
  }
}
This works fine if I am creating all three resources (MySQL, MSSQL, and Postgres) for app1, app2, and app3.

However, it does not work if I want to create, say, only Postgres for app1, MySQL and MSSQL for app2, and MSSQL and Postgres for app3, as follows:
app = {
  app1 = {
    postgres = { region = "us-east1" }
  }
  app2 = {
    mssql = { region = "us-east1" }
    mysql = { region = "us-east1" }
  }
  app3 = {
    mssql    = { region = "us-east1" }
    postgres = { region = "us-east1" }
  }
}
I need to include a conditional in for_each that skips creating a resource when no value for it is provided, or when an empty map is passed. For example:

app = {
  app1 = {
    postgres = { region = "us-east1" }
    mssql    = {}
    mysql    = {}
  }
}
should only create a Postgres DB.

I have tried:
module "mysql" {
  source   = "./mysql"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].mysql != {} }
}

module "postgres" {
  source   = "./postgres"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].postgres != {} }
}

module "mssql" {
  source   = "./mssql"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].mssql != {} }
}

but this does not seem to work. Any ideas on how to solve this would be much appreciated.
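For what it's worth, one pattern that should handle this (a sketch, not from the original thread, assuming Terraform >= 0.13 and the tfvars shape shown above) is a for expression over var.app that keys on the app name and drops any entry whose engine block is missing or empty:

```hcl
// Create a postgres instance only for apps that define a non-empty postgres block.
// try() yields {} when the key is absent, so both "missing" and "empty map" are skipped.
module "postgres" {
  source   = "./postgres"
  for_each = { for k, v in var.app : k => v if length(try(v.postgres, {})) > 0 }

  name                      = each.key
  region                    = each.value.postgres.region
  postgres_database_version = lookup(each.value.postgres, "postgres_database_version", "")
}
```

The same pattern would repeat for the mysql and mssql modules with their respective keys; keying on k (the app name) rather than an index also keeps the state addresses stable when apps are added or removed.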
https://redd.it/o9x47s
@r_devops
SMTP relay for private Domains
Hello folks,
For the last few days I've been struggling to find the best service to send e-mails from my two domains. I have a cloud-hosted mailcow server. Problem: no IP reservation possible, no PTR record possible. I send maybe 20 e-mails a week. So after some research I set up a SendGrid account. I like the UI and the easy setup, but after the first tests I noticed that SendGrid's shared IP pool is on several blacklists (Spamhaus ZEN and so on). So I have a problem with the basic goal of all this :)
I found several topics on Reddit about SMTP relays, but they are quite old. Do you have an up-to-date recommendation for which service to use? As I said: ~20 e-mails a week, 2-3 domains, free of charge or at most 5 EUR / 5 USD per month. It is important that I can validate senders domain-wise, not per individual mailbox.
Thanks for any ideas / suggestions!
https://redd.it/o9fcn3
@r_devops
Summary of nginx error logs of a day
Hi,
Is there a product that can collect all error logs (say, HTTP 500 responses) from the last 24 hours, summarize them, and post the summary to Slack? Open-source software is preferred.
Any help will be appreciated.
https://redd.it/oa3706
@r_devops
Here is something worth watching: "Stop wasting your time learning pentesting"
https://www.youtube.com/watch?v=DwAY6MOKI9c
https://redd.it/oa3gmp
@r_devops
YouTube
Stop wasting your time learning pentesting
If you are a SOC Analyst, IT Admin or a newbie in Cybersecurity and want to create a successful career in a multinational company …
Don’t waste your time learning penetration testing ❌
Or web bug hunting
Or password cracking, or even vulnerability researching…
How to best source control Ansible playbooks?
What started as a small collection of Ansible playbooks has become a large collection of long playbooks, all kept in a local Git repo. The Ansible documentation just mentions that it recommends using Git.
This is the current flow to execute an Ansible playbook:
1. User opens the playbook from their repo
2. Changes the value of `- hosts:`
3. SSHs to the Ansible machine
4. Runs the playbook: `ansible-playbook /home/*USER*/repo/playbook.yml`
This creates a huge mess, as every user has a different value of `- hosts:` in their repo.
Here's what I'm thinking: Break the playbook into roles, and have playbooks executed by AWX.
With this, I have a few questions:
1. Does it seem like an organized way to go? Is this considered best practice?
2. Once I organize everything into roles, what's the best way to create a playbook calling specific roles? In AWX, is it possible to create a playbook combining specific roles? If not, how should I do it? (I assume not in the Git repo, because then I'm back to the `- hosts:` problem.)
3. The Ansible server has a lot of things configured in the `ansible.cfg` and `hosts` files. If I install AWX, would I have to reconfigure them, or would AWX be able to use the existing config?
Thanks ahead!
https://redd.it/oa4aor
@r_devops