Logical grouping of resources created using for_each with a conditional statement
Consider the following scenario:
I am trying to create multiple resources from multiple modules using for_each.
My main.tf file reads:

//postgres
module "postgres" {
  source                    = "./postgres"
  for_each                  = var.app
  name                      = each.key
  region                    = each.value.postgres.region
  postgres_database_version = lookup(each.value.postgres, "postgres_database_version", "")
}

//mysql
module "mysql" {
  source                 = "./mysql"
  for_each               = var.app
  name                   = each.key
  region                 = each.value.mysql.region
  mysql_database_version = lookup(each.value.mysql, "mysql_database_version", "")
}

//mssql
module "mssql" {
  source                 = "./mssql"
  for_each               = var.app
  name                   = each.key
  region                 = each.value.mssql.region
  mssql_database_version = lookup(each.value.mssql, "mssql_database_version", "")
}
variable.tf reads:

variable "app" {}

terraform.tfvars reads:
app = {
  app1 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
  app2 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
  app3 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
}
This works fine if I am creating all three resources (MySQL, MSSQL, and Postgres) for app1, app2, and app3.
However, it does not work if I want to create, say, only Postgres for app1, MySQL and MSSQL for app2, and MSSQL and Postgres for app3, as follows:
app = {
  app1 = {
    postgres = {
      region = "us-east1"
    }
  }
  app2 = {
    mssql = {
      region = "us-east1"
    }
    mysql = {
      region = "us-east1"
    }
  }
  app3 = {
    mssql = {
      region = "us-east1"
    }
    postgres = {
      region = "us-east1"
    }
  }
}
I need to include a conditional statement in for_each that prevents the creation of a resource if no value for the resource is provided or if an empty map is passed. For example:
app = {
  app1 = {
    postgres = {
      region = "us-east1"
    }
    mssql = {}
    mysql = {}
  }
}
This should only create a Postgres DB.

I have tried:
module "mysql" {
  source   = "./mysql"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].mssql != {} }
}

module "postgres" {
  source   = "./postgres"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].postgres != {} }
}

module "mssql" {
  source   = "./mssql"
  for_each = { for k, v in values(var.app)[*] : i => c if values(var.app)[*].mysql != {} }
}
but this does not seem to work. Any ideas on how to solve this would be much appreciated.
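One possible fix (an untested sketch, not from the original post): iterate over var.app itself rather than values(var.app), and do the filtering per app inside the for expression. try() guards against apps that omit the key entirely, and the length() check skips empty maps:

```hcl
// Sketch for the postgres module; mysql and mssql would filter on their
// own keys the same way.
module "postgres" {
  source = "./postgres"
  for_each = {
    for k, v in var.app : k => v
    if length(keys(try(v.postgres, {}))) > 0
  }
  name                      = each.key
  region                    = each.value.postgres.region
  postgres_database_version = lookup(each.value.postgres, "postgres_database_version", "")
}
```

Note also that the attempted comprehensions above mix up their variables (they bind k, v but use i => c), which by itself would fail to parse.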
https://redd.it/o9x47s
@r_devops
SMTP relay for private Domains
Hello folks,
For the last few days I've been struggling to find the best service to send e-mails from my two domains. I have a cloud-hosted mailcow server. Problem: no IP reservation possible, no PTR record possible. I'm sending maybe 20 e-mails a week. So after some research I set up a SendGrid account. I like the UI and the easy setup, but after the first tests I noticed that SendGrid's pool IP is on several blacklists (Spamhaus ZEN and so on). So I have a problem with the basic goal of all this :)
I found several topics on Reddit about SMTP relays, but they are quite old. Do you have an update on which service to use? As I said: ~20 e-mails a week, 2-3 domains, free of charge or max. 5 Euro / $5 in monthly costs. It is important that I can validate senders domain-wise and not every single mailbox.
Thanks for any ideas / suggestions!
https://redd.it/o9fcn3
@r_devops
Summary of nginx error logs of a day
Hi,
Is there any product that can collect all the error logs (say, HTTP 500 logs over the last 24 hours), produce a summary, and post it to Slack? Open-source software is preferred.
Any help will be appreciated.
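Short of a full logging stack, even a small script can cover the basic ask. Below is a sketch (the log path, the field positions of nginx's default "combined" format, and the webhook variable are assumptions, not from the original post) that counts 500 responses per request path; it could be run daily from cron with the result posted to a Slack incoming webhook:

```shell
# Count HTTP 500 responses per request path in an nginx access log written
# in the default "combined" format (status = field 9, path = field 7),
# most frequent paths first.
summarize_500s() {
  awk '$9 == 500 { count[$7]++ } END { for (p in count) printf "%s %d\n", p, count[p] }' "$1" \
    | sort -k2,2 -rn
}

# Posting would then look something like this (webhook URL is a placeholder;
# real use needs JSON-escaping of newlines, e.g. via jq):
#   SUMMARY=$(summarize_500s /var/log/nginx/access.log)
#   curl -s -X POST -H 'Content-type: application/json' \
#     --data "{\"text\": \"nginx 500s (24h): $SUMMARY\"}" "$SLACK_WEBHOOK_URL"
```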
https://redd.it/oa3706
@r_devops
Here is something worth watching. "Stop wasting your time learning pentesting"
https://www.youtube.com/watch?v=DwAY6MOKI9c
https://redd.it/oa3gmp
@r_devops
How to best source control Ansible playbooks?
What started as a small collection of Ansible playbooks became a large collection of long playbooks, all placed in a local git repo. The Ansible documentation just recommends using git.
This is the current flow to execute an Ansible playbook:
1. User opens the playbook from his repo
2. Changes the value of `- hosts:`
3. SSHes to the Ansible machine
4. Runs the playbook: `ansible-playbook /home/*USER*/repo/playbook.yml`

This creates a huge mess, as all users have a different value of `- hosts:` in their repos.

Here's what I'm thinking: break the playbook into roles, and have playbooks executed by AWX.

With this, I have a few questions:
1. Does it seem like an organized way to go? Is this considered best practice?
2. Once I organize everything into roles, what's the best way to create a playbook calling specific roles? In AWX, is it possible to create a playbook combining some specific roles? If not, how should I do it? (I assume not in the git repo, because then I'm back to the `- hosts:` problem.)
3. The Ansible server has a lot of things configured in `ansible.cfg` and the hosts file. If I install AWX, would I have to reconfigure it, or would it be able to use the existing config?
Thanks ahead!
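One way to sketch the role-based idea above (illustrative; the role names and the `target_hosts` variable are made up, not from the original post) is to parameterize `hosts` so nobody ever edits the playbook, and let AWX supply the value through its inventory, limit, or extra vars:

```yaml
# site.yml -- the hosts value comes from a variable, not from editing the file
- hosts: "{{ target_hosts | default('all') }}"
  roles:
    - common
    - webserver
```

On the command line this would run as `ansible-playbook site.yml -e target_hosts=staging`; in AWX, the job template's inventory and limit play the same role.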
https://redd.it/oa4aor
@r_devops
AWS CodeCommit and Azure Devops
I need to create a deployment pipeline.
My code is in AWS CodeCommit. I need to deploy this code to AWS EKS (Elastic Kubernetes Service) and Azure AKS (Azure Kubernetes Service).
I am using Terraform to build the infrastructure and have the manifest files for Kubernetes.
In AWS, I am using CodePipeline with CodeCommit and CodeBuild to build the Docker image and deploy it to EKS, but for Azure I am unable to figure out how to run the build in an Azure DevOps pipeline.
Currently, I have connected CodeCommit as an "Other Git" repository, but it is not allowing me to use the starter pipeline.
What is the best way to deploy in this scenario?
What should I do next?
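A note that may explain the symptom, plus an illustrative sketch (the service connections `acr-conn` and `aks-conn`, the manifest path, and the image name are assumptions, not from the original post): Azure DevOps YAML pipelines only work with certain repository hosts, which is likely why the starter (YAML) pipeline is unavailable for an "Other Git" repo. A common workaround is to mirror the CodeCommit repo into Azure Repos and build from there, with something like:

```yaml
# azure-pipelines.yml (illustrative sketch)
pool:
  vmImage: ubuntu-latest
steps:
  - task: Docker@2
    inputs:
      containerRegistry: acr-conn      # service connection to a registry
      repository: myapp
      command: buildAndPush
      tags: $(Build.BuildId)
  - task: KubernetesManifest@0
    inputs:
      action: deploy
      kubernetesServiceConnection: aks-conn   # service connection to AKS
      manifests: k8s/*.yaml
```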
https://redd.it/oa59tn
@r_devops
The automation challenge: Kubernetes operators vs Helm Charts with Ana-Maria Mihalceanu
Check out this live-coding talk (recorded at GOTOpia February 2021) with Ana-Maria Mihalceanu, co-founder of the Bucharest Software Craftsmanship Community.
Have you been working with Kubernetes for some time, or have you just started your journey?
If you love automation and dislike performing repetitive tasks manually, you have probably come across the concepts of Helm charts and Kubernetes operators. Although they solve similar types of problems, they are not exactly interchangeable tools, but rather complementary ones.
During this session, Ana-Maria highlights which to use and when, sharing several code-based examples and lessons learned.
In this talk, you'll learn:
1. Kubernetes operators and Helm charts: which to use when
2. How the two are complementary, not interchangeable, tools
[Video](https://youtu.be/dGx8PjmWkyM)
**Slides**
https://redd.it/oa60lz
@r_devops
Could not request certificate: execution expired while giving PUPPET status
When I try to check the Puppet master status, it says "Could not request certificate: execution expired".
Can anyone help me out?
This is my master config file:

[master]
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code

[main]
certname = puppetmaster
server = puppetmaster
runinterval = 1hr
strict_variables = true

And this is my client config file:

[main]
certname = puppetclient
server = puppetmaster
runinterval = 1h
https://redd.it/oa5lnk
@r_devops
SRE without programming experience?
I have 13+ years in the industry. I come from a systems administration background, with the last few years on platform engineering: PowerShell/Bash/basic Python/CloudFormation/Terraform. I don't come from a programming background, but I do okay at scripting and automation. I have almost a decade of experience with application/server/production support.
I also have experience working with CI/CD in AWS and Azure.
Is it wise to try to move into SRE if I don't have a programming background? Or should I build those skills first?
https://redd.it/oa4vb6
@r_devops
Small ELK setup on Azure
Hello Folks,
In my current project we would like to set up a small ELK stack to monitor our prod application (for now it's closer to a PoC than a real setup; worst case, we will scale). What number of machines and what setup (one Elasticsearch node or maybe two, one Kibana or Kibana + Grafana, HA, LB) would you recommend? We will push data in JSON format to Elasticsearch indexes via the REST API instead of reading it from the filesystem (proprietary solution, no access to logs on the server), so most likely we will not use Logstash or its peers. I did some research, but there are dozens of posts on this topic and I'm a little lost. We will host it on Azure, so if you know what the optimal machines are in terms of resources, so that we don't go bankrupt, I would appreciate that too.
https://redd.it/oa7pxu
@r_devops
Question about https with AWS loadbalancer
Hi all,
I see something happening that I did not expect and it is probably because I miss some knowledge here so hopefully you girls and guys can help me fill the gaps.
I have an app running on AWS EC2, behind a load balancer with an SSL certificate associated with it. The LB has a security group that allows only 443 incoming.
Now I moved the app to a new domain, and the new certificate is not yet validated. I would expect that I then cannot access the app.
However, if I connect to the new domain, the browser gives me a certificate error, but when I tell it to go ahead and connect insecurely anyway, it actually does. This is unexpected for me. How do I make sure that it is not only accessible over HTTPS, but also ONLY accessible over HTTPS? What am I missing?
https://redd.it/oaa90h
@r_devops
Is DevOps appropriate for hardware/embedded designs?
I work as a design engineer doing hardware and embedded designs (bare metal not Linux), and I am wondering if a DevOps workflow would be a good change for me and my team.
From what I read, the entire DevOps cycle doesn't really apply to the overall workflow of our company. We unfortunately have no say in the company wide workflow, but we have full autonomy with our group.
Does anyone here have any experience implementing DevOps practices at this low level? I've gotten our builds automated and have now slowly started to introduce the concepts of HDL simulations and C unit tests to our process. Nothing officially mandated though.
https://redd.it/oa9ynm
@r_devops
Devtron, Heroku for Kubernetes. An Open Source DevOps tool to Manage and Operationalize your applications on K8s
I am one of the contributors of Devtron, Heroku for Kubernetes.
TL;DR - [Devtron, An OpenSource DevOps tool](https://github.com/devtron-labs/devtron) to manage and operationalize your applications End-to-End on Kubernetes. Would love to know what you think about it.
A short background: in the past, while working with Kubernetes, we had first-hand experience of using multiple tools on top of it. As DevOps engineers, it sure was a hassle to manage the various aspects of the application lifecycle (CI, CD, security, cost observability, stabilization) while the tools don't talk to each other. We could not find any viable solution for managing and operationalizing applications without an in-depth understanding of each tool.
So we started working on Devtron to tackle the problem. With Devtron, we integrated with existing open source systems like Argo CD, Argo Workflows, Clair, Hibernator, Grafana, Prometheus, Envoy, and many others, and added capabilities on top of them to enable self-serve for developers and DevOps.
Devtron, in short, is an Open Source application-first way of looking at Kubernetes, meaning deep integrations with existing OpenSource and commercial software to quickly onboard state-of-the-art systems. We call it 'The AppOps approach.' :)
Some of the Features:
* Zero code software delivery workflow
* Multi-cloud deployment
* Easy dev-sec-ops integration
* Application debugging dashboard
* Enterprise-grade security and compliances
* Gitops aware
* Operational insights
You can check the [Devtron repo](https://github.com/devtron-labs/devtron) to know more about the project.
You can also check the [docs](https://docs.devtron.ai/) directly if you'd like.
Would love to know what you think about this. Happy to hear all your suggestions and improvements regarding the project.
https://redd.it/oab67i
@r_devops
DevOps / GitOps way to manage Operations tools
Formatting warning as I am on mobile.
I am the sole NOC engineer for my company, and have been creating a bunch of PowerShell/Python (soon C#, to manage Microsoft products) tools for my team. No one in my operations team is able to script or develop tools, and I only have half a year of experience writing and maintaining code in a professional environment.
I have made tools that have drastically reduced toil: we no longer create users and licenses by hand, turning a 4-8 man-hour daily process into 10 minutes with minimal intervention. I have also created a plethora of one-off or sparsely used scripts to resolve repeat issues. These all execute in a PowerShell terminal on a shared computer.
I would appreciate some insight into how to develop and maintain these tools in a way that lets others come in later and maintain/improve them (I'm currently doing some less-than-best practices involving private repos). My current idea involves using the Azure DevOps suite provided to my team (which is currently unused and empty) to store, test, and push code to a VM, or possibly a static webpage (blob storage?) that makes API calls to these tools (Azure Functions?).
I don't have any coworkers or superiors to lean on for advice, and my manager said that I am free to try anything that would improve our workflow. I can reach out to developers in our company for help with specific products we manage, but not to contribute to the codebase in any meaningful way. Thank you for your advice in this situation 😁
https://redd.it/oabs5z
@r_devops
Isn't putting a private SSH key on Gitlab (or any other CI solution) really insecure? New to CI, would love some thoughts!
Not sure if this is the right place to ask this, so I'm sorry if it isn't!
I'm messing around with Gitlab CI and I'm currently trying to evaluate security risks involved with storing a private key in Gitlab's CI Variables.
My goal is to build some JavaScript/HTML files and then deploy them to my VPS. I'm planning on deploying with rsync over SSH.
However, my internal spider senses are tingling, since (1) I'm storing a private key in the cloud and (2) I feel that if this key gets compromised, then my whole server would be too.
Am I being too paranoid? I really want to know what the best practices regarding this are. My plan is to make a new user and put it into a chroot jail; however, I've read that chroot jails aren't really secure. Obviously I'll be rechecking all folder permissions, but I'm still not really comfortable.
Am I missing something? I would really appreciate any thoughts.
Thanks!
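One common mitigation worth knowing about (a sketch; the key, directory paths, and rrsync location below are placeholders, not from the original post): rather than giving CI a key for a normal login account, create a dedicated deploy user and restrict the key in `authorized_keys`, so that even a leaked key can only rsync into one directory:

```shell
# Build a restricted authorized_keys entry for the CI deploy key.
# "restrict" (OpenSSH 7.2+) disables port/agent forwarding and PTY allocation;
# the forced rrsync command limits the key to write-only rsync into a single
# directory. Key and paths are placeholders.
KEY='ssh-ed25519 AAAA...placeholder gitlab-ci-deploy'
ENTRY="command=\"/usr/local/bin/rrsync -wo /var/www/site\",restrict $KEY"
echo "$ENTRY"

# On the server, this line would go into the deploy user's authorized_keys:
#   echo "$ENTRY" >> /home/deploy/.ssh/authorized_keys
```

rrsync ships with rsync (often under /usr/share/doc/rsync/scripts/). The GitLab CI variable then holds only this single-purpose private key, which limits the blast radius if it ever leaks.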
https://redd.it/oae8mc
@r_devops
DAST in Gitlab
Hey guys, as a DevOps engineer, I have integrated native SAST and open-source tools into my GitLab pipelines. I want to integrate DAST into the pipelines too, but the problem is that DAST scans take so long that they delay the pipelines, and developers are not happy that they have to wait so long every time.
I don't use GitLab Ultimate, which has ZAP as part of it, but even then I don't see how it can beat the long delays due to scan time.
Any thoughts on how to create the workflow without affecting developer experience?
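One common pattern is to decouple DAST from merge-request pipelines entirely: run it only on a scheduled pipeline against a staging environment, so developers never wait on it. A sketch using GitLab's `rules` keyword (the job name, stage, and `STAGING_URL` variable are hypothetical):

```yaml
# .gitlab-ci.yml -- run the DAST job only when the pipeline is
# triggered by a schedule, never on merge requests or pushes
dast_scan:
  stage: dast
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # ZAP baseline scan via the official image; -I ignores warnings
    # so only failures break the job. STAGING_URL is a placeholder.
    - docker run --rm -t owasp/zap2docker-stable zap-baseline.py -t "$STAGING_URL" -I
```

Results can then be surfaced asynchronously (as artifacts, or pushed to an issue tracker) instead of blocking anyone's pipeline.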
https://redd.it/oafhy1
@r_devops
Free intro to Linux commandline/server course starting 5 July 2021
This free month-long course is re-starting again on the first Monday of next month.
This course has been running successfully now every month since February 2020 - more detail at: https://LinuxUpskillChallenge.org - daily lessons appear in the sub-reddit r/linuxupskillchallenge - which is also used for support/discussion.
Suitable whatever your background, it aims to provide that "base layer" of traditional Linux skills in a fun, interactive way.
Any feedback very welcome.
https://redd.it/oaf8cy
@r_devops
Best Udemy Course to learn DevSecOps
Hi everyone!
I have three years of agile (Scrum) Java web development experience. I've started studying security topics like OWASP, pentesting, etc., and I want to work with DevSecOps.
I live in Brazil, so things can be a bit different, but can you point me to some of the best courses on this subject, so I can apply for jobs?
Thanks :)
https://redd.it/oaiyhg
@r_devops
Learn Kubernetes by Example
A free and continuously updated online collection of resources on everything Kubernetes, by Red Hat.
https://www.i-programmer.info/news/150-training-a-education/14680-learn-kubernetes-by-example.html
https://redd.it/oabatj
@r_devops
Sql failover groups and cross regional DR
I thought the purpose was to have high availability, so if one goes down the other picks up; however, it seems they are all on the same server, so that wouldn't work? How do you use them for cross-region geo-disaster recovery in Azure? How much of it is redundant?
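For cross-region DR, the failover group's partner server has to be a separate logical server in a different region; if primary and secondary sit on the same server, there is nothing to fail over to. A hedged Azure CLI sketch (all names and the resource group are hypothetical placeholders):

```
# Pair two logical servers in different regions via a failover group.
# All names here are placeholders.
# --server:         primary logical server (e.g. in East US)
# --partner-server: secondary logical server in a different region (e.g. West US)
az sql failover-group create \
  --name app-fog \
  --resource-group my-rg \
  --server sql-eastus \
  --partner-server sql-westus \
  --failover-policy Automatic \
  --grace-period 1
```

Databases added to the group are then geo-replicated to the partner server, and the group's listener endpoint redirects connections after a failover.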
https://redd.it/oa8wx6
@r_devops
Delivery Plan Expanded
Hi,

Anyone have any ideas how to plot your epics/features a la Gantt chart? I like Delivery Plans because they have a good look and detail, but they don't scale out enough, i.e. I want to scale out 1-2 years, not just 4-5 months. Any recommendations?
https://redd.it/oa5tuh
@r_devops