Jailer, a universal database tool.
# Jailer Database Tools
Jailer is a tool for database subsetting and relational data browsing.
It creates small slices from your database and lets you navigate through your database following its relationships.
The Subsetter creates small slices from your database (consistent and referentially intact) as SQL (topologically sorted), DbUnit records or XML. Ideal for creating small samples of test data or for local problem analysis with relevant production data.
The Data Browser lets you navigate through your database following the relationships (foreign key-based or user-defined) between tables.
# Features
Exports consistent and referentially intact row-sets from your production database and imports the data into your development and test environments.
Improves database performance by removing and archiving obsolete data without violating integrity.
Generates topologically sorted SQL-DML, hierarchically structured XML and DbUnit datasets.
Data Browsing. Navigate bidirectionally through the database by following foreign-key-based or user-defined relationships.
SQL Console with code completion, syntax highlighting and database metadata visualization.
A demo database is included, so you can get a first impression without any configuration effort.
https://redd.it/10dezsn
@r_devops
How to organize credentials of so many tools?
I work as a senior software engineer at a company that uses k8s daily. We have many components that require passwords, like Grafana, dev/prod machines, cloud provider accounts, VPN, cluster config, etc.
I would like to know an efficient, effective way to organize all those credentials, so that I don't need to go through all the Slack conversations to find the message where the DevOps guy sent me the username and password.
https://redd.it/10dhqeu
@r_devops
VPS hosting provider recommendation
Hi! I am thinking of moving from a DigitalOcean droplet VM, which costs me $56 per month pre-tax for 4 CPUs (dedicated) and 8 GB RAM. I am thinking about a cheaper provider and probably increasing RAM to 12-16 GB. Any recommendations?
https://redd.it/10dq9u3
@r_devops
Send physical mail using Terraform (terraform-provider-mailform)
Hey folks! You may remember some of my posts. I made the mcbroken Terraform provider and the Grafana dashboard to track Elon Musk's jet :D. Well I have another stupid project to share. I thought it would be hilarious to be able to send physical mail using Terraform so I built a Terraform provider for https://mailform.io
I hope you all find it as funny as I do. Enjoy!
https://github.com/circa10a/terraform-provider-mailform
https://redd.it/10dtvno
@r_devops
IaC management at large organizations?
Wanted to get a discussion going and some perspective on how large companies manage and organize their IaC. For example, are all teams required to use the same types of tools (Terraform vs CDK or ECS vs k8s) or are individual teams allowed to pick and choose the tools that fit their needs. How do you ensure a tool that is picked is well supported within the company, etc.
I find it challenging and unwieldy getting teams all aligned on architecture and tooling as orgs expand and grow to be very large. Curious if anyone else deals with this or has experience with getting everyone on the same page.
https://redd.it/10dfu13
@r_devops
General Company Backend Setup
I just want to become more familiar with how company backends work (in a very general sense). Can you guys tell me if I have anything glaringly wrong?
You have 1 developer team who develop the products by writing code (these are your typical programmers).
You have 1 platform team (DevOps) that takes care of the CI/CD pipeline which the developer team uses for their products. This team also takes care of provisioning and configuring the deployment environments used by the developer team's products (i.e. QA, staging, production).
However, since the platform team also writes code in order to provision and configure the environments (via tools like Terraform and Chef), the platform team also uses CI with tools like Jenkins/Bamboo. However, the platform team does not use the CD part of CI/CD, since there is nothing to deploy; they are just writing Terraform or Chef code.
Did I get the general gist correct, or am I misunderstanding how DevOps generally works?
https://redd.it/10dztmf
@r_devops
tgenv is dead, long live tgenv!
For any current or former tgenv users out there, just letting you know that while the OG project (cunymatthieu/tgenv) appears to have been abandoned for a couple of years now, it has recently been forked and revived at tgenv/tgenv for ongoing maintenance and improvement.
Most of the issues and pull requests opened against the original have been resolved and merged in the fork.
https://redd.it/10drh8y
@r_devops
Is the ultimate endpoint of devops a homebrewed web app?
Warning: Incoherent train of thought ahead.
So I'm learning as best I can terraform and ansible and how they relate to app platforms.
No matter what, it seems like there isn't an off-the-shelf DevOps pipeline going from code push through to deploy and config. Are you supposed to write a bunch of GitHub Actions to handle this for you?
It seems like you would need someone internal to your organization to make a web app that handles the GitHub hooks; has Terraform, a Docker repo, and Ansible on it; and then some custom web forms that show the state of your apps, their build status, and metadata. Then you have a page that has buttons on it:
* "stand up infrastructure" (specify which DB snapshot to use)
* "tear down infrastructure"
* "deploy artifact to blue"
* "deploy artifact to green"
* "point the load balancer at blue or green"
* "migrate database"
* holding feature flag files to pass to built apps? I'm sure there's some web app that handles feature flags, but would you want to depend on it? What if it goes down?
Then the manual parts would be updating the terraform, ansible, and custom scripts to do what's needed. The only thing that would have to be manually set up would be this devops orchestration tool server.
I just don't currently see how all the tools we have are tied together without someone writing an app.
I don't even want to bring kubernetes or microservices into this either. I'm just thinking of all the parts that would go into creating a build tool pipeline even WITH all the advanced tools we have today.
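For what it's worth, the push-through-deploy glue the post is looking for is usually expressed as a workflow file rather than a homebrewed app. A minimal GitHub Actions sketch (the workflow name, environment names, and script paths are all made up for illustration):

```yaml
# .github/workflows/deploy.yml - hypothetical push-to-deploy pipeline
name: build-and-deploy
on:
  push:
    branches: [main]
  workflow_dispatch: {}   # manual "button" trigger for one-off runs
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build artifact
        run: ./scripts/build.sh        # hypothetical build script
  deploy-blue:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy artifact to blue
        run: ./scripts/deploy.sh blue  # hypothetical deploy script
```

The "buttons" in the list above (stand up, tear down, switch traffic) map naturally onto separate workflow_dispatch-triggered workflows, which is how many teams avoid writing a custom orchestration web app.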
https://redd.it/10e0xua
@r_devops
qemu-img resize & virt-customize // cloudinit FS size
Hello,
These last few days I have been trying to automate the creation of my labs (VMs) with cloud images (Debian, Ubuntu, Fedora), and I have a problem with:
Fedora 36
Debian 12
Debian 9
Ubuntu 22.04
Ubuntu 20.04
I download my base images (cloud provider images), which I copy and resize (qemu-img resize +10G).
This works very well for Debian 10 and 11: when I run "df -h" I see ~12 GB, but not on the other distributions.
I use qemu-img resize & virt-customize.
To get around the problem I add to virt-customize:
--firstboot-command "growpart /dev/vda 1 -u auto"
--firstboot-command "resize2fs /dev/vda1"
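For reference, the whole flow described above can be sketched as shell commands (the image filename and size are illustrative; growpart comes from cloud-utils inside the guest, virt-customize from libguestfs-tools on the host):

```shell
# grow the disk image itself (host side)
qemu-img resize debian-12-generic-amd64.qcow2 +10G

# schedule partition + filesystem growth for first boot (guest side)
virt-customize -a debian-12-generic-amd64.qcow2 \
  --firstboot-command 'growpart /dev/vda 1 -u auto' \
  --firstboot-command 'resize2fs /dev/vda1'
```

Note that cloud-init's growpart module normally grows the root partition automatically on first boot, which is likely why Debian 10/11 work out of the box; the images where it fails may be missing that support or use a different root device, so checking /etc/cloud/cloud.cfg in each image is worth a look.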
https://redd.it/10dtsnp
@r_devops
Free server to deploy api backend and job written in Node.js
Free server to deploy a Node.js API backend and a Node.js job/worker that runs every week; possibly 100 monthly users.
https://redd.it/10e148t
@r_devops
What is the best course/courses to learn pipeline as code with Groovy and Jenkins
Title
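For anyone evaluating such a course: "pipeline as code" in Jenkins means a Jenkinsfile checked into the repo, and a good course should get you comfortable writing and extending one. A minimal declarative sketch (stage names and commands are illustrative):

```groovy
// Jenkinsfile - minimal declarative pipeline sketch
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // illustrative build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'    // illustrative test command
            }
        }
    }
    post {
        always {
            echo 'Pipeline finished.'
        }
    }
}
```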
https://redd.it/10e6f0e
@r_devops
My company gives me $3000 for training materials. What should I buy?
Quick about me:
* Skill level: Advanced
* Years of experience: 14
* Field: DevOps, SRE, Test Automation Engineering
_______
I'm basically allowed to buy almost anything as long as it's somehow relevant to **programming, devops, sre etc.** This money also includes travel and conference costs to _relevant_ conferences. It does not include "hardware".
My company also already covers certification fees for free, for most major providers (Google, AWS, Azure, RedHat, Terraform etc) and I have most of the ones I want already.
What would likely be the best value thing I could get? I'm wondering if there is some kind of "lifetime subscription" I could get.
P.S. If your suggestion has a trial before taking payment that would be great.
____
Edit: Asking here because /r/programming does not allow questions.
https://redd.it/10e7il4
@r_devops
Scaling OPA
Hi folks, I’m using OpenPolicyAgent for authorization (I like the policy-as-code thing), but I’m unclear how we’re supposed to manage it at scale. Like, how do I manage loading different policy/data for different agents for different microservices? Polling on bundle servers sucks. Any help? How are you doing it?
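For context, per-agent bundle loading is configured in each OPA's config file, so different microservices' sidecars can simply point at different bundle resources (the server URL and bundle names here are illustrative):

```yaml
# opa-config.yaml - per-service bundle config sketch
services:
  bundle_registry:
    url: https://bundles.example.com   # illustrative bundle server
bundles:
  authz:
    service: bundle_registry
    resource: bundles/payments-service.tar.gz  # per-microservice bundle
    polling:
      min_delay_seconds: 30
      max_delay_seconds: 60
```

If polling itself is the pain point, OPA also supports long polling and delta bundles, and management layers such as Styra DAS or the open-source OPAL project take a push-based approach instead.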
https://redd.it/10ebbjp
@r_devops
Kubernetes Cluster Replication & Disaster Recovery
Is it possible to replicate a Kubernetes cluster for a large-scale enterprise mobile application deployed on Kubernetes running on premises? We are tasked with coming up with a disaster recovery plan; is this possible using some tool like Velero?
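Velero's DR model is backup/restore rather than live replication: back up cluster objects (and, via plugins/CSI, volume snapshots) to object storage, then restore into a standby cluster pointed at the same storage. A hedged sketch of the CLI usage (schedule name and cron expression are illustrative):

```shell
# on the source cluster: back up everything nightly
velero schedule create daily-dr --schedule "0 2 * * *"

# on the recovery cluster (configured against the same object storage):
velero restore create --from-backup <backup-name>
```

Because it is snapshot-based, the recovery point is the age of the last backup; true continuous replication would need a different approach (e.g. replicated storage plus GitOps-driven redeployment).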
https://redd.it/10eb599
@r_devops
need help improving ansible playbook readability
hello Ansible mates. I am completely new to Ansible and just finished writing my first playbook, and would appreciate it if someone could look at my playbook and give some pointers on how to improve readability, simplicity and modularization.
1. what it does: automates NiFi self-signed cert renewal; reads the host name from the target nifi.properties
2. generates a new SSL certificate
3. replaces the keystore and truststore in the existing location
4. replaces the old passwords in the nifi.properties file with those from the newly generated nifi.properties file
#ansible playbook to update nifi server self signed certs
#TODO: need to modularize
#TODO: need to externalize the paths and software versions
- name: ssl updation
  hosts: lower
  tasks:
    - name: reading old nifi.properties
      slurp:
        src: /app/software/nifi-1.12.0/conf/nifi.properties
      register: nifi_properties
    - name: convert old property file
      set_fact:
        content: "{{ nifi_properties.content | b64decode }}"
    - name: find host line
      set_fact:
        host_line: "{{ content | regex_search('(https.host)+.*') }}"
    - name: find host
      set_fact:
        host: "{{ host_line.split('=')[1] }}"
    - name: find key store password line
      set_fact:
        keystorePasswd_line: "{{ content | regex_search('(keystorePasswd=)+.*') }}"
    - name: find key store password
      set_fact:
        keystorePasswd: "{{ keystorePasswd_line.split('=')[1] }}"
    - name: find trust store password line
      set_fact:
        truststorePasswd_line: "{{ content | regex_search('(truststorePasswd=)+.*') }}"
    - name: find trust store password
      set_fact:
        truststorePasswd: "{{ truststorePasswd_line.split('=')[1] }}"
    - name: execute tls-toolkit.sh
      shell:
        chdir: /app/platform/nifi-toolkit-1.15.3
        cmd: "./bin/tls-toolkit.sh standalone -n {{ host }} -o /app/software/nifi-1.12.0 -O"
    - name: reading new nifi.properties
      become: yes
      become_user: root
      slurp:
        src: "/app/software/nifi-1.12.0/{{ host }}/nifi.properties"
      register: new_nifi_properties
    - name: convert new nifi.properties
      set_fact:
        new_content: "{{ new_nifi_properties.content | b64decode }}"
    - name: find key store password line in new nifi.properties
      set_fact:
        new_keystorePasswd_line: "{{ new_content | regex_search('(keystorePasswd=)+.*') }}"
    - name: find key store password in new nifi.properties
      set_fact:
        new_keystorePasswd: "{{ new_keystorePasswd_line.split('=')[1] }}"
    - name: find trust store password line in new nifi.properties
      set_fact:
        new_truststorePasswd_line: "{{ new_content | regex_search('(truststorePasswd=)+.*') }}"
    - name: find trust store password in new nifi.properties
      set_fact:
        new_truststorePasswd: "{{ new_truststorePasswd_line.split('=')[1] }}"
    - name: copy keystore.jks
      copy:
        remote_src: true
        src: "/app/software/nifi-1.12.0/{{ host }}/keystore.jks"
        dest: /app/software/nifi-1.12.0/certs/keystore.jks
        backup: true
    - name: copy truststore.jks
      copy:
        remote_src: true
        src: "/app/software/nifi-1.12.0/{{ host }}/truststore.jks"
        dest: /app/software/nifi-1.12.0/certs/truststore.jks
        backup: true
    - name: replace key store password
      replace:
        path: /app/software/nifi-1.12.0/conf/nifi.properties
        regexp: '(keystorePasswd=).*'
        replace: "keystorePasswd={{ new_keystorePasswd }}"
        backup: true
    - name: replace key password
      replace:
        path: /app/software/nifi-1.12.0/conf/nifi.properties
        regexp: '(keyPasswd=).*'
        replace: "keyPasswd={{ new_keystorePasswd }}"
    - name: replace trust store password
      replace:
        path: /app/software/nifi-1.12.0/conf/nifi.properties
        regexp: '(truststorePasswd=).*'
        replace: "truststorePasswd={{ new_truststorePasswd }}"
    - name: restart server
      shell:
        chdir: /app/software/nifi-1.12.0/
        cmd: ./bin/nifi.sh restart
      register: restart_output
    - name: debug output
      debug:
        var: restart_output
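One readability suggestion: the three near-identical `replace` tasks at the end could be collapsed into a single looped task. A minimal sketch (task name is made up; the variables mirror the facts set earlier in the playbook, and this is untested against a live NiFi install):

```yaml
- name: update passwords in nifi.properties
  replace:
    path: /app/software/nifi-1.12.0/conf/nifi.properties
    regexp: '^{{ item.key }}=.*'
    replace: "{{ item.key }}={{ item.value }}"
    backup: true
  loop:
    - { key: keystorePasswd, value: "{{ new_keystorePasswd }}" }
    - { key: keyPasswd, value: "{{ new_keystorePasswd }}" }
    - { key: truststorePasswd, value: "{{ new_truststorePasswd }}" }
  no_log: true  # keep the looped passwords out of the run output
```

The repeated slurp / regex_search / set_fact chains for the old and new property files could be deduplicated the same way, e.g. by moving them into an included task file that takes the properties path as a variable.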
https://redd.it/10ec4it
@r_devops
Prepping for your first on-call shift
Hey /r/devops,
I wrote a post titled Prepping for your first on-call shift. It's written more with software engineers (who also do Ops) in mind, but I think the content would be almost equally applicable to DevOps Engineers as well.
If anyone has any additional tips for prepping for your first on-call shift, I'd love to hear about them.
https://redd.it/10eewfh
@r_devops
Bash or Z Shell?
Z Shell is the default for Mac now but I’m so used to using `.bash_profile` and everything. I know they’re pretty much the same, so this might be a dumb question, but what does your setup look like for local aliases and functions?
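One common answer to the aliases/functions question is a single shared file sourced from both shells, since alias and function syntax is compatible between bash and zsh. A sketch (the `~/.shell_aliases` name and the contents are just illustrative conventions, not anything from the post):

```shell
# Keep aliases and functions in one shell-agnostic file...
cat > ~/.shell_aliases <<'EOF'
alias ll='ls -la'
gco() { git checkout "$@"; }
EOF

# ...then add the same one-liner to both ~/.bash_profile and ~/.zshrc:
#   [ -f ~/.shell_aliases ] && . ~/.shell_aliases
```

Anything bash-only (e.g. `shopt` settings) stays in `.bash_profile`, and zsh-only bits (e.g. `setopt`, completion styles) stay in `.zshrc`.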
https://redd.it/10efpar
@r_devops
Getting Atlantis-style change previews with Argo CD
Great writeup from /u/kkapelon on how to get Atlantis-style previews for changes made with Argo CD. https://codefresh.io/blog/argo-cd-preview-diff/
I'm a big fan of the Atlantis way of showing Terraform plans in pull requests and love seeing this kind of functionality with Argo CD.
https://redd.it/10egjxh
@r_devops
Casual, off-the-record hangout with Netflix productivity team
Hi, everyone. Hope this is allowed. Aviator is hosting a casual, off-the-record hangout session for senior engineers and devops folks from various orgs to chat with each other, learn about how things are done at other companies, etc. No sales or Aviator product talk, no recording, no repurposing of content. Just an opportunity to learn from one another.
We usually keep attendance limited to a small group so that folks get to know each other better.
Nadeem from Netflix's eng productivity team will be doing an AMA-style chat. He's also worked on similar stuff at Box before, so if you want to chat about how things were done there, he'd be happy to tell you about it!
Sign up at dx.community or the tweet: https://twitter.com/Aviatorco/status/1613300589881868291?s=20
One is a simple google form and the other is a tweet pointing to the same form. Just want to make sure everyone understands there's no landing page etc. Hope to see some of you there.
https://redd.it/10elayg
@r_devops