My company gives me $3000 for training materials. What should I buy?
Quick about me:
* Skill level: Advanced
* Years of experience: 14
* Field: DevOps, SRE, Test Automation Engineering
_______
I'm basically allowed to buy almost anything as long as it's somehow relevant to **programming, devops, sre etc.** This money also includes travel and conference costs to _relevant_ conferences. It does not include "hardware".
My company also already covers certification fees for free, for most major providers (Google, AWS, Azure, RedHat, Terraform etc) and I have most of the ones I want already.
What would likely be the best-value thing I could get? I'm wondering if there's some kind of "lifetime subscription" to something I could buy.
P.S. If your suggestion has a trial before taking payment that would be great.
____
Edit: Asking here because /r/programming does not allow questions.
https://redd.it/10e7il4
@r_devops
Scaling OPA
Hi folks, I’m using OpenPolicyAgent for authorization (I like the policy-as-code thing) - but I’m unclear on how we’re supposed to manage it at scale. How do I manage loading different policy/data for different agents for different microservices? Polling bundle servers sucks. Any help? How are you doing it?
https://redd.it/10ebbjp
@r_devops
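For context, OPA's usual answer to this is per-agent bundle configuration: each sidecar is pointed at its own bundle resource on a central bundle service, with polling intervals (or long polling) tuned per deployment. A minimal sketch of one microservice's OPA config - the service name, URL, and bundle path below are placeholders, not anything from the thread:

```yaml
# Hypothetical per-service OPA config (config.yaml); names/URLs are made up.
services:
  policy-registry:
    url: https://bundles.example.com
bundles:
  service-a:
    service: policy-registry
    resource: bundles/service-a.tar.gz   # a different resource per microservice
    polling:
      min_delay_seconds: 30
      max_delay_seconds: 120
```

Each agent then only downloads the policy/data relevant to it, and the bundle service (or an object store behind it) becomes the single distribution point.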
Kubernetes Cluster Replication & Disaster Recovery
Is it possible to replicate a Kubernetes cluster for a large-scale enterprise mobile application deployed on Kubernetes running on-premise? We are tasked with coming up with a disaster recovery plan - is this possible using a tool like Velero?
https://redd.it/10eb599
@r_devops
need help improving ansible playbook readability
hello Ansible mates. I am completely new to Ansible and just finished writing my first playbook. I would appreciate it if someone could look at my playbook and give some pointers on how to improve readability, simplicity, and modularization.
What it does - automates NiFi self-signed cert renewal:
1. reads the host name from the target nifi.properties
2. generates new SSL certs
3. replaces the keystore and truststore in the existing location
4. replaces old passwords in the nifi.properties file with ones from the newly generated nifi.properties file
#ansible playbook to update nifi server self signed certs
#TODO: need to modularize
#TODO: need to externalize the paths and software versions
- name: ssl updation
  hosts: lower
  tasks:
    - name: reading old nifi.properties
      slurp:
        src: /app/software/nifi-1.12.0/conf/nifi.properties
      register: nifi_properties

    - name: convert old property file
      set_fact:
        content: "{{ nifi_properties.content | b64decode }}"

    - name: find host line
      set_fact:
        host_line: "{{ content | regex_search('(https.host)+.*') }}"

    - name: find host
      set_fact:
        host: "{{ host_line.split('=')[1] }}"

    - name: find key store password line
      set_fact:
        keystorePasswd_line: "{{ content | regex_search('(keystorePasswd=)+.*') }}"

    - name: find key store password
      set_fact:
        keystorePasswd: "{{ keystorePasswd_line.split('=')[1] }}"

    - name: find trust store password line
      set_fact:
        truststorePasswd_line: "{{ content | regex_search('(truststorePasswd=)+.*') }}"

    - name: find trust store password
      set_fact:
        truststorePasswd: "{{ truststorePasswd_line.split('=')[1] }}"

    - name: execute tls-toolkit.sh
      shell:
        chdir: /app/platform/nifi-toolkit-1.15.3
        cmd: "./bin/tls-toolkit.sh standalone -n {{ host }} -o /app/software/nifi-1.12.0 -O"

    - name: reading new nifi.properties
      become: yes
      become_user: root
      slurp:
        src: "/app/software/nifi-1.12.0/{{ host }}/nifi.properties"
      register: new_nifi_properties

    - name: convert new nifi.properties
      set_fact:
        new_content: "{{ new_nifi_properties.content | b64decode }}"

    - name: find key store password line in new nifi.properties
      set_fact:
        new_keystorePasswd_line: "{{ new_content | regex_search('(keystorePasswd=)+.*') }}"

    - name: find key store password in new nifi.properties
      set_fact:
        new_keystorePasswd: "{{ new_keystorePasswd_line.split('=')[1] }}"

    - name: find trust store password line in new nifi.properties
      set_fact:
        new_truststorePasswd_line: "{{ new_content | regex_search('(truststorePasswd=)+.*') }}"

    - name: find trust store password in new nifi.properties
      set_fact:
        new_truststorePasswd: "{{ new_truststorePasswd_line.split('=')[1] }}"

    - name: copy keystore.jks
      copy:
        remote_src: true
        src: "/app/software/nifi-1.12.0/{{ host }}/keystore.jks"
        dest: /app/software/nifi-1.12.0/certs/keystore.jks
        backup: true

    - name: copy truststore.jks
      copy:
        remote_src: true
        src: "/app/software/nifi-1.12.0/{{ host }}/truststore.jks"
        dest: /app/software/nifi-1.12.0/certs/truststore.jks
        backup: true

    - name: replace key store password
      replace:
        path: /app/software/nifi-1.12.0/conf/nifi.properties
        regexp: '(keystorePasswd=).*'
        replace: "keystorePasswd={{ new_keystorePasswd }}"
        backup: true

    - name: replace key password
      replace:
        path: /app/software/nifi-1.12.0/conf/nifi.properties
        regexp: '(keyPasswd=).*'
        replace: "keyPasswd={{ new_keystorePasswd }}"

    - name: replace trust store password
      replace:
        path: /app/software/nifi-1.12.0/conf/nifi.properties
        regexp: '(truststorePasswd=).*'
        replace: "truststorePasswd={{ new_truststorePasswd }}"

    - name: restart server
      shell:
        chdir: /app/software/nifi-1.12.0/
        cmd: ./bin/nifi.sh restart
      register: restart_output

    - name: debug output
      debug:
        var: restart_output
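On the readability question: one common pattern (sketched below, untested) is to hoist the repeated paths into `vars:` and collapse each slurp/regex pair into a single `set_fact` using a regex capture group, so extracting each value takes one task instead of two:

```yaml
# Sketch only - paths/versions hoisted into vars, one set_fact per value.
- name: Update NiFi self-signed certs
  hosts: lower
  vars:
    nifi_home: /app/software/nifi-1.12.0
    toolkit_home: /app/platform/nifi-toolkit-1.15.3
  tasks:
    - name: Read nifi.properties
      ansible.builtin.slurp:
        src: "{{ nifi_home }}/conf/nifi.properties"
      register: nifi_properties

    - name: Extract host in one step (capture group instead of split)
      ansible.builtin.set_fact:
        nifi_host: "{{ nifi_properties.content | b64decode
                       | regex_search('https.host=(.*)', '\\1') | first }}"
```

When `regex_search` is given capture-group arguments it returns a list of the captured values, hence the `| first`. The same pattern applies to the keystore/truststore passwords, and the whole extraction could then move into an included task file or role.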
https://redd.it/10ec4it
@r_devops
Prepping for your first on-call shift
Hey /r/devops,
I wrote a post titled Prepping for your first on-call shift. It's written more with software engineers (who also do ops) in mind, but I think the content is almost equally applicable to DevOps engineers as well.
If anyone has any additional tips for prepping for your first on-call shift, I'd love to hear about them.
https://redd.it/10eewfh
@r_devops
Bash or Z Shell?
Z Shell is the default for Mac now but I’m so used to using `.bash_profile` and everything. I know they’re pretty much the same, so this might be a dumb question, but what does your setup look like for local aliases and functions?
https://redd.it/10efpar
@r_devops
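One common answer (a minimal sketch - the file name `~/.shell_common` is just a convention, not anything standard): keep aliases and functions in a single shell-agnostic file and source it from both `~/.bashrc` and `~/.zshrc`, since plain aliases and POSIX-style functions behave the same in bash and zsh:

```shell
# Shared definitions, sourced by both bash and zsh.
cat > ~/.shell_common <<'EOF'
alias k='kubectl'
alias gs='git status'
greet() { echo "hello $1"; }
EOF

# Add this line to both ~/.bashrc and ~/.zshrc:
#   [ -f ~/.shell_common ] && . ~/.shell_common
. ~/.shell_common
greet devops   # -> hello devops
```

Shell-specific settings (prompt, completion) stay in each shell's own rc file; only the portable bits live in the shared file.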
Getting Atlantis-style change previews with Argo CD
Great writeup from /u/kkapelon on how to get Atlantis style preview for changes made with Argo CD. https://codefresh.io/blog/argo-cd-preview-diff/
I'm a big fan of the Atlantis way of showing Terraform plans in pull requests and love seeing this kind of functionality with Argo CD.
https://redd.it/10egjxh
@r_devops
Casual, off-the-record hangout with Netflix productivity team
Hi, everyone. Hope this is allowed. Aviator is hosting a casual, off-the-record hangout session for senior engineers and devops folks from various orgs to chat with each other, learn about how things are done at other companies, etc. No sales or Aviator product talk, no recording, no repurposing of content. Just an opportunity to learn from one another.
We usually keep attendance limited to a small group so that folks get to know each other better.
Nadeem from Netflix's eng productivity team will be doing an AMA-style chat. He's also worked on similar stuff at Box before, so if you want to chat about how things were done there, he'd be happy to tell you about it!
Sign up at dx.community or the tweet: https://twitter.com/Aviatorco/status/1613300589881868291?s=20
One is a simple google form and the other is a tweet pointing to the same form. Just want to make sure everyone understands there's no landing page etc. Hope to see some of you there.
https://redd.it/10elayg
@r_devops
Bounded Rationality in Software Development
Software systems are huge, complex systems with lots of moving parts. When we (developers, DevOps engineers, etc.) look at a problem from a limited view, we can't help our team.
We make decisions that we think are good, but they are bounded by the knowledge we have. This is known as bounded rationality - a concept from economics and decision-making which suggests that people and organizations make rational decisions, but their rationality is limited by the constraints they face.
So that is why I believe a T-shaped skill set is important -- you can zoom out and see the moving parts instead of only one component, which is also related to systems thinking.
Here, I am opening the discussion. Any opinions?
Note: Here is an article of mine: https://blog.demir.io/bounded-rationality-in-software-development-the-importance-of-learning-and-understanding-different-a082b5845f25
https://redd.it/10eef48
@r_devops
Release pipelines -- smooth as silk or still a pain (sometimes?)
Asking for a friend.
Are your deployment pipelines to dev/staging/prod super stable, or do they sometimes (rarely, even?) break - and are they a super pain when they do?
Or smooth sailing all the way?
https://redd.it/10eoms9
@r_devops
Any good contents/course for serverless framework
My company's infrastructure is based on the Serverless Framework and I definitely want to get involved in that. Does anyone know of any good content related to this?
https://redd.it/10elf1d
@r_devops
Free version of SonarQube broken??
I've been wasting entirely too much time trying to get Jenkins and SonarQube to work together. Has anyone been able to get the two to work recently? I am currently stuck on the SQ quality gate returning a 401 error even though the credentials work for the actual scan. I absolutely hate SonarQube at this point, so I am open to other open-source static analysis tools as well.
https://redd.it/10eojgx
@r_devops
How to master in Devops
I recently moved from IT help desk (5 years of experience in it) to DevOps engineer via an internal move at my company. I learnt DevOps mainly via YouTube and online courses (cloud platforms: AWS/Azure, along with Jenkins, Docker, basic Unix/Python, and Ansible). How can I become an expert in DevOps/DevSecOps?
https://redd.it/10eno0t
@r_devops
Noob CICD Question
In the CI portion of the pipeline, once a build is triggered by a change in a repository, does that source code typically get delivered to a dev environment, where it gets built, and then to a test environment, where unit testing occurs? Also, are these dev and test environments provisioned by users themselves (for instance, EC2 instances), or do tools like Jenkins somehow have these environments already set up?
In other words, with a typical CI/CD tool like Jenkins or Bamboo, where does the building, compiling, and testing take place?
https://redd.it/10ez5ka
@r_devops
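Not authoritative, but a rough mental model: with Jenkins/Bamboo, the build, compile, and unit-test steps usually run on the CI tool's own agents (workers) in a throwaway workspace - not in your dev/test environments; those environments typically only receive the already-built artifact at deploy time. A toy sketch of what an agent does per build (the build/test scripts here are stand-ins, not real Jenkins internals):

```shell
# Toy sketch of a CI agent's job: fresh workspace, build, then unit-test.
workspace=$(mktemp -d)            # each build gets a clean directory
cd "$workspace"
# (a real agent would `git clone` the repo here; we fake its contents)
printf 'echo build-ok\n'   > build.sh
printf 'echo tests-pass\n' > test.sh
sh build.sh                       # compile/build step
sh test.sh                        # unit-test step
```

Deployment to dev/staging/prod is then a separate CD step that ships the artifact produced in that workspace.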
What kind of self service tools did you build for your dev teams?
We are looking to build more automation and self-service tools for developers to use so DevOps doesn't block them. I've been trying to brainstorm what this looks like. Would anyone mind sharing useful tools that have been valuable and time-saving? Where do the requests happen?
I went to re:Invent in November and went to a conference where they showed off 'DevOps as a service', which I thought was interesting.
https://redd.it/10ed1l1
@r_devops
TypeScript CI Tool to Identify Nested Loops
I'm looking for a CI tool that can identify certain code patterns in TypeScript and enforce that they have unit tests written. For instance, enforcing that any function with nested for loops has unit tests before it passes CI.
Do any tools like this exist? I'm only newly familiar with ESLint, but this functionality seems outside the scope of it.
Thank you!
https://redd.it/10f22ee
@r_devops
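For what it's worth, the robust version of this check is an AST-based lint rule, but a crude text heuristic can prototype the CI gate. A sketch (the file name and the `needs-unit-tests` marker are made up, and counting `for (` lines is only an approximation of real nesting detection):

```shell
# Crude prototype: flag a TS file containing >= 2 `for (` lines.
cat > sample.ts <<'EOF'
export function sum(xs: number[][]) {
  let total = 0;
  for (const row of xs) {
    for (const x of row) { total += x; }
  }
  return total;
}
EOF
if [ "$(grep -c 'for (' sample.ts)" -ge 2 ]; then
  echo "needs-unit-tests: sample.ts"
fi
```

A real implementation would walk the TypeScript AST (which is exactly what a custom ESLint rule does), then cross-reference flagged functions against coverage data to enforce the "has unit tests" part.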
Architectural concerns about our CI/CD pipeline
Hi all, Hope you are having a great week :).
So I just arrived at a new position as the only DevOps engineer here, and I have some concerns about decisions that were made in the past. Let me explain.
We are in an environment with 3 microservices : A, B and C. All of which is built and deployed using Gitlab CI on EKS, using Karpenter as an autoscaling solution.
In order to assure that the devs can test their environments without having to disturb others, it has been decided that every branch would be deployed on the cluster so it can be tested. This means that if Microservice A has branches A1,A2,..,An; then all these branches will be deployed. Moreover, for each one of those branches, we will deploy a B:latest, and C:latest so we can assure that the WHOLE ecosystem is up and running. Now apply the same thing I said about A to B and C, in every possible combination.
My first Question is : Why ? why can't we just validated a PR and then, when it's merged, deploy the "Developpers" version on the cluster ? adding more replicas/changing the rollout strategy would be enough to assure availability for the devs to not disturb each others right ?
I wish this was the end of my troubles; Those microservices are in React.
And in order for microservice A to work properly, it needs the FQDN of B and C DURING RUNTIME. and since A,B and C are deployed PER BRANCH, all the resources and ingresses and namespaces are a composition of the branch name.
For example :
Dev created branch called "feature-234-foo" for microservice B, than the namespace would be called "feature-234-foo" and the ingress would be called "feature-234-foo.something.sth.com". And A would need to have that DURING runtime.
​
The solution they came up with ? Rebuilding microservice A, after B and C are deployed and have their ingresses "generated".
​
If you are having headaches reading this, it's fine, i already found some solutions for these using javascript files as config map where i would inject directly the React environment variables for it to be consumed by A.
​
I feel like the team before me tried to solve all the solutions they had using gitlab CI, which turned it into a clusterFuck of a code that i need to live with / completely change.
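Roughly, that workaround looks like this: build A once, and have the CD job write the per-branch FQDNs into a small env.js that's mounted from a ConfigMap and loaded via a script tag before the bundle. Everything here (window.__ENV__, the example hosts, the getEnv helper) is illustrative, not an established convention:

```javascript
// env.js: deploy-time config for a React SPA, mounted from a Kubernetes
// ConfigMap and loaded via <script src="/env.js"> before the app bundle.
// The CD job regenerates this file per branch, substituting the actual
// ingress hosts of B and C — so A never needs a rebuild.
(function () {
  const g = typeof window !== "undefined" ? window : globalThis;
  g.__ENV__ = {
    B_BASE_URL: "https://feature-234-foo.something.sth.com",
    C_BASE_URL: "https://feature-234-foo.something.sth.com",
  };
})();

// In application code, read config at RUNTIME instead of baking
// process.env.REACT_APP_* values in at build time:
function getEnv(key, fallback) {
  const g = typeof window !== "undefined" ? window : globalThis;
  return (g.__ENV__ && g.__ENV__[key]) || fallback;
}
```

The trade-off versus rebuilding A per branch: one image, one build, and the branch-specific wiring lives entirely in the ConfigMap that the deploy job already knows how to template.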
https://redd.it/10efa9a
@r_devops
Question: How do companies document their business processes?
For example, do companies keep track of who is responsible for which processes, how each process is started, and which department runs it?
Is there a standard, single source of truth for managers or the C-suite that I'm not aware of? Or is it mostly memorized by a few employees?
https://redd.it/10ejxsq
@r_devops
hosted Plastic SCM and AWS CodePipeline
We're researching moving our pipeline builds from on-site hosting to AWS, since they have some packages specifically built around game development. We currently use the cloud-hosted plan for Plastic SCM. What I'm wondering is: can AWS CodePipeline communicate with that service easily, or will something custom have to be built — or would we have to host the whole repository and service on AWS? I wasn't able to find anything in Plastic's documentation or AWS's.
https://redd.it/10ejsl2
@r_devops