Terraform CI workflow
How do people CI their Terraform repos?
We are about to add a "prod" environment and are looking for a very simple workflow at this stage (i.e. we don't need any fancy features yet).
Do you use a branch per env? Do you have a conventional master/develop structure, but a folder for each env?
https://redd.it/fna2oe
@r_devops
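One common answer, sketched here as one option rather than a recommendation: keep a single master branch and give each environment its own folder with its own state backend, so CI only ever plans/applies one environment at a time (directory and module names below are illustrative):

```
repo/
├── modules/            # shared modules used by every environment
│   └── app/
├── envs/
│   ├── dev/
│   │   ├── main.tf     # instantiates ../../modules/app with dev settings
│   │   └── backend.tf  # separate remote state for dev
│   └── prod/
│       ├── main.tf
│       └── backend.tf  # separate remote state for prod
```

CI can then run `terraform plan` per changed folder on pull requests, and apply to the prod folder only from master.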
Looking to build a highly scalable scheduling service. Would like to get feedback
Would like to know if scheduling up to millions of jobs is a problem for you today. If it is, I would like to know what solutions you use today as well as what you like/dislike about current options.
https://redd.it/fn68fw
@r_devops
QA Engineer -> DevOps. Where do I start?
I've been looking for some advice/tips on where to begin my own DevOps journey.
I've been a QA Engineer for about 3 years (first job out of college). So far I feel like I have a pretty good understanding of automation, and now I want to get my feet wet in DevOps.
I have been learning CI/CD (Bamboo) at work, and I've started doing a few small releases for hands-on experience.
I want to learn more about DevOps; where should I begin?
1. How do I get hands-on experience with a demo project? (I do own a few websites where I can practice.)
2. Should I start looking into certs (Azure or AWS)?
3. What are the most important skills for DevOps beginners to learn?
Any advice/tips are welcome.
Stay safe, and thank you for your time!
https://redd.it/fn5um6
@r_devops
Best practice to use cache for GitLab CI
Hi guys!
What is your best practice for using the cache in GitLab CI with a Node.js app? I've tried a lot of tutorials, but build time has only increased.
Here is what I used:
```yaml
cache:
  untracked: true
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
```
https://redd.it/fnhzux
@r_devops
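For comparison, a pattern often suggested for Node.js jobs (hedged: `cache:key:files` requires GitLab 12.5 or later) keys the cache on the lockfile rather than caching every untracked file, so the cache is rebuilt only when dependencies actually change:

```yaml
cache:
  key:
    files:
      - package-lock.json   # cache invalidated only when dependencies change
  paths:
    - node_modules/
```

`untracked: true` caches all untracked files in the build directory, which can make the cache large and slow to archive and restore; caching only `node_modules/` is usually faster.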
Flask, uWSGI, Kubernetes: A sanity check
We are working on a distributed system with a backend written in Flask. We eventually want to run this in parallel. We already have a Kubernetes cluster, and we can scale this backend horizontally inside the cluster. Inside the backend pod, we run Flask behind uWSGI and Nginx. The system is not yet in production, so the number of uWSGI processes is set to 2. However, we have had some issues (in particular with health checks) when running multiple instances inside the same pod.
So, my question: is this multiple-point horizontal scaling even sane? Can we reduce the number of instances in each pod to 1 and do all instance scaling in Kubernetes, or would we be wasting resources? If we can reduce the instances, does uWSGI still serve a purpose? I understand that we shouldn't run Flask with the development server, but perhaps there is an alternative to uWSGI meant for running a single instance?
https://redd.it/fnkc4d
@r_devops
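As a sanity check of the one-worker-per-pod direction, here is a minimal uWSGI config sketch (the module path and port are assumptions, not taken from the post) that runs a single worker and leaves parallelism to Kubernetes replicas:

```ini
[uwsgi]
; assumes the Flask application object is `app` in app.py
module = app:app
master = true
; one worker per pod; let Kubernetes replicas provide parallelism
processes = 1
threads = 2
; or a unix socket if Nginx stays in front of uWSGI
http = 0.0.0.0:8000
; exit cleanly on the SIGTERM Kubernetes sends at pod shutdown
die-on-term = true
```

Even with a single process, a production WSGI server still serves a purpose: the Flask development server is single-threaded and not hardened for production traffic, whereas uWSGI (or an alternative such as gunicorn) handles concurrency, timeouts, and graceful shutdown.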
CI/CD with Jenkins inside Kubernetes
How can I achieve CI/CD in k8s with Jenkins deployed as a pod inside it? All the resources I've found online show setting up a standalone Jenkins server that can access Kubernetes...
I've built the cluster on AWS using kops, and I want to set up CI/CD with Jenkins inside Kubernetes.
Help would be appreciated.
https://redd.it/fnj71v
@r_devops
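For reference, one way this is commonly bootstrapped (a hedged sketch; the chart repository URL, namespace, and release name are assumptions to verify against the current Jenkins Helm chart docs) is to install Jenkins in-cluster with Helm, then let the Jenkins Kubernetes plugin spawn build agents as pods:

```shell
# Install Jenkins as a pod inside the cluster (Helm 3 syntax).
helm repo add jenkins https://charts.jenkins.io
helm repo update
kubectl create namespace jenkins
helm install jenkins jenkins/jenkins --namespace jenkins
```

With this setup the controller runs in-cluster and already has API access via its service account, so no standalone Jenkins server outside the cluster is needed.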
Packer + Ansible + WinRM to create Windows images
Hey, I am trying to create Windows images using Packer and Ansible. I have little experience with both tools and am running into some problems. I hope someone can help me.
The error I'm getting is the following:
```
==> openstack: Connected to WinRM!
==> openstack: Provisioning with Ansible...
==> openstack: Executing Ansible: ansible-playbook --extra-vars packer_build_name=openstack packer_builder_type=openstack -i /tmp/packer-provisioner-ansible397519993 /home/ubuntu/winim/2019/ansible/main.yaml --private-key /tmp/ansible-key881940738 --connection packer -vvvv --extra-vars ansible_shell_type=powershell ansible_shell_executable=None
    openstack: ansible-playbook 2.9.6
    openstack: config file = /etc/ansible/ansible.cfg
    openstack: configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
    openstack: ansible python module location = /usr/lib/python2.7/dist-packages/ansible
    openstack: executable location = /usr/bin/ansible-playbook
    openstack: python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
    openstack: Using /etc/ansible/ansible.cfg as config file
    openstack: setting up inventory plugins
    openstack: host_list declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method
    openstack: script declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method
    openstack: auto declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method
    openstack: Parsed /tmp/packer-provisioner-ansible397519993 inventory source with ini plugin
    openstack: [WARNING]: Skipping plugin
    openstack: (/home/ubuntu/.ansible/plugins/connection_plugins/packer.py) as it seems to be
    openstack: invalid: while scanning an alias in "<byte string>", line 9, column 7 did not
    openstack: find expected alphabetic or numeric character in "<byte string>", line 9,
    openstack: column 8
    openstack: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
    openstack:
    openstack: PLAYBOOK: main.yaml ************************************************************
    openstack: Positional arguments: /home/ubuntu/winim/2019/ansible/main.yaml
    openstack: private_key_file: /tmp/ansible-key881940738
    openstack: become_method: sudo
    openstack: inventory: (u'/tmp/packer-provisioner-ansible397519993',)
    openstack: forks: 5
    openstack: tags: (u'all',)
    openstack: extra_vars: (u'packer_build_name=openstack packer_builder_type=openstack', u'ansible_shell_type=powershell ansible_shell_executable=None')
    openstack: verbosity: 4
    openstack: connection: packer
    openstack: timeout: 10
    openstack: 1 plays in /home/ubuntu/winim/2019/ansible/main.yaml
    openstack:
    openstack: PLAY [Start of Ansible playbook] ***********************************************
    openstack:
    openstack: TASK [Gathering Facts] *********************************************************
    openstack: task path: /home/ubuntu/winim/2019/ansible/main.yaml:1
    openstack: The full traceback is:
    openstack: Traceback (most recent call last):
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 146, in run
    openstack:     res = self._execute()
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 601, in _execute
    openstack:     self._connection = self._get_connection(variables=variables, templar=templar)
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 895, in _get_connection
    openstack:     ansible_playbook_pid=to_text(os.getppid())
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 558, in get
    openstack:     self._load_config_defs(name, self._module_cache[path], path)
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 293, in _load_config_defs
    openstack:     dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
    openstack:   File "/home/ubuntu/.local/lib/python2.7/site-packages/yaml/constructor.py", line 74, in get_single_data
    openstack:     node = self.get_single_node()
    openstack:   File "ext/_yaml.pyx", line 707, in _yaml.CParser.get_single_node (ext/_yaml.c:10484)
    openstack:   File "ext/_yaml.pyx", line 725, in _yaml.CParser._compose_document (ext/_yaml.c:10831)
    openstack:   File "ext/_yaml.pyx", line 776, in _yaml.CParser._compose_node (ext/_yaml.c:11813)
    openstack:   File "ext/_yaml.pyx", line 890, in _yaml.CParser._compose_mapping_node (ext/_yaml.c:13717)
    openstack:   File "ext/_yaml.pyx", line 732, in _yaml.CParser._compose_node (ext/_yaml.c:10932)
    openstack:   File "ext/_yaml.pyx", line 905, in _yaml.CParser._parse_next_event (ext/_yaml.c:13923)
    openstack: ScannerError: while scanning an alias
    openstack:   in "<byte string>", line 9, column 7
    openstack: did not find expected alphabetic or numeric character
    openstack:   in "<byte string>", line 9, column 8
    openstack: fatal: [default]: FAILED! => {
    openstack:     "msg": "Unexpected failure during module execution.",
    openstack:     "stdout": ""
    openstack: }
```
Below is my Ansible YAML file:
```yaml
- name: Start of Ansible playbook
  hosts: all
  tasks:
    - name: Pingerdeping
      win_ping:
        data: crash
```
Below is my Packer JSON file (private information removed and displayed as <some_text>):
```json
{
  "variables": {
    "os_username": "{{env `OS_USERNAME`}}",
    "os_tenantid": "{{env `OS_PROJECT_ID`}}",
    "os_domainname": "{{env `OS_USER_DOMAIN_NAME`}}",
    "creator": "{{env `USER`}}",
    "av_zone": "<some_zone>",
    "flavor": "<some_flavor>",
    "security_groups": "allow-all",
    "network": "<some_network>",
    "source_image": "<some_image>",
    "instance_build": "windows_2019_std_base_packer_builder-{{isotime \"02-Jan-06 03:04:05\"}}",
    "dest_image": "windows_2019_std_base_packer {{isotime \"02-Jan-06 03:04:05\"}}"
  },
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "/home/ubuntu/winim/2019/ansible/main.yaml",
      "extra_arguments": [
        "--connection",
        "packer",
        "-vvvv",
        "--extra-vars",
        "ansible_shell_type=powershell ansible_shell_executable=None"
      ]
    }
  ],
  "builders": [
    {
      "type": "openstack",
      "communicator": "winrm",
      "winrm_username": "administrator",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "winrm_port": 5986,
      "winrm_timeout": "12h",
      "domain_name": "{{user `os_domainname`}}",
      "username": "{{user `os_username`}}",
      "tenant_id": "{{user `os_tenantid`}}",
      "identity_endpoint": "<some_endpoint>",
      "availability_zone": "{{user `av_zone`}}",
      "image_name": "{{user `dest_image`}}",
      "source_image": "{{user `source_image`}}",
      "networks": "{{user `network`}}",
      "security_groups": "{{user `security_groups`}}",
      "flavor": "{{user `flavor`}}"
    }
  ]
}
```
Hope someone can help me. Thanks in advance.
Cheers
https://redd.it/fnir07
@r_devops
Ansible 101 Streaming Series by Jeff Geerling on YouTube
u/geerlingguy continues to be awesome. He will run a 1-hour live stream every week, starting this Wednesday at 3 pm UTC, going through Ansible for DevOps on YouTube.
[https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube](https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube)
**Credit**
Just found out I can't crosspost a post with links to this community. The original post was on r/ansible: [https://www.reddit.com/r/ansible/comments/fn3sfg/ansible_101_by_jeff_geerling_new_series_on_youtube/](https://www.reddit.com/r/ansible/comments/fn3sfg/ansible_101_by_jeff_geerling_new_series_on_youtube/?utm_source=share&utm_medium=web2x)
https://redd.it/fnefmw
@r_devops
Has anyone here setup Minikube before? Is it easy to install offline?
Sorry if this seems like a basic question, but we're currently working with Docker Swarm for local development, and I've been tasked with bringing across and installing Minikube to test out its features and work out the install pains. I work on an air-gapped network with no internet connection, so I'm trying to find the best way to bring it across so we can install it without issues.
My end goal is to have all the files I need on our secure network, so that when we deploy a new CentOS VM via Ansible for development, we can simply have a playbook that runs the commands to set up and install Minikube locally on that VM for developers to start using.
My questions are basically:
1. Can I simply download the binary as outlined [here](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-minikube-via-direct-download) and run the install command on our secure network, and that will set everything up, or does it require internet access to download additional packages/libraries when installing?
2. If the answer to 1 is that it requires internet, how can I solve this problem? Will I need to download the source and build locally first, then package it and bring that version across to our secure network? The (sparse) offline documentation refers to a [disk cache](https://minikube.sigs.k8s.io/docs/reference/disk_cache/) where it stores all downloaded information, but I don't understand how I can use this to achieve my goals.
I should also note I am just a developer and not a DevOps engineer, so please bear with me if I am missing any obvious solutions here.
https://redd.it/fnizhe
@r_devops
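For what it's worth, a rough sketch of the seed-the-cache approach (hedged: flag availability depends on the Minikube version, so verify `--download-only` against the docs for the version you bring across):

```shell
# On a machine WITH internet access: download and cache everything
# Minikube needs without starting a cluster, then archive the cache.
minikube start --download-only
tar czf minikube-cache.tar.gz -C "$HOME" .minikube

# On the air-gapped host: unpack the cache next to the copied minikube
# binary, then start normally; Minikube should find everything locally.
tar xzf minikube-cache.tar.gz -C "$HOME"
minikube start
```

The archive step is exactly the kind of thing the Ansible playbook could distribute to each new CentOS VM.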
Cloud-based Virtual Desktops on Google Cloud Platform
I recently spent some time getting Cloud-based virtual desktops running on Google Cloud Platform via:
- OS Login (GSuite authentication to the instance instead of SSH)
- Chrome Remote Desktop
I hope this helps anyone tasked with providing virtual desktops for remote working!
https://github.com/VJftw/cloud-desktops
https://redd.it/fntzof
@r_devops
Terraform and Octopus Deploy
Hello all!
I'm currently trying to improve my DevOps knowledge and was trying to think of a project to implement Terraform with Octopus Deploy.
I have set up a repo with the source code of a Node.js application, with a webhook that listens for changes and builds the package, which is then pushed to Octopus Deploy, but I'm unsure how to move forward. I understand Octopus Deploy can help push packages to the dev, testing, and production environments, but I'm uncertain how to implement this. Would it be possible to create a Terraform template that automates setting up these environments for me in my CI/CD pipeline and then pushes the package to them?
Thanks for any help!
https://redd.it/fniiip
@r_devops
Any ideas on how to release AMIs or Azure Managed Images to customers?
How is everyone releasing/managing AMIs or Azure Managed Images to their customers?
https://redd.it/fnua0r
@r_devops
Does Kubernetes restart failed resources with kind: Pod automatically, or must they be managed by a controller like a Deployment to maintain their desired state?
https://redd.it/fnr0cj
@r_devops
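For context on the distinction the question draws: the kubelet does restart a bare Pod's containers according to its `restartPolicy`, but if the Pod object is deleted or its node is lost, nothing recreates it; a controller such as a Deployment is what maintains a desired replica count. A minimal sketch of each (names and images are illustrative):

```yaml
# Bare Pod: containers restart in place (restartPolicy: Always is the
# default), but the Pod is never rescheduled if its node goes away.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
---
# Deployment: the controller recreates Pods anywhere in the cluster to
# keep the desired number of replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: nginx
```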
Google SRE-SE Interview
I have a 15-minute phone interview with Google for an SRE-SE role, and I have been asked to study networking, Linux, data structures, and algorithms. What is the best way to prepare, considering I have only 4 days?
https://redd.it/fnq1pu
@r_devops
Security applications that can be added to Atlantis Terraform relatively easily?
Basically what the title says.
I work on a small, relatively new, underfunded InfoSec team and am looking to expand security into our Atlantis pipeline on a limited budget. After doing some research, I've found a lot of near-duplicate code-review/security-vulnerability-scanning apps, so I'm curious whether anyone uses a specific one in conjunction with Atlantis, or can offer some guidance on where to look. Thanks!
https://redd.it/fnr4fk
@r_devops
CI builds for Windows and macOS
I am trying to do desktop builds for macOS and Windows. I am trying Jenkins, but I wanted to know what other people are using to do this.
https://redd.it/fnq6gv
@r_devops
Deployment workflow for multiple Kubernetes clusters
As a DevOps engineer I am currently maintaining a large website of an insurance company. At the moment we are in the migration phase of the whole application stack into a Kubernetes cluster.
More specifically, I am talking about several clusters: the Dev, Testing, and Production environments are each deployed in a separate cluster.
Each Git branch is deployed into its own cluster (development => dev, stage => testing, master => production, feature-1 => dev-f1).
Additionally, more clusters for load testing and for new developments of (large) features will be set up.
Currently, I use [Buddy](https://buddy.works/) as our CI/CD tool. I have set up several pipelines to build the Docker images; additionally, there is one deployment pipeline per environment and application. As you can imagine, I have quickly ended up with a considerable number of different pipelines.
To deploy the docker image to the correct Kubernetes cluster, I check the current branch with a shell script and then set the commit ID in a variable (e.g. `USER_SERVICE_IMAGE_DEV`, `USER_SERVICE_IMAGE_TEST`, `USER_SERVICE_IMAGE_PRODUCTION`). Unfortunately, the variables cannot be created dynamically, so I need to manually create a new variable when a new Git branch is added.
I then use this variable to build the Docker Image and push it into the Docker Registry.
In the build pipeline (which I run separately) I read the variable again to load the current image and deploy the corresponding version to Kubernetes.
I started with this method to quickly start provisioning the Kubernetes clusters, but now I realize that the management of the different branches, clusters and pipelines becomes very complex.
As soon as a new cluster is set up, I have to adjust the build scripts to account for the new git branch.
Do you have a similar setup in your environment? What do your CI/CD processes look like? Are there any tools that could improve my workflow?
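One way to cut the per-branch bookkeeping described above is to derive the target environment and image tag from the branch name in a single script, instead of maintaining a `USER_SERVICE_IMAGE_*` variable per branch. A minimal sketch, assuming the branch-to-cluster mapping from the post (the image naming scheme is an assumption):

```shell
#!/bin/sh
# Map a git branch to its target environment in one place, so a new
# feature branch needs no new pipeline variable or script change.
branch_to_env() {
  case "$1" in
    master)    echo "production" ;;
    stage)     echo "testing" ;;
    feature-*) echo "dev-f${1#feature-}" ;;   # feature-1 -> dev-f1
    *)         echo "dev" ;;
  esac
}

# Fall back to sane defaults when run outside a git checkout.
BRANCH=${BRANCH:-$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo development)}
COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo local)
ENV=$(branch_to_env "$BRANCH")

# One tag scheme for every image instead of one variable per environment.
echo "deploying user-service:${ENV}-${COMMIT} to cluster ${ENV}"
```

Both the build and deploy pipelines can call the same function, so adding a cluster only means adding (at most) one `case` arm.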
https://redd.it/fnfooh
@r_devops
This Week In DevOps
Google Cloud Next was just postponed "until further notice". Does anyone have an interest in online conferences focused on DevOps?
Other announcements were fairly light this week, but some preview releases went out and we did have a new Terraform provider announcement from HashiCorp. To read more, check out: [https://thisweekindevops.com/2020/03/23/weekly-roundup-march-23rd-2020/](https://thisweekindevops.com/2020/03/23/weekly-roundup-march-23rd-2020/)
https://redd.it/fnzw58
@r_devops
Help with Jenkins and 'npm test'
Hello.
I am trying to run npm test on a Jenkins pipeline, but as soon as it tries to run, I get an error message saying "Cannot find module ./env.js". Any ideas as to what is going on? I've been stuck on this for weeks now.
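A "Cannot find module ./env.js" error usually means the file never reached the Jenkins workspace rather than anything npm-specific: `env.js`-style config files are often gitignored (so they exist locally but not in the CI checkout), or there is a case mismatch that only bites on Linux agents. A couple of hypothetical checks to run inside the workspace (paths are guesses, adjust to your repo layout):

```shell
# 1. Is env.js tracked by git at all? grep -i also catches case
#    mismatches (Env.js vs env.js) that break on case-sensitive
#    Linux agents but pass on macOS/Windows.
git ls-files 2>/dev/null | grep -i 'env\.js' || echo "env.js is not committed"

# 2. Is it excluded by .gitignore? Environment config files often are,
#    so they never reach the CI checkout.
git check-ignore -q env.js 2>/dev/null \
  && echo "env.js is gitignored: generate it in the pipeline or commit it" \
  || echo "env.js is not ignored here"
```

If it turns out to be gitignored, a common fix is to have an early pipeline step generate the file (e.g. from credentials/environment variables) before `npm test` runs.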
Thanks.
https://redd.it/fnwvrb
@r_devops
Need Recommendation for Secrets Management
My company has several pieces of data containing sensitive information that our employees use on a regular basis. It's not gigabytes of data, but rather just a few spreadsheets' worth. We want to isolate each "document" of data, which fall into the following types:
* Server Info
* Username/PW for Customer Administration websites
* Spreadsheets with contact details, contract details, etc.
Additionally, we would like to use the same solution as a credentials manager for our users, so plugins for Chrome and Firefox are a must.
Currently I am leaning towards LastPass because it allows me to do all of this.
Other features we need:
* Data ownership (assign a user to own a Datum)
* Ability to share/deny access to any Data by user
* Ability to immediately revoke access to any Data by user
We are using Azure AD for user management, and it would be great if the solution could use Windows credentials to authenticate users instead of nagging them for credentials all the time.
We are not married to any vendor or platform. Non-Windows solutions need to have a Docker container we can host on Azure.
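If a self-hosted option such as HashiCorp Vault ends up on the shortlist (it ships as an official Docker image and supports OIDC auth, which can front Azure AD), the ownership/share/revoke requirements map onto per-path policies, and detaching a policy from a user or group revokes access immediately. It would not cover the browser-extension requirement, though. A hypothetical policy sketch, where the secret path is a placeholder:

```hcl
# readers.hcl -- hypothetical Vault policy granting read-only access
# to one shared "document" (a KV v2 path); attach it to a group to
# share, detach it to revoke immediately.
path "secret/data/customer-admin-sites/*" {
  capabilities = ["read", "list"]
}
```

For the browser-plugin/credential-manager side of the requirements, password managers with Docker-hostable server editions and Chrome/Firefox extensions may be a closer fit than a pure secrets engine.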
Thanks!
https://redd.it/fnw5zl
@r_devops