Install software and configure windows firewall on win10 client
Good day all! I’m trying to automate software installation and Windows Firewall configuration on Windows 10 clients. It will most likely be a one-time install and configure, plus creating local accounts with passwords. Is this possible with Puppet or Chef? I’d appreciate it if someone could point me to some resources or modules. I’m new to Puppet and Chef; I’ve read and watched some tutorials but don’t know what else they can really do.
It would also be good to see all workstations in a web portal and manage them from there.
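For illustration, a minimal Puppet sketch of the kind of thing being asked about (hedged: the `chocolatey` package provider assumes the `puppetlabs/chocolatey` Forge module; the firewall rule uses plain `netsh` via `exec` to avoid depending on a specific firewall module):

```puppet
# Install a package via Chocolatey (assumes the puppetlabs/chocolatey module)
package { '7zip':
  ensure   => installed,
  provider => 'chocolatey',
}

# Create a local account
user { 'svc_account':
  ensure   => present,
  password => 'ChangeMe123!',   # in practice, pull this from Hiera/eyaml, not plain text
}

# Open a firewall port with the built-in netsh tool; "unless" keeps it idempotent
exec { 'allow-rdp':
  command => 'netsh advfirewall firewall add rule name="RDP" dir=in action=allow protocol=TCP localport=3389',
  path    => 'C:\Windows\System32',
  unless  => 'netsh advfirewall firewall show rule name="RDP"',
}
```

For the web-portal part: Puppet Enterprise’s console (or Foreman, for open-source Puppet) gives a node overview and lets you manage agents from a browser.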
Thank you very much! Appreciate your enlightenment! :)
https://redd.it/fmz1po
@r_devops
Spinnaker - pass value on manual judgement stage
Is it possible to pass a value to a parameter on a Manual Judgment stage? I'd like to be able to define a tag name for Git and Docker when I intend to deploy to production.
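One approach (hedged; the stage name below is made up): a Manual Judgment stage can define a set of predefined judgment inputs, and downstream stages can read the chosen input with the `#judgment` SpEL helper:

```
${ #judgment("Deploy to prod?") }
```

As far as I know the judgment inputs are a fixed list rather than free text, so an arbitrary tag name is usually injected via pipeline parameters or the trigger payload instead.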
https://redd.it/fm1nky
@r_devops
cron weekly - super useful topics ... a fav blog newsletter
[https://ma.ttias.be/cronweekly/issue-126/](https://ma.ttias.be/cronweekly/issue-126/)
https://redd.it/fn0jnj
@r_devops
How do I get out of tutorial hell
Hi Guys,
I decided to start learning DevOps, beginning with fundamentals like Linux. However, I have no clue how to make my way through all the resources available. I noticed the beginner resources in this subreddit, but I am not sure what I should prioritize. Any tips on where I should start?
https://redd.it/fn1jwz
@r_devops
Developing your own Kubernetes controller in Java
In the previous post, we laid out the foundations to create our own custom Kubernetes controller. We detailed what a controller was, and that its only requirement is to be able to communicate with HTTP/JSON. In this post, we are going to finally start developing it.
The technology stack can be Python, NodeJS or Ruby.
As a use-case, we will implement the sidecar pattern: every time a pod gets scheduled, a sidecar pod will be scheduled along it as well. If the former is removed, the latter needs to be as well.
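As a rough illustration of the sidecar pattern described above (not the post's Java code; the wiring comments assume the official `kubernetes` Python client, and the naming helper is our own invention):

```python
# Minimal sketch of a sidecar controller's reconcile logic.
# The naming convention (<pod>-sidecar) is a hypothetical choice.

def sidecar_name(pod_name):
    """Derive the companion pod's name from the watched pod's name."""
    return pod_name + "-sidecar"

def reconcile(event, pod_name, create_pod, delete_pod):
    """On ADDED, schedule the companion pod; on DELETED, remove it.
    Skip pods that are themselves sidecars to avoid infinite recursion."""
    if pod_name.endswith("-sidecar"):
        return
    if event == "ADDED":
        create_pod(sidecar_name(pod_name))
    elif event == "DELETED":
        delete_pod(sidecar_name(pod_name))

# Wiring it to the API server would look roughly like:
# from kubernetes import client, config, watch
# config.load_incluster_config()
# v1 = client.CoreV1Api()
# for ev in watch.Watch().stream(v1.list_namespaced_pod, "default"):
#     reconcile(ev["type"], ev["object"].metadata.name, create, delete)
```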
Read on at https://blog.frankel.ch/your-own-kubernetes-controller/2/
https://redd.it/fn3shs
@r_devops
Tool to manage multiple ansible vault password for DevOps
Hi,
I'm writing this little topic to share a possible way to manage your Ansible Vault passphrases. If you already use ansible-vault, you know it's a good way to secure your sensitive vars, but it's also complicated to work with in a team, with CI, or with different accreditation levels.
The alternatives are lookup plugins (very verbose to use in playbooks, and complicated in the case of group\_vars) or the poorly known vars plugins (you need to write your own for your use case).
Ansible Vault is complicated in large teams, because one vaulted file (or string) has only one passphrase to encrypt/decrypt it.
So I propose **a new tool to manage ansible vault keys automatically** and decrypt vaulted files (or strings) automatically, without the end user needing to know where the keys are stored. Accreditation is delegated to the keyring system of your choice.
In brief:
`pip install ansible-vault-manager`, then use `ansible-vault-manager-client create [...]` instead of `ansible-vault create [...]`, then execute `ansible-vault-manager-client get-usable-ids [...]` before each Ansible run.
* It will automatically store Ansible Vault keys in a keystore via one of the existing plugins (currently AWS SSM and the filesystem; S3, GPG file, Bitwarden and others are on the todo list), and manage a local `_metadata.yml` file (this file must be versioned for all Ansible users).
* It will try every possible key storage (according to the `_metadata.yml` file) to verify your accreditation without failures.
* It will provide all usable keys to Ansible at runtime (using the native vault-id feature).
Take a look at [https://github.com/Smile-SA/ansible-vault-manager](https://github.com/Smile-SA/ansible-vault-manager)
If you have questions or suggestions, don't hesitate. The tool is not complete yet, but this MVP has been very useful for me.
https://redd.it/fmyr2a
@r_devops
CI with credentials
I'm using GCP and AWS for a project, and I have my authentication credentials stored on my local machine. I want to be able to upload the project to GitHub for testing on [circle.ci](https://circle.ci) and GitHub Actions.
I don't want to upload the keys to GitHub. I've looked into secrets for CircleCI and GitHub Actions. That sounds great, but I'm not sure how to use them properly so that I can run both on my local machine and on CircleCI/GitHub Actions.
For example, if I change the code to read SECRET.AUTH, that would work on GitHub Actions or [circle.ci](https://circle.ci), but I don't have that path on my machine.
```python
# python script
def upload(file, key):
    client.auth(key)
    # do something

upload("train.csv", "key.json")  # not uploading the JSON key to GitHub

# option for GitHub Actions or CircleCI
upload("train.csv", SECRET.AUTH)  # how would I run this on my local machine?
```
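A common pattern for this local-vs-CI split is to read the credential location from an environment variable with a local fallback, instead of hardcoding a path. A minimal sketch (the variable name `GCP_KEY_PATH`, the local default path, and the `upload` stub are all hypothetical):

```python
import os

def resolve_key_path():
    """Return the credentials path: in CI, the secret is exposed as an
    environment variable; locally, fall back to a default key file."""
    return os.environ.get("GCP_KEY_PATH", os.path.expanduser("~/.keys/key.json"))

def upload(file, key_path):
    # hypothetical client call; auth with whichever key path was resolved
    # client.auth(key_path)
    return (file, key_path)

upload("train.csv", resolve_key_path())
```

In CircleCI you would set `GCP_KEY_PATH` (or the key contents) as a project environment variable; in GitHub Actions, via `env:` with `${{ secrets.... }}` in the workflow. The code itself stays identical in both places.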
https://redd.it/fn3czu
@r_devops
[Microsoft Azure] I have a few questions regarding some tools. Terraform, Salt Stack.
Why would I need to use Terraform or Salt if Azure comes with tools such as Batch?
Do DevOps teams get more out of these external tools than out of the built-in functions and tools?
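Context for the question: Terraform isn't a replacement for a compute service like Batch; it declaratively manages the resources themselves, across providers. A minimal hedged sketch of what that looks like on Azure (names are examples):

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "demo-rg"
  location = "westeurope"
}
```

The payoff is versioned, reviewable, reproducible infrastructure definitions rather than clicks in one cloud's portal, which is where external tools tend to earn their keep.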
https://redd.it/fn71au
@r_devops
Need guidance
Hello guys O/
First of all, this is an **amazing** subreddit, I've been casually following this without an account for a while, but I want to get serious now.
I work as a Support Engineer at a SaaS company and want to switch towards DevOps/SRE. I am looking for tips/resources from the experienced people on here about where I should begin. I've read many threads online, but all of them seem to differ in some way. Currently, I have the following:
• Basic Linux administration
• Intermediate Python, Basic Ruby and web development basics.
• Understanding of Networking (TCP/IP, OSI, etc), and the Cloud in general.
I am thinking of joining Linux Academy and starting a DevOps career path, but since it's a significant investment for me, I would love to hear any suggestions from this community on how I should go about it.
To the **Senior members, Team Leads and Managers**: What are the skills that you're looking for in someone starting in this field? And how likely are you to hire a person switching career paths like mine (assuming the individual has the knowledge but not the experience)?
Your input is highly appreciated :)
https://redd.it/fmzzzr
@r_devops
Help me create a chaos script
The goal is to be able to run this script on a newly created, soon-to-be-configured Linux server, so that alerts can be properly configured.
For example, I want to alert on thresholds of disk-related metrics (queue depth, reads, writes, etc.). What would a script look like that exercises each of these (and other metrics)?
I know there are out-of-the-box settings that cover 80% of the scenarios, but I want to be able to further customize these alerts, since we have a lot of different types of servers used for a lot of different things (we are slowly maturing in our DevOps journey).
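For illustration, a minimal sketch of the disk part (file path and sizes are arbitrary choices): write a burst of chunks with `fsync` after each so write and queue-depth metrics actually move, then read the data back to drive read metrics. CPU and network exercisers would follow the same shape.

```python
import os

def exercise_disk(path="/tmp/chaos_disk.bin", chunks=64, chunk_size=1024 * 1024):
    """Write `chunks` chunks of `chunk_size` bytes, fsyncing after each
    (drives write/queue-depth metrics), then read the file back
    (drives read metrics). Cleans up and returns bytes written."""
    buf = os.urandom(chunk_size)
    with open(path, "wb") as f:
        for _ in range(chunks):
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())   # force real disk I/O, not just page cache
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    size = os.path.getsize(path)
    os.remove(path)
    return size
```

Run it in a loop while watching `iostat -x 1` to confirm which metrics it moves, then tune alert thresholds against that.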
https://redd.it/fn977j
@r_devops
Terraform CI workflow
How do people CI their Terraform repos?
We are about to add a "prod" environment and are looking for a very simple workflow for now (i.e. we don't need any fancy features yet).
Do you use a branch per env? Do you have a conventional master/develop structure, but a folder for each env?
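One common simple answer to the folder-vs-branch question (illustrative, not prescriptive): a single master branch with a folder per environment sharing modules, so CI runs `terraform plan` on pull requests and `terraform apply` on merge:

```
terraform/
├── modules/          # shared modules used by every env
│   └── app/
├── envs/
│   ├── staging/      # its own backend config + tfvars
│   │   └── main.tf
│   └── prod/
│       └── main.tf
```

Branch-per-env tends to drift; folder-per-env keeps one history and makes diffs between environments explicit.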
https://redd.it/fna2oe
@r_devops
Looking to build a highly scalable scheduling service. Would like to get feedback
Would like to know if scheduling up to millions of jobs is a problem for you today. If it is, I would like to know what solutions you use today as well as what you like/dislike about current options.
https://redd.it/fn68fw
@r_devops
QA Engineer -> DevOps. Where do i start?
I've been looking for some advice/tips on where to begin with my own DevOps journey.
I'm a QA Engineer with about 3 years of experience (first job out of college). So far, I feel like I have a pretty good understanding of automation, and now I want to start learning and get my feet wet in DevOps.
I have been learning CI/CD (Bamboo) at work and have started doing a few small releases for some hands-on experience.
I want to learn more about DevOps; where should I begin?
1. How do I get hands-on experience with a demo project? (I do own a few websites where I can practice this.)
2. Should I start looking into certs (Azure or AWS)?
3. What are the most important skills for DevOps starters to learn?
Any advice/tips are welcome.
Stay Safe, thank you for your time!
https://redd.it/fn5um6
@r_devops
Best practice to use cache for Gitlab CI
Hi guys!
What is your best practice for using the cache in GitLab CI with a Node.js app? I've tried a lot of tutorials, but pipeline time has only increased.
Here is how I used it:

```yaml
cache:
  untracked: true
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
```
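A commonly recommended variant (assuming npm; job name is an example) caches npm's download cache keyed on the lockfile and uses `npm ci`, rather than caching `node_modules/` with `untracked: true`, which uploads a large cache on every run:

```yaml
cache:
  key:
    files:
      - package-lock.json   # cache invalidates only when deps change
  paths:
    - .npm/

build:
  script:
    - npm ci --cache .npm --prefer-offline
```

`cache:key:files` requires a reasonably recent GitLab; the win is that unchanged lockfiles reuse the cache instead of re-uploading `node_modules` each pipeline.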
https://redd.it/fnhzux
@r_devops
Flask, uWSGI, Kubernetes: A sanity check
We are working on a distributed system with a backend written in Flask, which we eventually want to run in parallel. We already have a Kubernetes cluster, and we can scale this backend horizontally inside the cluster. Inside the backend pod, we run Flask behind uWSGI and Nginx. The system is not yet in production, so the number of uWSGI processes is set to 2. However, we have had some issues (in particular with health checks) when running multiple instances inside the same pod.
So, my question: Is this multi-level horizontal scaling even sane? Can we reduce the number of instances in each pod to 1 and do all instance scaling in Kubernetes, or would we be wasting resources? If we can reduce the instances, does uWSGI still serve a purpose? I understand that we shouldn't run Flask with the development server, but perhaps there is an alternative to uWSGI meant for running a single instance?
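For reference, the single-instance-per-pod shape many teams land on (a sketch; names and image are made up): one app-server process per container, with all horizontal scaling handled by the Deployment's replica count. Either uWSGI or Gunicorn works as the production WSGI server in that role.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-backend
spec:
  replicas: 3                  # all horizontal scaling happens here
  selector:
    matchLabels: {app: flask-backend}
  template:
    metadata:
      labels: {app: flask-backend}
    spec:
      containers:
        - name: app
          image: registry.example.com/flask-backend:latest
          # container CMD would be something like:
          #   gunicorn --workers 1 --bind 0.0.0.0:8000 app:app
          ports:
            - containerPort: 8000
```

One process per pod also makes liveness/readiness probes unambiguous, which tends to fix the health-check issues described above.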
https://redd.it/fnkc4d
@r_devops
CICD with jenkins inside kubernetes
How can I achieve CI/CD in k8s with Jenkins deployed as a pod inside it? All the resources I've found online show setting up a standalone Jenkins server that can access Kubernetes...
I've built the cluster on AWS using kops, and I want to set up CI/CD with Jenkins inside Kubernetes.
Help would be appreciated.
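The usual route here is the Jenkins Kubernetes plugin: the in-cluster Jenkins pod spawns an ephemeral agent pod per build. A hedged Jenkinsfile sketch (the container image and names are examples):

```groovy
pipeline {
  agent {
    kubernetes {
      // Pod template for a throwaway build agent in the same cluster
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: build
    image: maven:3.6-jdk-11
    command: ['sleep']
    args: ['infinity']
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('build') {
          sh 'mvn -B package'
        }
      }
    }
  }
}
```

Because Jenkins already runs inside the cluster, it reaches the API server via its service account, so no standalone server is needed.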
https://redd.it/fnj71v
@r_devops
Packer + Ansible + WinRM to create Windows images
Hey, I am trying to create Windows images using Packer and Ansible. I have little experience with both tools and am experiencing some problems. I hope someone can help me.
The error I'm getting is the following:
```
==> openstack: Connected to WinRM!
==> openstack: Provisioning with Ansible...
==> openstack: Executing Ansible: ansible-playbook --extra-vars packer_build_name=openstack packer_builder_type=openstack -i /tmp/packer-provisioner-ansible397519993 /home/ubuntu/winim/2019/ansible/main.yaml --private-key /tmp/ansible-key881940738 --connection packer -vvvv --extra-vars ansible_shell_type=powershell ansible_shell_executable=None
    openstack: ansible-playbook 2.9.6
    openstack: config file = /etc/ansible/ansible.cfg
    openstack: configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
    openstack: ansible python module location = /usr/lib/python2.7/dist-packages/ansible
    openstack: executable location = /usr/bin/ansible-playbook
    openstack: python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
    openstack: Using /etc/ansible/ansible.cfg as config file
    openstack: setting up inventory plugins
    openstack: host_list declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method
    openstack: script declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method
    openstack: auto declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method
    openstack: Parsed /tmp/packer-provisioner-ansible397519993 inventory source with ini plugin
    openstack: [WARNING]: Skipping plugin
    openstack: (/home/ubuntu/.ansible/plugins/connection_plugins/packer.py) as it seems to be
    openstack: invalid: while scanning an alias in "<byte string>", line 9, column 7 did not
    openstack: find expected alphabetic or numeric character in "<byte string>", line 9,
    openstack: column 8
    openstack: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc
    openstack:
    openstack: PLAYBOOK: main.yaml ************************************************************
    openstack: Positional arguments: /home/ubuntu/winim/2019/ansible/main.yaml
    openstack: private_key_file: /tmp/ansible-key881940738
    openstack: become_method: sudo
    openstack: inventory: (u'/tmp/packer-provisioner-ansible397519993',)
    openstack: forks: 5
    openstack: tags: (u'all',)
    openstack: extra_vars: (u'packer_build_name=openstack packer_builder_type=openstack', u'ansible_shell_type=powershell ansible_shell_executable=None')
    openstack: verbosity: 4
    openstack: connection: packer
    openstack: timeout: 10
    openstack: 1 plays in /home/ubuntu/winim/2019/ansible/main.yaml
    openstack:
    openstack: PLAY [Start of Ansible playbook] ***********************************************
    openstack:
    openstack: TASK [Gathering Facts] *********************************************************
    openstack: task path: /home/ubuntu/winim/2019/ansible/main.yaml:1
    openstack: The full traceback is:
    openstack: Traceback (most recent call last):
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 146, in run
    openstack:     res = self._execute()
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 601, in _execute
    openstack:     self._connection = self._get_connection(variables=variables, templar=templar)
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 895, in _get_connection
    openstack:     ansible_playbook_pid=to_text(os.getppid())
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 558, in get
    openstack:     self._load_config_defs(name, self._module_cache[path], path)
    openstack:   File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 293, in _load_config_defs
    openstack:     dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
    openstack:   File "/home/ubuntu/.local/lib/python2.7/site-packages/yaml/constructor.py", line 74, in get_single_data
    openstack:     node = self.get_single_node()
    openstack:   File "ext/_yaml.pyx", line 707, in _yaml.CParser.get_single_node (ext/_yaml.c:10484)
    openstack:   File "ext/_yaml.pyx", line 725, in _yaml.CParser._compose_document (ext/_yaml.c:10831)
    openstack:   File "ext/_yaml.pyx", line 776, in _yaml.CParser._compose_node (ext/_yaml.c:11813)
    openstack:   File "ext/_yaml.pyx", line 890, in _yaml.CParser._compose_mapping_node (ext/_yaml.c:13717)
    openstack:   File "ext/_yaml.pyx", line 732, in _yaml.CParser._compose_node (ext/_yaml.c:10932)
    openstack:   File "ext/_yaml.pyx", line 905, in _yaml.CParser._parse_next_event (ext/_yaml.c:13923)
    openstack: ScannerError: while scanning an alias
    openstack:   in "<byte string>", line 9, column 7
    openstack: did not find expected alphabetic or numeric character
    openstack:   in "<byte string>", line 9, column 8
    openstack: fatal: [default]: FAILED! => {
    openstack:     "msg": "Unexpected failure during module execution.",
    openstack:     "stdout": ""
    openstack: }
```
Below is my Ansible playbook:
```yaml
- name: Start of Ansible playbook
  hosts: all
  tasks:
    - name: Pingerdeping
      win_ping:
        data: crash
```
Below is my Packer JSON file (private information removed and shown as <some\_text>):
`{`
`  "variables": {`
`    "os_username": "{{env \`OS_USERNAME\`}}",`
`    "os_tenantid": "{{env \`OS_PROJECT_ID\`}}",`
`    "os_domainname": "{{env \`OS_USER_DOMAIN_NAME\`}}",`
`    "creator": "{{env \`USER\`}}",`
`    "av_zone": "<some_zone>",`
`    "flavor": "<some_flavor>",`
`    "security_groups": "allow-all",`
`    "network": "<some_network>",`
`    "source_image": "<some_image>",`
`    "instance_build": "windows_2019_std_base_packer_builder-{{isotime \"02-Jan-06 03:04:05\"}}",`
`    "dest_image": "windows_2019_std_base_packer {{isotime \"02-Jan-06 03:04:05\"}}"`
`  },`
`  "provisioners": [`
`    {`
`      "type": "ansible",`
`      "playbook_file": "/home/ubuntu/winim/2019/ansible/main.yaml",`
`      "extra_arguments": [`
`        "--connection",`
`        "packer",`
`        "-vvvv",`
`        "--extra-vars",`
`        "ansible_shell_type=powershell ansible_shell_executable=None"`
`      ]`
`    }`
`  ],`
`  "builders": [`
`    {`
`      "type": "openstack",`
`      "communicator": "winrm",`
`      "winrm_username": "administrator",`
`      "winrm_use_ssl": true,`
`      "winrm_insecure": true,`
`      "winrm_port": 5986,`
`      "winrm_timeout": "12h",`
`      "domain_name": "{{user \`os_domainname\`}}",`
`      "username": "{{user \`os_username\`}}",`
`      "tenant_id": "{{user \`os_tenantid\`}}",`
`      "identity_endpoint": "<some_endpoint>",`
`      "availability_zone": "{{user \`av_zone\`}}",`
`      "image_name": "{{user \`dest_image\`}}",`
`      "source_image": "{{user \`source_image\`}}",`
`      "networks": "{{user \`network\`}}",`
`      "security_groups": "{{user \`security_groups\`}}",`
`      "flavor": "{{user \`flavor\`}}"`
`    }`
`  ]`
`}`
Hope someone can help me. Thanks in advance.
Cheers
https://redd.it/fnir07
@r_devops
r/devops - Packer + Ansible + WinRM to create Windows images
Ansible 101 Streaming Series by Jeff Geerling on YouTube
u/geerlingguy continues to be awesome. He will host a one-hour live stream every week, starting this Wednesday at 3 p.m. UTC, working through Ansible for DevOps on YouTube.
[https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube](https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube)
**Credit**
Just found out I can't crosspost a post with links to this community. Original post was on r/ansible [https://www.reddit.com/r/ansible/comments/fn3sfg/ansible\_101\_by\_jeff\_geerling\_new\_series\_on\_youtube/](https://www.reddit.com/r/ansible/comments/fn3sfg/ansible_101_by_jeff_geerling_new_series_on_youtube/?utm_source=share&utm_medium=web2x)
https://redd.it/fnefmw
@r_devops
Has anyone here set up Minikube before? Is it easy to install offline?
Sorry if this seems like a basic question, but we're currently working with Docker Swarm for local development, and I've been tasked with bringing Minikube across and installing it to test out its features and work out the install pains. I work on an air-gapped network with no internet connection, so I'm trying to find the best way to bring it across so we can install it without issues.
My end goal is to have all the files I need on our secure network, so that when we deploy a new CentOS VM via Ansible for development, we can simply have a playbook that runs the commands to set up and install Minikube locally on that VM for developers to start using.
My questions are basically:
1. Can I simply download the binary as outlined [here](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-minikube-via-direct-download) and run the install command on our secure network, and that will set up everything? Or does it require internet access to download additional packages/libraries during installation?
2. If the answer to 1 is that it requires internet access, how can I solve this? Will I need to download the source and build it locally first, and then have someone package it and bring that version across to our secure network? Their (sparse) offline documentation refers to a [disk cache](https://minikube.sigs.k8s.io/docs/reference/disk_cache/) where it stores all downloaded information, but I don't understand how I can use this to achieve my goal.
I should also note I am just a developer and not a DevOps engineer, so please bear with me if I am missing any obvious solutions.
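On question 1: as far as I understand, the binary alone is not enough, because on first `minikube start` minikube downloads Kubernetes components and container images, which is exactly what the disk cache under `~/.minikube/cache` holds. A common approach is to run it once on a connected machine and carry that cache across. A hypothetical Ansible playbook sketch, assuming the binary and cache have already been staged on the controller (the paths and the `dev_user` variable are made up for illustration):

```yaml
# Hypothetical sketch: install a pre-staged minikube on an air-gapped VM.
# files/minikube-linux-amd64 and files/minikube-cache/ are assumed to have
# been populated on an internet-connected machine and copied to the
# Ansible controller; dev_user is an illustrative variable.
- name: Install minikube offline on CentOS dev VMs
  hosts: dev_vms
  become: true
  tasks:
    - name: Install the pre-downloaded minikube binary
      copy:
        src: files/minikube-linux-amd64
        dest: /usr/local/bin/minikube
        mode: "0755"

    - name: Seed the minikube download cache
      copy:
        src: files/minikube-cache/
        dest: "/home/{{ dev_user }}/.minikube/cache/"
        owner: "{{ dev_user }}"
        group: "{{ dev_user }}"
```

This is only a sketch of the general pattern; whether the cache covers everything `minikube start` needs depends on the minikube version and driver in use.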
https://redd.it/fnizhe
@r_devops
Cloud-based Virtual Desktops on Google Cloud Platform
I recently spent some time getting cloud-based virtual desktops running on Google Cloud Platform via:
- OS Login (GSuite authentication to the instance instead of SSH)
- Chrome Remote Desktop
I hope this helps anyone tasked with providing virtual desktops for remote working!
https://github.com/VJftw/cloud-desktops
https://redd.it/fntzof
@r_devops