Reddit DevOps
Flask, uWSGI, Kubernetes: A sanity check

We are working on a distributed system with a backend written in Flask. We eventually want to run this in parallel. We already have a Kubernetes cluster, and we can scale this backend horizontally inside the cluster. Inside the backend pod, we run Flask behind uWSGI and Nginx. The system is not yet in production, so the number of uWSGI processes is set to 2. However, we have had some issues (in particular with health checks) when running multiple instances inside the same pod.

So, my question: is this multi-point horizontal scaling even sane? Can we reduce the number of instances in each pod to 1 and do all instance scaling in Kubernetes, or would we be wasting resources? If we can reduce the instances, does uWSGI still serve a purpose? I understand that we shouldn't run Flask with the development server, but perhaps there is an alternative to uWSGI meant for running it as a single instance?
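For what it's worth, one worker per pod with scaling handled by Kubernetes replicas is a common pattern, and uWSGI still earns its keep as a production-grade WSGI server even with a single worker (Gunicorn is a popular alternative). A minimal sketch of such a config; the module name and port are hypothetical:

```ini
; uwsgi.ini -- single-worker sketch; horizontal scaling via Kubernetes replicas
[uwsgi]
module = app:app          ; hypothetical Flask module:object
master = true
processes = 1             ; one worker per pod
threads = 2
http = 0.0.0.0:8000       ; or a unix socket behind Nginx
die-on-term = true        ; shut down cleanly on SIGTERM from Kubernetes
```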

https://redd.it/fnkc4d
@r_devops
CICD with jenkins inside kubernetes

How can I achieve CI/CD in k8s with Jenkins deployed as a pod inside it? All the resources I've found online show setting up a standalone Jenkins server that can access Kubernetes...

I've built the cluster on AWS using Kops and I want to set up CI/CD with Jenkins inside Kubernetes.

Help would be appreciated.
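For reference, one common way to run Jenkins itself inside the cluster is the official Helm chart, which deploys the controller as a pod and, via the Kubernetes plugin, launches build agents as pods using an in-cluster service account. A sketch assuming Helm 3 (release and namespace names are arbitrary):

```shell
# Add the official Jenkins chart repo and install into its own namespace
helm repo add jenkins https://charts.jenkins.io
helm repo update
kubectl create namespace jenkins
helm install jenkins jenkins/jenkins --namespace jenkins
# The chart sets up a service account and RBAC so the Kubernetes plugin
# can spawn agent pods in-cluster without extra credentials.
```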

https://redd.it/fnj71v
@r_devops
Packer + Ansible + WinRM to create Windows images


Hey, I am trying to create Windows images using Packer and Ansible. I have little experience with both tools and am running into some problems. I hope someone can help me.

The error I'm getting is the following:

`==> openstack: Connected to WinRM!`
`==> openstack: Provisioning with Ansible...`
`==> openstack: Executing Ansible: ansible-playbook --extra-vars packer_build_name=openstack packer_builder_type=openstack -i /tmp/packer-provisioner-ansible397519993 /home/ubuntu/winim/2019/ansible/main.yaml --private-key /tmp/ansible-key881940738 --connection packer -vvvv --extra-vars ansible_shell_type=powershell ansible_shell_executable=None`
`openstack: ansible-playbook 2.9.6`
`openstack: config file = /etc/ansible/ansible.cfg`
`openstack: configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']`
`openstack: ansible python module location = /usr/lib/python2.7/dist-packages/ansible`
`openstack: executable location = /usr/bin/ansible-playbook`
`openstack: python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]`
`openstack: Using /etc/ansible/ansible.cfg as config file`
`openstack: setting up inventory plugins`
`openstack: host_list declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method`
`openstack: script declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method`
`openstack: auto declined parsing /tmp/packer-provisioner-ansible397519993 as it did not pass its verify_file() method`
`openstack: Parsed /tmp/packer-provisioner-ansible397519993 inventory source with ini plugin`
`openstack: [WARNING]: Skipping plugin`
`openstack: (/home/ubuntu/.ansible/plugins/connection_plugins/packer.py) as it seems to be`
`openstack: invalid: while scanning an alias in "<byte string>", line 9, column 7 did not`
`openstack: find expected alphabetic or numeric character in "<byte string>", line 9,`
`openstack: column 8`
`openstack: Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc`
`openstack:`
`openstack: PLAYBOOK: main.yaml ************************************************************`
`openstack: Positional arguments: /home/ubuntu/winim/2019/ansible/main.yaml`
`openstack: private_key_file: /tmp/ansible-key881940738`
`openstack: become_method: sudo`
`openstack: inventory: (u'/tmp/packer-provisioner-ansible397519993',)`
`openstack: forks: 5`
`openstack: tags: (u'all',)`
`openstack: extra_vars: (u'packer_build_name=openstack packer_builder_type=openstack', u'ansible_shell_type=powershell ansible_shell_executable=None')`
`openstack: verbosity: 4`
`openstack: connection: packer`
`openstack: timeout: 10`
`openstack: 1 plays in /home/ubuntu/winim/2019/ansible/main.yaml`
`openstack:`
`openstack: PLAY [Start of Ansible playbook] ***********************************************`
`openstack:`
`openstack: TASK [Gathering Facts] *********************************************************`
`openstack: task path: /home/ubuntu/winim/2019/ansible/main.yaml:1`
`openstack: The full traceback is:`
`openstack: Traceback (most recent call last):`
`openstack: File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 146, in run`
`openstack: res = self._execute()`
`openstack: File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 601, in _execute`
`openstack: self._connection = self._get_connection(variables=variables, templar=templar)`
`openstack: File "/usr/lib/python2.7/dist-packages/ansible/executor/task_executor.py", line 895, in _get_connection`
`openstack: ansible_playbook_pid=to_text(os.getppid())`
`openstack: File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 558, in get`
`openstack: self._load_config_defs(name, self._module_cache[path], path)`
`openstack: File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 293, in _load_config_defs`
`openstack: dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()`
`openstack: File "/home/ubuntu/.local/lib/python2.7/site-packages/yaml/constructor.py", line 74, in get_single_data`
`openstack: node = self.get_single_node()`
`openstack: File "ext/_yaml.pyx", line 707, in _yaml.CParser.get_single_node (ext/_yaml.c:10484)`
`openstack: File "ext/_yaml.pyx", line 725, in _yaml.CParser._compose_document (ext/_yaml.c:10831)`
`openstack: File "ext/_yaml.pyx", line 776, in _yaml.CParser._compose_node (ext/_yaml.c:11813)`
`openstack: File "ext/_yaml.pyx", line 890, in _yaml.CParser._compose_mapping_node (ext/_yaml.c:13717)`
`openstack: File "ext/_yaml.pyx", line 732, in _yaml.CParser._compose_node (ext/_yaml.c:10932)`
`openstack: File "ext/_yaml.pyx", line 905, in _yaml.CParser._parse_next_event (ext/_yaml.c:13923)`
`openstack: ScannerError: while scanning an alias`
`openstack: in "<byte string>", line 9, column 7`
`openstack: did not find expected alphabetic or numeric character`
`openstack: in "<byte string>", line 9, column 8`
`openstack: fatal: [default]: FAILED! => {`
`openstack: "msg": "Unexpected failure during module execution.",`
`openstack: "stdout": ""`
`openstack: }`

Below is my Ansible YAML file:

`- name: Start of Ansible playbook`
`  hosts: all`
`  tasks:`
`    - name: Pingerdeping`
`      win_ping:`
`        data: crash`

Below is my Packer JSON file (private information is removed and displayed as <some\_text>):

`{`
`  "variables": {`
`    "os_username": "{{env \`OS_USERNAME\`}}",`
`    "os_tenantid": "{{env \`OS_PROJECT_ID\`}}",`
`    "os_domainname": "{{env \`OS_USER_DOMAIN_NAME\`}}",`
`    "creator": "{{env \`USER\`}}",`
`    "av_zone": "<some_zone>",`
`    "flavor": "<some_flavor>",`
`    "security_groups": "allow-all",`
`    "network": "<some_network>",`
`    "source_image": "<some_image>",`
`    "instance_build": "windows_2019_std_base_packer_builder-{{isotime \"02-Jan-06 03:04:05\"}}",`
`    "dest_image": "windows_2019_std_base_packer {{isotime \"02-Jan-06 03:04:05\"}}"`
`  },`
`  "provisioners": [`
`    {`
`      "type": "ansible",`
`      "playbook_file": "/home/ubuntu/winim/2019/ansible/main.yaml",`
`      "extra_arguments": [`
`        "--connection",`
`        "packer",`
`        "-vvvv",`
`        "--extra-vars",`
`        "ansible_shell_type=powershell ansible_shell_executable=None"`
`      ]`
`    }`
`  ],`
`  "builders": [`
`    {`
`      "type": "openstack",`
`      "communicator": "winrm",`
`      "winrm_username": "administrator",`
`      "winrm_use_ssl": true,`
`      "winrm_insecure": true,`
`      "winrm_port": 5986,`
`      "winrm_timeout": "12h",`
`      "domain_name": "{{user \`os_domainname\`}}",`
`      "username": "{{user \`os_username\`}}",`
`      "tenant_id": "{{user \`os_tenantid\`}}",`
`      "identity_endpoint": "<some_endpoint>",`
`      "availability_zone": "{{user \`av_zone\`}}",`
`      "image_name": "{{user \`dest_image\`}}",`
`      "source_image": "{{user \`source_image\`}}",`
`      "networks": "{{user \`network\`}}",`
`      "security_groups": "{{user \`security_groups\`}}",`
`      "flavor": "{{user \`flavor\`}}"`
`    }`
`  ]`
`}`

Hope someone can help me. Thanks in advance.

Cheers

https://redd.it/fnir07
@r_devops
Ansible 101 Streaming Series by Jeff Geerling on YouTube

u/geerlingguy continues to be awesome. Starting this Wednesday at 3 p.m. UTC, he will host a one-hour live-stream every week going through Ansible for DevOps on YouTube.

[https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube](https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube)

&#x200B;

**Credit**

Just found out I can't crosspost a post with links to this community. Original post was on r/ansible [https://www.reddit.com/r/ansible/comments/fn3sfg/ansible\_101\_by\_jeff\_geerling\_new\_series\_on\_youtube/](https://www.reddit.com/r/ansible/comments/fn3sfg/ansible_101_by_jeff_geerling_new_series_on_youtube/?utm_source=share&utm_medium=web2x)

https://redd.it/fnefmw
@r_devops
Has anyone here setup Minikube before? Is it easy to install offline?

Sorry if this seems like a basic question, but we're currently working with Docker Swarm for local development, and I've been tasked with bringing across and installing Minikube to test out its features and work out the install pains. I work on an air-gapped network with no internet connection, so I'm trying to find the best way to bring it across so we can install it without issues.

My end goal here is to have all the files I need on our secure network, so when we deploy a new CentOS VM via Ansible for development, we can simply have a playbook that runs the commands to set up and install Minikube locally on that VM for developers to start using.

My questions are basically:

1. Can I simply download the binary as outlined [here](https://kubernetes.io/docs/tasks/tools/install-minikube/#install-minikube-via-direct-download) and run the install command on our secure network and that will setup everything, or does it require internet access to download additional packages/libraries when installing?
2. If the answer to 1 is that it does require internet, how can I solve this? Will I need to download the source, build locally, and have someone package it and bring that version across to our secure network? Their (sparse) offline documentation refers to a [disk cache](https://minikube.sigs.k8s.io/docs/reference/disk_cache/) where it stores all downloaded artifacts, but I don't understand how I can use this to achieve my goals
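On question 1: the binary alone is usually not enough, since the first `minikube start` downloads VM/container images and Kubernetes components into `~/.minikube/cache`, which is what that disk-cache page refers to. One hedged approach (a sketch, assuming you can run the same minikube version once on an internet-connected machine and the drivers match) is to warm the cache online and carry it across:

```shell
# On an internet-connected machine with the same minikube version:
minikube start              # populates ~/.minikube/cache with ISOs/images
minikube delete
tar czf minikube-cache.tar.gz -C "$HOME" .minikube/cache

# After copying the binary and the tarball across the air gap:
sudo install minikube-linux-amd64 /usr/local/bin/minikube
tar xzf minikube-cache.tar.gz -C "$HOME"
minikube start              # should now read from the local cache instead of downloading
```

Whether this covers everything depends on the driver you use (the KVM/VirtualBox ISO and the kubeadm images are cached, but driver packages themselves must be installed separately), so treat it as a starting point to verify.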

I should also note I am just a developer and not a DevOps engineer, so please bear with me if I am missing any obvious solutions here.

https://redd.it/fnizhe
@r_devops
Cloud-based Virtual Desktops on Google Cloud Platform

I recently spent some time getting Cloud-based virtual desktops running on Google Cloud Platform via:

- OS Login (GSuite authentication to the instance instead of SSH)
- Chrome Remote Desktop

I hope this helps anyone tasked with providing virtual desktops for remote working!

https://github.com/VJftw/cloud-desktops

https://redd.it/fntzof
@r_devops
Terraform and Octopus Deploy

Hello all!

I'm currently trying to improve my DevOps knowledge and was trying to think of a project to implement Terraform with Octopus Deploy.

I have set up a repo with the source code of a Node.js application. It has a webhook that listens for changes and builds a package that is then pushed to Octopus Deploy, but I am unsure how to move forward. I understand Octopus Deploy can push packages to the dev, testing, and production environments, but I'm uncertain how to implement this. Would it be possible to create a Terraform template that automates setting up these environments for me in my CI/CD pipeline, and then push the package to them?
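On the Terraform side: there is a community `octopusdeploy` Terraform provider that can declare environments, so the environments themselves can live in code. A heavily hedged sketch, since the provider's exact resource names and arguments vary by version (the server URL is a placeholder):

```hcl
# Sketch only: check the octopusdeploy provider docs for the exact schema.
provider "octopusdeploy" {
  address = "https://my-octopus.example.com"   # hypothetical server URL
  api_key = var.octopus_api_key
}

resource "octopusdeploy_environment" "dev" {
  name = "Dev"
}

resource "octopusdeploy_environment" "test" {
  name = "Testing"
}

resource "octopusdeploy_environment" "production" {
  name = "Production"
}
```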

Thanks for any help!

https://redd.it/fniiip
@r_devops
Any ideas on how to release AMIs or Azure Managed Images to customers?

How is everyone releasing/managing AMIs or Azure Managed Images to their customers?

https://redd.it/fnua0r
@r_devops
Does Kubernetes restart failed resources with kind: Pod automatically, or must they be managed by a controller like a Deployment to maintain the desired state?
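For context: the kubelet will restart the containers of a bare `kind: Pod` in place according to its `restartPolicy` (default `Always`), but if the pod is evicted or its node dies, nothing recreates it. A controller such as a Deployment maintains the desired replica count and reschedules pods elsewhere. A minimal sketch (name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                       # placeholder name
spec:
  replicas: 2                       # desired state the controller maintains
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myorg/myapp:latest # placeholder image
```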



https://redd.it/fnr0cj
@r_devops
Google SRE-SE Interview

I have a 15-minute phone interview with Google for an SRE-SE role, and I have been asked to study networking, Linux, data structures, and algorithms. What is the best way to prepare, considering I have only 4 days?

https://redd.it/fnq1pu
@r_devops
Security applications that can be added to Atlantis Terraform relatively easily?

Basically what the title says.

I work on a small, relatively new, underfunded InfoSec team, looking to expand security into our Atlantis pipeline on a limited budget. After doing some research, there are a lot of near-duplicate code-review/security-vulnerability-scanning apps, so I'm curious if anyone uses a specific one in conjunction with Atlantis, or can offer some guidance on where to look. Thanks!

https://redd.it/fnr4fk
@r_devops
CI builds for Windows and macOS

I am trying to do desktop builds for macOS and Windows. I am currently trying Jenkins, but I wanted to know: what are other people using for this?

https://redd.it/fnq6gv
@r_devops
Deployment workflow for multiple Kubernetes clusters

As a DevOps engineer I am currently maintaining a large website of an insurance company. At the moment we are in the migration phase of the whole application stack into a Kubernetes cluster.

More specifically, I am talking about several clusters. The environments for Dev, Testing, and Production are each deployed in a separate cluster.

Each Git branch is deployed into its own cluster (development => dev, stage => testing, master => production, feature-1 => dev-f1).

Additionally, more clusters for load testing and for new developments of (large) features will be set up.

Currently, I use [Buddy](https://buddy.works/) as my CI/CD tool. I have set up several pipelines to build the Docker images; additionally, there is one deployment pipeline per environment and application. As you can imagine, this quickly adds up to a considerable number of different pipelines.

To deploy the Docker image to the correct Kubernetes cluster, I check the current branch with a shell script and then set the commit ID in a variable (e.g. `USER_SERVICE_IMAGE_DEV`, `USER_SERVICE_IMAGE_TEST`, `USER_SERVICE_IMAGE_PRODUCTION`). Unfortunately, the variables cannot be created dynamically, so I need to manually create a new variable whenever a new Git branch is added.
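One way to tame the per-branch variable sprawl is to derive the variable name from the branch instead of hand-maintaining one per cluster. A shell sketch; the `USER_SERVICE_IMAGE_*` names follow the convention above, while the `feature-*` mapping is an assumption:

```shell
# Map a Git branch name to the image-variable name used by the pipeline.
image_var_for_branch() {
  case "$1" in
    master)      echo "USER_SERVICE_IMAGE_PRODUCTION" ;;
    stage)       echo "USER_SERVICE_IMAGE_TEST" ;;
    development) echo "USER_SERVICE_IMAGE_DEV" ;;
    feature-*)   # e.g. feature-1 -> USER_SERVICE_IMAGE_DEV_1 (assumed convention)
                 suffix=$(printf '%s' "${1#feature-}" | tr 'a-z-' 'A-Z_')
                 echo "USER_SERVICE_IMAGE_DEV_${suffix}" ;;
    *)           echo "unknown branch: $1" >&2; return 1 ;;
  esac
}

# Usage: image_var_for_branch "$(git rev-parse --abbrev-ref HEAD)"
```

This keeps the branch-to-variable logic in one place, so adding a cluster only means extending the case statement rather than editing every build script.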

I then use this variable to build the Docker Image and push it into the Docker Registry.

In the build pipeline (which I run separately) I read the variable again to load the current image and deploy the corresponding version to Kubernetes.

I started with this method to quickly start provisioning the Kubernetes clusters, but now I realize that the management of the different branches, clusters and pipelines becomes very complex.

As soon as a new cluster is set up, I have to adjust the build scripts to account for the new git branch.

Do you have a similar setup in your environment? What do your CI/CD processes look like? Are there any tools that could improve my workflow?

https://redd.it/fnfooh
@r_devops
This Week In DevOps

Google Cloud Next was just postponed "until further notice". Does anyone have an interest in online conferences focused on DevOps?

Other announcements were fairly light this week but some preview releases went out and we did have a new Terraform Provider announcement from Hashicorp. To read more check out: [https://thisweekindevops.com/2020/03/23/weekly-roundup-march-23rd-2020/](https://thisweekindevops.com/2020/03/23/weekly-roundup-march-23rd-2020/)

https://redd.it/fnzw58
@r_devops
Help with Jenkins and 'npm test'

Hello.

I am trying to run npm test on a Jenkins pipeline, but as soon as it tries to run, I get an error message saying "Cannot find module ./env.js". Any ideas as to what is going on? I've been stuck on this for weeks now.

Thanks.

https://redd.it/fnwvrb
@r_devops
Need Recommendation for Secrets Management

My company has several pieces of data containing sensitive information that our employees use on a regular basis. It's not gigabytes of data, just a few spreadsheets' worth. We want to isolate each "document" of data, which are of the following types:

* Server Info
* Username/PW for Customer Administration websites
* Spreadsheets with contact details, contract details, etc.

Additionally, we would like to use the same solution as a credentials manager for our users, so plugins for Chrome and Firefox are a must.

Currently I am leaning towards LastPass because it allows me to do all of this.

Other features we need:

* Data ownership (assign a user to own a Datum)
* Ability to share/deny access to any Data by user
* Ability to immediately revoke access to any Data by user

We are using Azure AD for user management, and it would be great if the solution could use Windows credentials to authenticate the user rather than nagging them for credentials all the time.

We are not married to any vendor or platform. Non-Windows solutions need to have a Docker container we can host on Azure.

Thanks!

https://redd.it/fnw5zl
@r_devops
Ansible 101 by Jeff Geerling - new series on YouTube

Wednesday, March 25, at 10 a.m. US Central (3 p.m. UTC), [Jeff Geerling will be doing a weekly 1-hour live-streaming series, "Ansible 101 with Jeff Geerling."](https://www.jeffgeerling.com/blog/2020/ansible-101-jeff-geerling-new-series-on-youtube)

Twitter [Tweet by Jeff Geerling](https://twitter.com/geerlingguy/status/1241538147126775809?s=19)
Considering adding a weekly livestream “Ansible 101” teaching Ansible automation following the book https://t.co/jk6G0An9gb — would you be interested?

https://redd.it/fnb5iv
@r_devops
Free DevOps Books: You can get DevOps books by Jeff Geerling free for the rest of March 2020

Via Jeff Geerling's post [about free DevOPS eBooks](https://www.jeffgeerling.com/blog/2020/you-can-get-my-devops-books-free-rest-month).

The ongoing coronavirus/COVID-19 pandemic and bear market made author Jeff Geerling realize how beneficial it has been to be adaptable in the tech industry. There are no guarantees in life, and the ability to earn a livelihood is probably the most underrated aspect of overall health. Most people take it for granted until they are deeply affected by it.

He made his two books, Ansible for DevOps and Ansible for Kubernetes, free for anyone who wants to learn a new skillset as a buffer against possible coming layoffs.

[You can get my DevOps books free the rest of this month](https://www.jeffgeerling.com/blog/2020/you-can-get-my-devops-books-free-rest-month)

- [Ansible for DevOps - Leanpub - eBook](https://leanpub.com/ansible-for-devops)

- [Ansible for Kubernetes - Leanpub - eBook](https://leanpub.com/ansible-for-kubernetes)

Thank you, Jeff Geerling!

https://redd.it/fnbblb
@r_devops
How to Run GitlabCI jobs as a user other than root? (Docker as GLCI Runner executor)

I am a student attempting to learn about Gitlab by working on a hobby project so I apologize in advance if this is not the appropriate sub to ask this question. I will try to make this as short as possible.

I have a Digital Ocean droplet configured as a GitlabCI Runner using the Docker executor. I am still learning about Docker as well so I apologize if my problems turn out to be a misunderstanding of Docker rather than GitlabCI.

Essentially I am trying to execute a pipeline job in which the user within the Docker container for that job is NOT root, and I cannot figure out how to achieve this.

The reason running as root within a job's Docker container is a problem is that certain commands will not function correctly when run as root. Attempting to install/configure CocoaPods is apparently one such case, yielding the output below.

I have spent hours trying different shenanigans, including creating a user within the job and attempting to log in as that user, but none of these methods have been successful and the container continues to run in the context of the root user.

Is there a way to run my pipeline jobs as a user other than root, and what are the best practices (or even general practices) regarding this? Once again, I apologize if I misunderstand something about Docker; please let me know if this is the case.

My problem is very similar to the question posed here, however, none of the solutions seemed to solve the problem:

https://stackoverflow.com/questions/48576412/running-gitlab-ci-pipeline-jobs-as-non-root-user
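For what it's worth, with the Docker executor the job script runs as whatever user the image's `USER` directive specifies (root for most stock images), so a common fix is to point the job at a custom image with a non-root user baked in. A sketch; the base image and user name are hypothetical:

```dockerfile
# Hypothetical CI image: tools installed as root, job runs as non-root.
FROM ruby:2.7
RUN gem install cocoapods && useradd --create-home ci
# GitLab's Docker executor runs the job script as the image's USER:
USER ci
WORKDIR /home/ci
```

Then set the job's `image:` to this image; `whoami` in the script should report `ci` rather than `root`.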


/root/.gem/gems/claide-1.0.3/lib/claide/command.rb:439:in `help!': [!] You cannot run CocoaPods as root. (CLAide::Help)

Usage:

    $ pod COMMAND

      CocoaPods, the Cocoa library package manager.

Commands:

    + cache      Manipulate the CocoaPods cache
    + env        Display pod environment
    + init       Generate a Podfile for the current directory
    + install    Install project dependencies according to versions from a Podfile.lock
    + ipc        Inter-process communication
    + lib        Develop pods
    + list       List pods
    + outdated   Show outdated project dependencies
    + repo       Manage spec-repositories
    + setup      Setup the CocoaPods environment
    + spec       Manage pod specs
    + update     Update outdated project dependencies and create new Podfile.lock

Options:

    --silent     Show nothing
    --version    Show the version of the tool
    --verbose    Show more debugging information
    --no-ansi    Show output without ANSI codes
    --help       Show help banner of specified command

    from /root/.gem/gems/cocoapods-1.9.1/lib/cocoapods/command.rb:47:in `run'
    from /root/.gem/gems/cocoapods-1.9.1/bin/pod:55:in `<top (required)>'
    from /root/.gem/bin/pod:23:in `load'
    from /root/.gem/bin/pod:23:in `<main>'

Running after script
00:01
Uploading artifacts for failed job
00:02
ERROR: Job failed: exit code 1



Thank you for your help!

https://redd.it/fnas1v
@r_devops
Team was reOrg'd this year, not getting any direction from Execs/Senior Management, Need advice/guidance/suggestions/etc.

Hi all, reaching out to the community as a last resort of sorts. This year has been a serious struggle for me and has sent me into a spiral of mental health issues, with this topic being a huge part of it, plus being overloaded and forced to stay in the weeds to meet goals/tasks instead of being able to focus on managing. Anyway, any suggestions/advice/guidance on how to handle my situation here would be much appreciated. Thanks in advance!!

So at the end of December last year, Execs decided to reorg my team from Operations to the Development team to attain a better model for feature delivery and improved availability/sustainability of the system. Some of my team's responsibilities are 24x7 sustain support of the core business applications (through a relentless on-call rotation), developing alarms for the service, automating things that weren't, deploying and coordinating updates that were delivered, and integrating/architecting solutions (dev teams don't have a big-picture view of anything).

Previously, due to the different orgs, there were various lines and processes established which enabled some separation of duties. This let Devs throw things over the fence, in a way, which fell to my team to handle (some integrations/architecture/networking/etc.), but it also forced some responsibility onto them for apps that we weren't sustaining/supporting. From an outward perspective, things looked to be running properly internally with a lot of order, but now in the new org I have noticed it's more Wild West, and some take advantage of this to their benefit. This has now led to them trying to force various items onto an already taxed team.

Come to find out, the Execs had no plan and no idea how to integrate my team into the existing org. But they are asking that I come up with a charter, with the only direction being that they want us to adopt more of a Site Reliability Engineering model. This was something I had attempted in the past without much buy-in at the time, so I am optimistic now that they are stating this. But I do not know where or how to start, or what responsibilities should be on my team vs. devs vs. shared. Essentially, how do I break the cycle/spiral of crazy that is driving me down a path of worsening mental illness?

If you made it this far, Thanks for reading!!

https://redd.it/fo7rve
@r_devops