Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
VMware Fusion and Kitchen-CI on Mac

Hi

Is anyone using Kitchen-CI to converge cookbooks on a Windows VM with VMware Fusion (VMF) on macOS Catalina?

I am attempting to migrate from a VirtualBox setup to VMF because VirtualBox crashes the Mac on every reboot or shutdown. I read somewhere that for Kitchen-CI to work with Fusion I needed the [Vagrant VMware plugin](https://www.vagrantup.com/vmware/index.html), which I have bought and installed.

So, I already have a W2012R2 VM (not using vagrant for this) and it's configured on our company domain with a static IP address.

I've also set up a custom NAT network (vmnet2) with NAT and WinRM port forwarding:

```
Host port: 55987
Type: TCP
VM IP address: as configured in the VM
Virtual machine port: 5985
```

In `kitchen.yml`:

```yaml
driver:
  name: vagrant
  host: 127.0.0.1
  reset_command: echo "Starting Test Kitchen."
```

However when I converge I see this error:

```
-----> Starting Test Kitchen (v2.4.0)
-----> Converging <APP-W2012>...
       Preparing files for transfer
       Preparing dna.json
       Resolving cookbook dependencies with Berkshelf 7.0.9...
       Removing non-cookbook files before transfer
       Preparing data_bags
       Preparing environments
       Preparing nodes
       Preparing roles
       Preparing validation.pem
       Preparing client.rb
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>>     Failed to complete #converge action: [password is a required option] on APP-W2012
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
```

The kitchen platform config is as follows:

```yaml
platforms:
  - name: W2012
    driver:
      host: 127.0.0.1
      port: 55987
      guest: windows
    transport:
      name: winrm
      elevated: true
      elevated_username: System
      elevated_password: null
    driver_config:
      gui: true
      box: TCP_W2012
      guest: windows
      username: Administrator  # <<<< as per VM login
      password: ********       # <<<<
      communicator: winrm
```
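The "password is a required option" error comes from the WinRM transport, which does not read the credentials under `driver_config` (those go to Vagrant). One thing worth trying, though I can't confirm it from the post alone, is putting the credentials on the transport itself, since Test Kitchen's WinRM transport accepts `username`/`password` options:

```yaml
transport:
  name: winrm
  username: Administrator
  password: ********   # same placeholder as above; use the VM login password
  elevated: true
```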

Am I missing something or is this not feasible at all?

https://redd.it/gbovpv
@r_devops
Creating a custom Terraform provider

I needed to research how to create a custom provider for my job, so I created a small experiment with a server that provides an API over HTTP and a custom provider that consumes it.

It might be helpful for someone trying to create a custom Terraform provider so here is the code :)

[https://github.com/julianespinel/terraform-custom-provider](https://github.com/julianespinel/terraform-custom-provider)

https://redd.it/gbi79c
@r_devops
How to use Linkerd with Terraform?

Hello,

I am trying to install Linkerd into my cluster using Terraform, but I am always met with the following error after restarting my deployment:

```
time="2020-05-01T18:47:46Z" level=info msg="running version stable-2.7.1"
time="2020-05-01T18:47:46Z" level=info msg="Using with pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"
time="2020-05-01T18:47:46Z" level=info msg="Using with pre-existing CSR: /var/run/linkerd/identity/end-entity/key.p8"
[ 0.13589148s] ERROR linkerd2_app::env: Could not read LINKERD2_PROXY_IDENTITY_TOKEN_FILE: No such file or directory (os error 2)
[ 0.13618327s] ERROR linkerd2_app::env: LINKERD2_PROXY_IDENTITY_TOKEN_FILE="/var/run/secrets/kubernetes.io/serviceaccount/token" is not valid: InvalidTokenSource
Invalid configuration: invalid environment variable
```

Linkerd itself seems to be installed successfully, and `linkerd check` passes every test.

This is my linkerd install in Terraform:

```hcl
data "helm_repository" "linkerd" {
  name = "linkerd"
  url  = "https://helm.linkerd.io/stable"
}

resource "helm_release" "linkerd" {
  name       = "linkerd"
  repository = data.helm_repository.linkerd.metadata[0].name
  chart      = "linkerd/linkerd2"

  set {
    name  = "global.identityTrustAnchorsPEM"
    value = tls_self_signed_cert.trustanchor_cert.cert_pem
  }

  set {
    name  = "identity.issuer.crtExpiry"
    value = tls_locally_signed_cert.issuer_cert.validity_end_time
  }

  set {
    name  = "identity.issuer.tls.crtPEM"
    value = tls_locally_signed_cert.issuer_cert.cert_pem
  }

  set {
    name  = "identity.issuer.tls.keyPEM"
    value = tls_private_key.issuer_key.private_key_pem
  }
}
```

It seems to have something to do with service accounts, but I'm not sure how to go about fixing it. Thanks in advance for any assistance.

EDIT: Looking further into this, it's because the secrets volume is not mounted, although I'm not sure why it wouldn't be mounted. Comparing the output between the default emojivoto app and my deployment, the following mount is missing:

/var/run/secrets/kubernetes.io/serviceaccount from emoji-token-h65v7 (ro)

I see that my deployments have service account tokens, though, so I'm not sure why they are not mounted into the pod.
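One possibility worth ruling out (my assumption, not something confirmed in the post): Kubernetes skips the token volume when automounting is disabled, either on the service account or on the pod spec. The relevant fields look roughly like this, with names invented for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # hypothetical
spec:
  template:
    spec:
      serviceAccountName: my-app          # hypothetical
      automountServiceAccountToken: true  # must not be false here or on the ServiceAccount
```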

https://redd.it/gbo3pc
@r_devops
I am in a shop that doesn’t think about devops but my job involves automation and the tools are moving more towards making devops a priority for our infrastructure. How do I help shift the culture towards a devops mindset?

Officially, I am an "Infrastructure Automation Engineer". Unofficially, I am basically a DevOps engineer. I work for a large organization that is SD based, and everything revolves around tickets and change requests. Agile isn't even whispered around the hallways. I can talk to other teams, but all the main decisions fall into the hands of the directors. It's somewhat maddening when I try to plan out a delivery process for a specific API connection but no one can tell me why that API might be useless in a few months.

We're a VMware shop, and VMware is moving their stuff more towards being cloud agnostic. That's great... but none of the people above me seem to understand how we could streamline our processes, why I want better comms with the dev teams, or why I care about integrating the CI/CD pipeline with our automated infrastructure workflows.

Has anyone been in a similar boat? How did you deal with it?

https://redd.it/gbnqjz
@r_devops
How can I get services to service communication working using Nomad / Consul?

I'm a noob to orchestration and working on learning HashiCorp Nomad since it's evidently a lot simpler than Kubernetes.

I got a cluster up and running, but after reading through the docs and guides I still cannot figure out how to have one service access another.

I see that Consul Connect is used for that, but a lot of that is security related (setting up ACLs, etc.), which I don't need at all. I just want one service to be able to reach another.
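For plain service-to-service reachability without Connect, mTLS or ACLs, it's usually enough to register the service in the job file and resolve it through Consul DNS. A sketch (names and image invented; the exact stanza layout varies by Nomad version):

```hcl
job "api" {
  datacenters = ["dc1"]

  group "api" {
    network {
      port "http" {
        to = 8080
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "example/api:1.0"   # hypothetical image
        ports = ["http"]
      }

      # Registers the task in Consul's service catalog
      service {
        name = "api"
        port = "http"
      }
    }
  }
}
```

Other jobs can then reach it at `api.service.consul` (if Consul DNS forwarding is wired up) or look up the address with a `template` stanza.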

Is there something I'm missing?

https://redd.it/gbhbs5
@r_devops
infrastructure-as-code: yaml/hcl vs general purpose programming framework

Hi Devops!

As the title suggests, what are your preferences and thoughts regarding this? Pros and cons? It would be interesting to hear your thoughts.

I honestly haven't made up my mind on the best approach at the moment. I've been using Terraform and CloudFormation for quite some time (I strongly favour Terraform).

As great as Terraform is, there are always times when I wish I had general-purpose programming constructs to work with, like if/else statements, loops and what not. Terraform has added some features in this regard, but they don't feel 100% natural; it often feels like I'm fighting the DSL.

Recently Pulumi and the AWS CDK have popped up, where instead of a DSL (YAML/HCL) you write JavaScript or your favourite programming language to provision your infra. From my understanding you still get state and resource dependency graphs (the things that make an IaC tool worthwhile).
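For what it's worth, the constructs Terraform has added (`count`, `for_each`, conditionals) do cover many loop/if cases, just in DSL form. A sketch with invented names:

```hcl
variable "buckets" {
  type    = set(string)
  default = ["logs", "backups"]
}

variable "enable_audit" {
  type    = bool
  default = false
}

# Loop: one bucket per name in the set
resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  bucket   = "myorg-${each.key}"   # "myorg" prefix is hypothetical
}

# Conditional: create the resource zero or one times
resource "aws_s3_bucket" "audit" {
  count  = var.enable_audit ? 1 : 0
  bucket = "myorg-audit"
}
```

Whether this feels natural or like fighting the DSL is exactly the trade-off against Pulumi/CDK-style code.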

https://redd.it/gbhv65
@r_devops
Why I got rid of our dev, test, staging and prod environment

Hi Reddit, I wanted to share a process/concept I introduced where I work for how we manage our environments.


I'm sure many of you are aware of the usual dev, test, staging and prod environments, where application changes move through these stages to finally get released to the end user. A problem my team and I had was environment bottleneck: for example, devs would finish a feature but couldn't move it to the next stage because QA were still testing the previous feature in the next environment. Developers would also develop locally, but if they wanted to test on the more production-like dev environment they risked wiping out another dev's current changes, so there were constant Slack messages along the lines of "Can I deploy X to Y" and you hoped someone would reply before you overwrote something you shouldn't have.


We are already a team that embraces infrastructure as code, and our environments were brought up in an automated, consistent manner. The problem was there was a one-to-many relationship between our environment stages and team members.


So, since we can bring up an environment with code, why limit ourselves to four? I called the concept color environments (but really you can use anything that has an essentially infinite pool of options to choose from). Now when we work on a feature we deploy to a random color that isn't already in use, and our stack gets a domain to access it based on that, i.e. "cyan.example.com".


We've been doing this for half a year now and it has drastically changed our development and deployment process for the better.


* Developers can spin up their feature without waiting for an env to be available
* QA can test against a devs color or re-create a new color on their branch
* Our product owner can be given a color env with a feature to review it for as long as they need
* We can do user research and AB testing between colors
* Environment drift is not an issue, as colors don't stay up very long and we always create an env from scratch
* Our deployment to prod is just bringing up a new color and doing a blue-green DNS flip
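The color-selection step can be sketched like this (the pool, the in-use lookup and the base domain are all invented; a real version would query the live environments):

```python
# Sketch: pick a free "color" for a new environment and derive its domain.
import random

COLORS = ["cyan", "magenta", "amber", "teal", "indigo"]

def pick_color(in_use):
    """Return a random color not currently assigned to an environment."""
    free = [c for c in COLORS if c not in in_use]
    if not free:
        raise RuntimeError("color pool exhausted; widen the pool")
    return random.choice(free)

def env_domain(color, base="example.com"):
    """Derive the environment's domain, e.g. cyan.example.com."""
    return f"{color}.{base}"

if __name__ == "__main__":
    color = pick_color({"cyan"})  # pretend cyan is already taken
    print(env_domain(color))
```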


There's a few hurdles we had to overcome, so here are a few of the main ones:


Spinning up an unbounded number of environments can be costly. We're on AWS, so we took advantage of services like Lambda and other serverless offerings to keep costs right down. Our environments are also ephemeral by default: a few days after being brought up they destroy themselves unless configured otherwise (such as prod envs or features that take longer to develop).


We gained extra flexibility with our environments, but that also came with extra complexity and time spent waiting for an environment to be available. The application stack we did this on was fairly small, and we found the sweet spot for getting a new env up from scratch was ~15 minutes: enough time to grab a coffee but not too long, and updates to an env are much quicker once it's up. For that reason I don't recommend this for large application stacks; maybe it could work for part of a stack, such as a microservice that is part of a bigger monolith.


Databases and blue-green flips can be a bit tricky. Luckily, since blue/green deploys are not a new thing, there were a few resources out there to help us with this.


Anyway, that's a quick rundown of the concept; hope it's something interesting. Has anyone else done something similar? If you have any questions about the concept/process, let me know :)

https://redd.it/gbhtk0
@r_devops
Do DevOps and Windows mix?

Honest question: does anyone practicing DevOps actually enjoy working with Windows? Besides using MS-centric languages & frameworks, is there any benefit to running it over Linux?

It seems like more of a hindrance due to the lack of tooling support (Windows brings more costs/licensing, so dev resources usually go to *nix first, meaning less feature parity, or at least more bugs) and lack of flexibility. My Windows knowledge is limited and I've avoided supporting MS software like the plague for a while now, so I'm probably naive about the way things are today.

https://redd.it/gblq1y
@r_devops
Using Docker and Terraform for hermetic AWS Lambda CI/CD

Hi,

I was looking for a good way to make Lambda fit with our existing CI/CD workflows around Terraform and CircleCI when I ran into another problem: the Lambda was a bit more complicated than just the Python files involved. I wrote up our solution and am curious how other people are making these all play together. Using Dockerfiles for Lambda like this gave me hermetic builds with very few lines of code.

[https://medium.com/@cep21/using-docker-and-terraform-for-hermetic-aws-lambda-ci-cd-b57a77dcaaf6](https://medium.com/@cep21/using-docker-and-terraform-for-hermetic-aws-lambda-ci-cd-b57a77dcaaf6)

https://redd.it/gbl44l
@r_devops
Question: What are some simple DevOps rules all web apps and websites should follow?

I.e.

1. never edit the production server directly
2. use certain Git branches for X, Y and Z
3. etc.


I'd love to learn more about this stuff.

https://redd.it/gbn212
@r_devops
Swarmlet - A self-hosted, open-source Platform as a Service based on git and Docker Swarm

Hi r/devops ! I wrote a Heroku/Dokku-like tool for easy app deployment and Docker container orchestration when working with a personal server cluster (it also works fine on a single server).

[https://swarmlet.dev](https://swarmlet.dev/)

[https://github.com/swarmlet/swarmlet](https://github.com/swarmlet/swarmlet)

Swarmlet is a thin wrapper around [Docker Compose](https://docs.docker.com/compose/) and [Docker Swarm mode](https://docs.docker.com/engine/swarm/).
[Traefik](https://github.com/containous/traefik), [Consul](https://www.consul.io/), [Let's Encrypt](https://letsencrypt.org/), [Matomo](https://matomo.org/), [Swarmpit](https://swarmpit.io/) and [Swarmprom](https://github.com/stefanprodan/swarmprom) are included by default.
Swarmlet uses these to provide automatic SSL, load balancing, analytics and various metrics dashboards.

The project is WIP, please let me know if you have any comments or feedback!
Don't hesitate to contact me, this is a **learning project** (a few weeks ago I knew nothing about Docker Swarm mode).
I'm definitely no expert yet, so lots of things to improve.
If you're interested, I'd love to collaborate.

https://redd.it/gbfask
@r_devops
Looking to become a devops team lead

Hello everyone,

I'm thinking of becoming a devops team lead. What are your suggestions, and how did you make the jump?

Thank you

https://redd.it/gbft2f
@r_devops
Getting started with Docker Compose - Video Demo and Companion Repo

Have a look at the video tutorial and clone the companion repo to follow along! This is aimed at beginners to Docker and Compose but those with some experience might find some tips in there as well.

https://www.youtube.com/watch?v=_EV5jLtWX8k

https://redd.it/gbecsq
@r_devops
User Input in order to configure different configuration files for multiple containers

I was successfully able to create a `docker-compose.yml` for the Telegraf-InfluxDB-Grafana-Mosquitto Broker stack.

I need some understanding of how I can request user input before spinning up the containers.

Within the `telegraf.conf` which is a TOML file I need the user to map one-to-one information as follows:

```
sensor_name_1 = meta-data_1
sensor_name_2 = meta-data_2
```
This is part of the [enum processor for telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/processors/enum)
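One low-tech option (an assumption on my part, not from the post): prompt for the mappings in a small pre-start script and render the `[[processors.enum]]` section of `telegraf.conf` before bringing the stack up. The TOML shape below follows the enum processor docs but should be double-checked:

```python
# Sketch: turn user-supplied sensor -> metadata pairs into an
# enum-processor section for telegraf.conf.
from string import Template

SECTION = Template("""[[processors.enum]]
  [[processors.enum.mapping]]
    tag = "sensor"
    [processors.enum.mapping.value_mappings]
$pairs
""")

def render_enum_section(mappings):
    """mappings: dict like {"sensor_name_1": "meta-data_1"}."""
    pairs = "\n".join(f'      {k} = "{v}"' for k, v in mappings.items())
    return SECTION.substitute(pairs=pairs)

if __name__ == "__main__":
    # In practice these would come from input() or a small CLI prompt.
    print(render_enum_section({"sensor_name_1": "meta-data_1",
                               "sensor_name_2": "meta-data_2"}))
```

The rendered file can then be bind-mounted into the Telegraf container from `docker-compose.yml`.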

I looked into Jsonnet, which currently has an unmerged PR regarding TOML support. But I am not sure where Jsonnet would fit in this scenario.

What are other options that I can look into?

https://redd.it/gbeabo
@r_devops
Saving Your Linux Machine From Certain Death

Hi, /r/devops

Today I published my new article on how to troubleshoot and fix some common problems with Linux systems, like recovering the root password or fixing unmountable filesystems, and I think it might be useful for some of you here.

Here is a link:

https://medium.com/better-programming/save-your-linux-machine-from-certain-death-24ced335d969

https://redd.it/gbcwvs
@r_devops
Terraform AWS FIPS provider

Hey guys, this was a royal pain in the ass to type up and I figured you may find it helpful. I've had some compliance requirements, and FIPS 140-2 validated encryption is one of them. I went through the AWS docs and got every AWS FIPS endpoint into the AWS provider.

You could maybe modularize something like this, but I haven't tried setting up a provider in a module, so I'm not sure if that's possible.
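For anyone who hasn't seen it, endpoint overrides live in the provider's `endpoints` block. A two-service sketch (the repo has the full list; verify the FIPS hostnames against the AWS docs for your region):

```hcl
provider "aws" {
  region = "us-east-1"

  # Point individual services at their FIPS endpoints
  endpoints {
    s3  = "https://s3-fips.us-east-1.amazonaws.com"
    kms = "https://kms-fips.us-east-1.amazonaws.com"
  }
}
```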

Here's a blog link: https://blog.kwnetapps.com/terraform-aws-fips-provider/

Here's a link to the github repo: https://github.com/Kaydub00/terraform-aws-fips

Now, there's probably more to meeting these requirements for your org, but if you need to meet these requirements and you're using TF and AWS you'll need this. Granted I've never been asked by an auditor to see this stuff, you may get an auditor who knows their stuff.

https://redd.it/gc9lie
@r_devops
Can't link Git repo on Jenkins

Hello guys,

I'm trying to link my Git repo to Jenkins and I'm getting the following error:

**Failed to connect to repository : Error performing git command: C:\\Program Files\\Git ls-remote -h https://github.com/xxxxxxxxxxxxxx/xxxxxxxxx.git HEAD**

What am I missing here? I started learning DevOps recently and am still practicing a few things. Please excuse me if it's just a silly doubt.


Namaste!

https://redd.it/gcacoy
@r_devops
Container environment variable

I'm running an ECS Fargate Task.

This is my entrypoint script in the Dockerfile, `startapplication.sh`:

```bash
#!/bin/bash
set -e
java -jar test.jar --spring.profiles.active=${envparam}
```

and then in the AWS ECS Fargate web console, under Task Definition > Container definition, I declare the environment variable "envparam" with value "dev".


But `${envparam}` in the shell script is not resolved when the container launches. What is the issue?

https://redd.it/gc93ir
@r_devops
What are all the alternatives to Jenkins?

Current technical stack: Python, Java, Scala and NodeJS.
Please suggest commercial alternatives that are easy to set up and maintain.

https://redd.it/gc6ylv
@r_devops
Good tool for monitoring .NET app/infrastructure on Azure

I've been using Zabbix in Linux for years and I'm now tasked with adding monitoring to a web app with a .net backend that is hosted in Azure. What are people using to monitor their .net/Azure apps? Ideally it would be opensource or not too expensive and support the standard system metrics such as CPU, mem, storage, I/O and then custom app metrics.

Thanks ahead of time

https://redd.it/gc71gh
@r_devops
Creating a premade work environment similar to Docker

Sorry if I am posting this in the wrong place. I just learned about Docker and started setting it up for my work projects, and everything is working great. After learning about it, I was wondering if there was something similar that could be implemented for our development team. The issue we run into whenever we hire a new developer or get a computer upgrade is that we manually have to go through and install everything needed to set up these computers: GitHub, PuTTY, SSH keys, virtualenv, etc. Is there a way to do the same kind of thing Docker does, but in a regular format? By that I mean without Docker itself and containers, since all our work computers are Windows based.

In the simplest terms, I am looking to do something like Ninite, but not as fancy: no UI, just a script. I know I can do it with a bash script; I just wanted to check if there were other ways before I take that approach.

https://redd.it/gca7sy
@r_devops