Reddit DevOps
268 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
I am looking for some beginner/intermediate GitLab CI/CD pipeline guides

Hello,

I am currently hoping to transfer to a new DevOps team that is starting from scratch (something like an internship, where performance will be evaluated based on results).

From what I understand, the basics are to build up a pipeline that does the build, runs tests, and pushes to production. But I don't know how, and I can't find any good sources on doing this.

Note: I still haven't checked the pinned books in the subreddit, but I think they are a very good start on my path to DevOps.

Any help will be appreciated, thanks :)

https://redd.it/kt62nj
@r_devops
Logging in to GitHub from the terminal

Does anybody know the git commands to log in to GitHub from the terminal? Kind of lost here because nothing is saving.
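Not part of the original post, but since the complaint is that credentials aren't saving, here is a minimal sketch of the usual fix, assuming HTTPS remotes:

```shell
# Tell git to remember HTTPS credentials after the next successful push/pull.
# 'store' saves them (in plain text) to ~/.git-credentials:
git config --global credential.helper store
# A cache-in-memory alternative: git config --global credential.helper 'cache --timeout=3600'
# Note: GitHub expects a personal access token where it used to take your account password.
```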

https://redd.it/kt9ieo
@r_devops
Question about pull request CI strategy

I'm working at a company where all CI procedures are determined by the DevOps team, mostly without asking the developers. We have tests covering almost every part of our applications.

About a week ago one of my pull request branches couldn't pass CI, and I realized that the production branch is being merged into my PR (feature) branch before the CI procedures run.

There were no CI issues on the master branch. However, when it gets merged into my PR branch, some CI checks fail. After digging into the errors, I found a non-standard configuration in the production branch that conflicts with my PR, even though my implementation is correct (it follows the RFCs and documentation).

Now it is my branch that is failing. I asked DevOps not to merge the production branch before running CI, because I want my PR branch to be tested in isolation, without changes from the production branch.

However, DevOps declined, saying: "This is how it works; otherwise we can't guarantee production branch stability if we don't merge it into the PR branch before the CI procedures."

While I see that the DevOps argument is valid, I'm still not convinced that CI should run with the production branch merged into the PR branch.

So what does your CI procedure look like in terms of PR branch testing, and do you think what our DevOps team is doing is legitimate?

https://redd.it/ktf7r0
@r_devops
Never bend the rules in an effort to prop up a fundamentally flawed system. Instead, follow the rules to the letter in order to expose the systemic problems. It's the only way to bring about change.

Just thought I'd toss that one out there, as I am sure some of you are enduring a serious mess right now.

https://redd.it/ktck69
@r_devops
Azure Pipelines: variables between tasks

Hey,
I'm really struggling with passing variables between tasks and hope to find some help.

I have one PowerShell task which parses a YAML file and extracts a variable. I'm setting the variable like this:
Write-Host("##vso[task.setvariable variable=APPLICATION_NAME;isOutput=true]$tempApplicationName")

where $tempApplicationName holds the parsed YAML value I'm looking for. I have debugged it to make sure it has a value.

In the next task, I have a Kubernetes task, which takes inputs:

- task: Kubernetes@0
  condition: succeeded()
  displayName: "Waiting for rollout"
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: ${{parameters.kubernetesServiceEndpoint}}
    namespace: ${{parameters.namespace}}
    command: rollout
    arguments: status deployment/$(APPLICATIONNAME) -n ${{parameters.namespace}}

Resources I have been looking at:
https://medium.com/microsoftazure/how-to-pass-variables-in-azure-pipelines-yaml-tasks-5c81c5d31763

${{ variables['APPLICATIONNAME'] }} // compile time, so I know it won't work
$[ variables['APPLICATION_NAME'] ] // runtime, but never worked either
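For context (not from the original post): with isOutput=true, Azure DevOps requires the setting step to have a name:, and later steps in the same job reference the variable as $(<stepName>.<variableName>). A minimal sketch, where the step name parseYaml is hypothetical:

```yaml
steps:
  - powershell: |
      $tempApplicationName = "my-app"  # stand-in for the value parsed from the YAML file
      Write-Host "##vso[task.setvariable variable=APPLICATION_NAME;isOutput=true]$tempApplicationName"
    name: parseYaml  # required: the step name becomes the variable prefix

  - script: echo "deployment/$(parseYaml.APPLICATION_NAME)"
    displayName: "Use the output variable"
```

Without the name: on the setting step, an isOutput variable has no prefix to resolve under, so plain $(APPLICATION_NAME) stays empty at runtime.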

Any ideas?

https://redd.it/kt246j
@r_devops
Running BOINC (voluntarily contributing compute resources to science) when my servers have <50% CPU load in spare hours: has anyone done this? Is it good or bad (e.g. does it harm server stability, or is it environmentally unfriendly)?


https://redd.it/kt0kom
@r_devops
Simple management of multiple AWS accounts with AssumeRole

As we were struggling with simple management of multiple AWS accounts for our team (dev, beta, production), we decided to use AssumeRole for its simplicity:

https://medium.com/russmediaequitypartners/simplifying-account-management-on-aws-cloud-using-assumerole-3d719bafd34f

https://redd.it/ksziyf
@r_devops
How many people here have "true" Continuous Deployment?

By "true" CD, I mean a GitOps flow where commits to master immediately kick off a deployment to the production environment.

If so, how did you get there? I think the main problem is getting your technical leadership to feel comfortable enough with your tests to allow it.

https://redd.it/ktk0cd
@r_devops
Load balancer in a multi-master Kubernetes cluster

Hi, I am looking for best practices for load balancer configuration in a multi-master cluster. Suppose there is also a web server which needs a load balancer endpoint. I have a few questions:
1. I assume that there will be different VIPs for the control plane and the web application endpoint, correct?
2. How do we ensure that the api-server calls are not swamped by the calls to the web server if the same load balancer is used? Or do people use two separate load balancers for this configuration?

I am looking for best practices in this configuration.

https://redd.it/ksym84
@r_devops
How to Continuously Deliver Kubernetes Applications With Flux CD




I followed the guide below for Flux CD:

https://medium.com/better-programming/how-to-continuously-deliver-kubernetes-applications-with-flux-cd-502e4fb8ccfe

## But I'm getting an error:

Error: cannot connect to git repository with URL ssh://git@github.com/xx/nginx-kubernetes.git

Full error message:

git clone --mirror: fatal: Could not read from remote repository., full output:
Cloning into bare repository '/tmp/flux-gitclone470494538'...
ssh: Could not resolve hostname github.com: Try again
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
Run 'fluxctl sync --help' for usage.

https://redd.it/ksxemt
@r_devops
Recommend me a solution to manage application versions and deployments

Background: we are using GitHub with AWS CodeDeploy today to deploy microservices from one central AWS account to multiple AWS accounts (dev, qa, prod etc). In short, these are triggered from the GitHub Actions UI, which invoke AWS Linux runners that build, create artifacts and use AWS CodeDeploy to create and deploy a release (cross-account deployments).

Now, I need a higher level abstraction where devs/product owners can see a list of applications (i.e. microservices), stabilities (dev/qa/prod), customers (X/Y/Z) and group them in any way. For example, it should be possible to see the current version of an app deployed in stability = qa and customer = Y, or should be possible to see all microservices running for customer Z across all stabilities.

It doesn't look like any of the AWS in-built solutions fit the bill (including CodePipeline). When I try to look elsewhere, I saw the feature set of Spinnaker ([www.spinnaker.io](https://www.spinnaker.io)) and while it has the concept of applications, it tries to do the deployments itself (as opposed to leveraging CodeDeploy).

Core capabilities I am looking for:

* Ability to trigger deployments for services (single or multiple together)
* Ability to have manual approvals (Spinnaker supports this)
* Ability to see which version (i.e. which git SHA) is deployed where
* (Good to have) Scheduled deployments
* (Good to have) View logs of builds (good to have because GitHub Actions would store this)

Please let me know if you're aware of such a tool (other than spinnaker). This is not just a CD solution, but a higher level view of the entire software deployed.

My backup plan is to implement this from scratch.

https://redd.it/ksx7ho
@r_devops
Linux Troubleshooting - Why Is the Server So Slow?

Linux Troubleshooting – Why Is the Server So Slow? (Running Out of CPU, RAM, and Disk I/O)

https://redd.it/kswycf
@r_devops
D-E-V-OoPS! vs A-G-ILE///

What is Agile?

First, we will take a look at what Agile is all about so we can better compare it to DevOps later. Agile is a methodology designed around an iterative structure so that it can adapt to changes and respond to constant feedback from the end user. The point is to continuously deliver results that meet the needs of the client.

What is DevOps?

Then we can move on to DevOps and see how it compares to the Agile framework we looked at before. DevOps is a method that grew out of Agile. It differs slightly in that it takes into account the unique needs that come with increased software delivery velocity.

For more, simply go through this link: https://starweaver.com/portfolio/devops-vs-agile-everything-you-need-to-know/

https://redd.it/ksxob8
@r_devops
Pre-baked Docker images within a gitlab-runner job?

Hey guys ,


So in a pipeline for a microservice we have integration tests which run and do their job fine.
My issue is that we're needlessly losing time every single run on simple things like pulling in the dependencies.


So within the tests it will need postgres / redis / kafka etc. depending on the service.
Is there any smart way to "pre-pull" these Docker images so that they already exist?


For example, the job would look like:


integration-tests:
  stage: test
  services:
    - docker:19.03.0-dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  script:
    - npm run test:integration
  tags:
    - test-runner


But running this would attempt to pull the Docker images.


On the runner it runs on (`test-runner`), I was unsuccessfully attempting to add the images into the Dockerfile when building it:



RUN apk add --update docker openrc
RUN rc-update add docker boot
RUN docker pull busybox


would result in


Step 35/38 : RUN docker pull busybox
---> Running in d18a87927aa0
Using default tag: latest
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?


But even then I'm not sure if I should be using that; should I instead be trying to bake the images into `docker:19.03.0-dind`?


Sorry if this seems stupid, but any help would be appreciated.
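Not from the original post, but one option: dockerd (and therefore the dind service) accepts a --registry-mirror flag, so the service can be pointed at a pull-through registry cache running near the runner instead of pulling from Docker Hub every time. A sketch, with the mirror URL as a placeholder:

```yaml
integration-tests:
  stage: test
  services:
    - name: docker:19.03.0-dind
      command: ["--registry-mirror", "https://mirror.example.internal"]  # placeholder URL
  script:
    - npm run test:integration
```

This keeps the job definition unchanged apart from the service entry; baking images into the runner's own image doesn't help here, because the pulls happen inside the dind daemon, not in the runner container.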

https://redd.it/kts8wu
@r_devops
Looking for suggestions for a good VPN provider

Could someone suggest a VPN provider with:

1. a documented API,
2. the ability to renew the IP for a given city/country (ideally a non-US provider with no US servers), and
3. cost efficiency (annual payment)?

I'd also accept suggestions for a fitting /r/somefoo in which to ask/discuss such things.

https://redd.it/kttux7
@r_devops
Handy little GitHub action for auto-updating documentation

I spent some time yesterday building a GitHub action that automatically updates my documentation. I thought my final solution was kinda nifty and was wondering if you guys have solved this problem in other ways.


name: Generate Docs

on:
  push:
    branches: master

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.15

      - name: GenDocs
        id: GenDocs
        continue-on-error: true
        run: |
          GO111MODULE=on go get -u github.com/princjef/gomarkdoc/cmd/gomarkdoc
          ~/go/bin/gomarkdoc . > ./docs.md
          exit $(git diff docs.md | wc -c)
      - name: Commit Changes
        if: steps.GenDocs.outcome == 'failure'
        run: |
          git config --global user.name 'AutoDocAction'
          git config --global user.email '[email protected]'
          git add docs.md
          git commit -m "auto generated docs"
          git push


Although as I typed this up I realized I'm an idiot and I could have just allowed the Commit Changes step to continue-on-error.

https://redd.it/ktvnsk
@r_devops
Anyone know how I can use GitLab CI/CD to auto-accept the Chef InSpec license?

ERROR: Chef InSpec cannot execute without accepting the license

Thanks :)
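An aside not in the original post: recent Chef tools (InSpec 4+) accept the license non-interactively via the CHEF_LICENSE environment variable or a --chef-license flag, either of which can be set in the job. A sketch (the job name and profile path are hypothetical):

```yaml
inspec-scan:
  variables:
    CHEF_LICENSE: accept-no-persist  # or "accept" to persist acceptance
  script:
    - inspec exec ./profile --chef-license=accept-no-persist
```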

https://redd.it/ktwm9p
@r_devops
CI/CD pipeline creates Helm chart on the fly - anyone done this?

Hi DevOpsers,

looking for a little bit of feedback on this idea:

let's say we have a bunch of containerized apps. Every repo already has a docker-compose.yml file which allows developers to quickly spin up the app with all its dependencies locally.

As a next step, we want to add automation to our CI/CD pipeline which spins up a test instance of an app on a Kubernetes cluster whenever a new PR is created ("PR Environments"). This will be useful for a range of things, such as automated testing, user acceptance testing, customer demos etc.

My thinking here is that the docker-compose.yml files already contain most of what we need to achieve this (service definitions etc) and I'd like to avoid duplication. I.e. I wouldn't wanna maintain a separate set of K8S config files or Helm charts or similar.

The approach I'm trying out right now is to set up a CI/CD step which creates a Helm chart on the fly using kompose.io's alternative conversions. I.e. the Helm charts never get checked into source control; they always get created as part of the CI/CD build automation. Whatever additional config is needed on top of Docker Compose gets injected via --set, e.g.

helm install --wait --set service.type=LoadBalancer
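A sketch of what that CI step might look like, assuming kompose's --chart output option and a hypothetical PR_NUMBER variable supplied by the CI system:

```shell
# Convert the existing docker-compose.yml into a throwaway Helm chart:
kompose convert -f docker-compose.yml --chart --out ./chart
# Install it under a PR-specific release and namespace; extra config via --set:
helm install "pr-${PR_NUMBER}" ./chart \
  --namespace "pr-${PR_NUMBER}" --create-namespace \
  --wait --set service.type=LoadBalancer
```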

Any feedback on this approach, is this sensible? Anyone done something similar?

https://redd.it/ktyeak
@r_devops
Help understanding package repositories

Hey all, I'm new to the DevOps world and am working on ways to improve the current workflow of our C++ projects.

One thing I can't seem to wrap my brain around is the use of GitLab's Conan package repository (or any similar concept).

Is the point that we can store any required external libraries there, so they can be used in a later pipeline stage or on another branch, meaning we don't have to build them again and can be sure we have the right versions?

If so, would our conanfiles then reference our packages stored in our GitLab?

And would one person set up the packages initially so the full development team can pull them locally and reference them?
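To make that workflow concrete (all URLs, names, and the project ID below are hypothetical, not from the post): one person builds and uploads the packages once, and everyone else adds the remote and pulls prebuilt binaries:

```shell
# Point Conan at the project's GitLab package registry:
conan remote add gitlab https://gitlab.example.com/api/v4/projects/1234/packages/conan
# Authenticate with a GitLab personal access token:
conan user myuser -r gitlab -p "$GITLAB_TOKEN"
# Publisher: build once, then upload recipe and binaries:
conan create . mygroup+myproject/stable
conan upload "mylib/1.0@mygroup+myproject/stable" -r gitlab --all
# Consumers: conanfile.txt lists the reference under [requires],
# and the pipeline just runs:
conan install . -r gitlab
```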

https://redd.it/ktw8uc
@r_devops
SonarQube scanning in an Azure DevOps pipeline

I have some backend code written in Scala. I am using Sonar for static code analysis, and I have written a sonar.properties file for Scala. But when the scanner runs, it throws a Java compile issue. The code also contains some Play Framework modules. Can anyone help me with this issue?

https://redd.it/ktv3sx
@r_devops