Reddit DevOps
266 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Managing Continuous Deployment with a Site that Manages Long Term Projects

I've got an interesting deployment problem I'm trying to solve. We have a website that walks users through a project step by step, with many steps relying on the data generated by the previous step. These projects can take anywhere from a couple of days to over six months to get through all the steps. Up to this point we have been able to coordinate with our users to time deployments of updates between projects, in order to avoid the issue of previous versions of the process not being compatible with the new version. However, this is going to be unsustainable as we begin to scale, and we would like to be able to deploy updates at a much more frequent clip than we currently do.


I'm assuming this must be a somewhat common issue and am hoping you guys might be able to point me in the direction of some of the best proven strategies for handling this type of issue. Mapping projects to a specific version of the site and keeping multiple versions of the site up and running seems like it could quickly become a nightmare for both the deployments and maintaining the code, but somehow ensuring backwards compatibility for all updates seems impossible. Thanks in advance!
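One common pattern for the "mapping projects to a specific version" option is to pin each project to the process version it started under and dispatch each step to that version's handler, so old projects keep their behavior while new projects get the latest process. This is only a sketch of the idea, not anything from the post; all names (`ProcessV1`, `PROJECTS`, etc.) are hypothetical:

```python
# Sketch: pin each project to the process version it was created under,
# and dispatch each step to the handler registered for that version.
# All names here are hypothetical placeholders.

HANDLERS = {}

def register(version):
    """Register a process-version handler class under a version number."""
    def wrap(cls):
        HANDLERS[version] = cls
        return cls
    return wrap

@register(1)
class ProcessV1:
    def run_step(self, step):
        return f"v1:{step}"

@register(2)
class ProcessV2:
    def run_step(self, step):
        return f"v2:{step}"

# Each project records the version it was created with.
PROJECTS = {"proj-a": 1, "proj-b": 2}

def run_step(project_id, step):
    version = PROJECTS[project_id]          # pinned at project creation
    return HANDLERS[version]().run_step(step)

print(run_step("proj-a", "collect-data"))   # old project stays on v1
print(run_step("proj-b", "collect-data"))   # new project uses v2
```

The upside over running multiple copies of the whole site is that only the step logic is versioned, and a version's handlers can be deleted once the last project pinned to it finishes.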

https://redd.it/pvlfnk
@r_devops
On-Premise Systems Version Control

Hi folks! I have a problem with having many on-premise customers. My company used to have many differently modeled databases, with different column types, names, etc. I developed an ETL to consolidate everything into a unified PostgreSQL schema and started migrating all the data. The main problem is that the system is so complex that I can still hit errors while deploying my script, because I can never know what kind of problem I will face with a given customer. It can be fine for 50 customers, and then the 51st will throw an error. I can fix it for that specific customer, but then their version differs from the rest. My idea is to create a repo in GitLab (which the company already uses) and have an Airflow scheduler trigger a daily job that downloads the Python script and runs it. This way I can update every customer with the same version. I have never done something like this before, so any ideas on how to do this better?

*I don't want to become a full-on DevOps engineer. It can be simple :)
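Whatever runs the daily job, it helps if the 51st customer failing doesn't abort the run for the other 50. A sketch of per-customer error isolation (the `migrate` function and customer names are hypothetical placeholders for your real migration logic):

```python
# Sketch: run the same migration against each customer, isolating
# failures so one customer's schema quirk doesn't abort the whole run.
# `migrate` and the customer names are hypothetical placeholders.

def migrate(customer):
    if customer == "customer-51":           # stand-in for an unexpected schema quirk
        raise ValueError("unknown column type")
    return "ok"

def run_all(customers):
    results = {}
    for customer in customers:
        try:
            results[customer] = migrate(customer)
        except Exception as exc:            # record and continue; fix offline later
            results[customer] = f"failed: {exc}"
    return results

results = run_all(["customer-1", "customer-51", "customer-52"])
print(results)
```

The recorded failures then become a per-customer fix list instead of a blocked deployment.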

https://redd.it/pttsg9
@r_devops
What is your experience as a freelancer?

I have worked for five years as a consultant, involved in many projects focused on provisioning, configuring, and integrating services like Kafka, Spark, Snowflake, and many other tools, cloud services, and databases, as well as building CI/CD pipelines (GitLab, Azure DevOps, and GitHub Actions). The automation tools I use are mostly based on the HashiCorp stack and Ansible.

I am thinking of leaving my current job and starting as a freelancer in Germany. What is your opinion on such a move, considering the areas of focus I've mentioned above?

I would love to hear your experiences and advice if you have shifted from being a full-time employee to opening your own company or freelancing.

https://redd.it/pw4gsj
@r_devops
How to point a Helm chart at an EKS cluster?

I'm trying to deploy a Helm chart to an EKS cluster but don't know how to point Helm at the right cluster.
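Assuming the goal is to get `helm` talking to a specific EKS cluster, the usual route is to fetch a kubeconfig entry for the cluster and let Helm use the resulting kubectl context; cluster name, region, and release/chart names below are placeholders:

```shell
# Write kubeconfig credentials for the cluster (name/region are placeholders)
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Helm uses the current kubectl context by default
helm install my-release ./my-chart

# Or target a specific context explicitly
helm install my-release ./my-chart \
  --kube-context arn:aws:eks:us-east-1:123456789012:cluster/my-cluster
```

`kubectl config get-contexts` shows the context name that `update-kubeconfig` created, if it differs from the ARN form shown here.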

https://redd.it/ptnrv3
@r_devops
Please vote: Dynatrace idea - pie chart widget for SLOs in the dashboard for visualisation, instead of a number

Hi Everyone,

Are there any other Dynatrace users out there? How do you find the platform?

Also, I submitted a feature request: https://www.dynatrace.com/support/help/shortlink/service-level-objectives?_ga=2.56844266.1579419949.1632188484-330688188.1630396371#analyze-problems - it would be greatly appreciated if the subreddit could vote for it if you find it useful.

https://redd.it/psatei
@r_devops
Ubuntu + DataSourceVMware via Terraform's extra_config

##### Mission statement:
I am attempting to deploy Ubuntu VMs with Terraform on vSphere 7. Unfortunately, I've had no luck using `extra_config` to pass metadata/userdata to DataSourceVMware. It's very easy to use `vApp Properties` to transmit `hostname`, `instance-id`, and `user-data`, as they're clearly exposed by the template. But user-data is not a valid location for specifying the network (https://cloudinit.readthedocs.io/en/latest/topics/network-config.html#default-behavior). So my solution (for network configuration) has been to use `runcmd` to create a `netplan` file in `/etc/netplan`. This seems like a silly kludge.

This may be an Ubuntu or Terraform specific question. If so, and this is the wrong sub, my apologies.

##### Question / Request for Assistance:
Has anyone successfully used the extra_config to interface with the VM during deployment as referenced below (copied from Grant Orchard's blog):
```
extra_config = {
  "guestinfo.metadata"          = base64encode(file("${path.module}/templates/metadata.yaml"))
  "guestinfo.metadata.encoding" = "base64"
  "guestinfo.userdata"          = base64encode(file("${path.module}/templates/userdata.yaml"))
  "guestinfo.userdata.encoding" = "base64"
}
```
If so, what might I be doing wrong? Do I need to disable vApp properties, or specify something in Terraform to enable `extra_config`?
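For reference, the `metadata.yaml` passed via `guestinfo.metadata` is where network configuration can live with DataSourceVMware, which would avoid the `runcmd`/netplan kludge. A minimal illustrative example (all values are placeholders; the interface name depends on the VM hardware):

```yaml
# Illustrative metadata.yaml for guestinfo.metadata (values are placeholders)
instance-id: ubuntu-vm-01
local-hostname: ubuntu-vm-01
network:
  version: 2
  ethernets:
    ens192:
      addresses:
        - 192.168.1.50/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

The `network` block follows cloud-init's network-config v2 (netplan-style) format linked in the post.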


##### References:
https://grantorchard.com/terraform-vsphere-cloud-init/
https://github.com/vmware-archive/cloud-init-vmware-guestinfo

##### Version details:
- Client system (on which terraform is run): Ubuntu 20.04.3 LTS
- ESXi: 7.0.2 / Build: 18538813
- vCenter Server: 7.0.2 / Build: 18455184
- Cloud Image: https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-amd64.ova
- Terraform v1.0.7
- on linux_amd64
- provider registry.terraform.io/hashicorp/template v2.2.0
- provider registry.terraform.io/hashicorp/vsphere v1.24.3

https://redd.it/pweoaj
@r_devops
Distributing GH actions over private GH repos / docker repos

The GH Actions docs say:


The actions you use in your workflow can be defined in:

* A public repository
* The same repository where your workflow file references the action
* A published Docker container image on Docker Hub

I work for a customer with private GH repositories only, so distributing over public repos is not an option. I wonder what the alternatives are?

As I see it, it's also possible to distribute actions using Docker containers, but again they would have to be pulled from the public Docker Hub; or maybe there is an alternative that uses private Docker registries?
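One workaround that keeps everything private is to check out the private repository containing the action into the workflow's workspace and then reference the action by local path. A sketch (org/repo names, the secret name, and the action path are placeholders; the token needs read access to the actions repo):

```yaml
# Workflow sketch: use an action from a private repo by checking it out
# first and referencing it with a local path. Names are placeholders.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Check out the private repo that contains the action
      - uses: actions/checkout@v2
        with:
          repository: my-org/private-actions
          token: ${{ secrets.ACTIONS_REPO_TOKEN }}  # PAT with repo read access
          path: .github/private-actions
      # Reference the action by its local path
      - uses: ./.github/private-actions/my-action
```

The cost is an extra checkout step per workflow and managing the access token, but nothing leaves the private repos.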


Thanks

https://redd.it/pwkgtn
@r_devops
Simplest way to automate deploys

Hi everyone, I'm a backend engineer trying to get into DevOps. At the moment, I've been trying to set up a pipeline that triggers a docker-compose build after a push to a given branch.

After some research, tools like Jenkins and GitLab CI/CD seem kind of overkill for what I want to do.

Are there simpler technologies available to solve this problem, or should I just go for one of these?
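If the server can receive the pushes directly, a bare-repo `post-receive` git hook is about as simple as this gets: no CI server at all. A sketch, assuming a bare repo on the deploy host (paths and branch name are placeholders):

```shell
#!/bin/sh
# post-receive hook sketch: rebuild with docker-compose when the main
# branch is pushed. Paths and branch name are placeholders.
while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/main" ]; then
    git --work-tree=/srv/app --git-dir=/srv/app.git checkout -f main
    docker-compose -f /srv/app/docker-compose.yml up -d --build
  fi
done
```

This goes in `hooks/post-receive` of the bare repo (marked executable). The trade-off versus Jenkins/GitLab CI is that you get no build history, logs, or retries; that's usually the point at which the "overkill" tools start earning their keep.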

https://redd.it/pwmejc
@r_devops
Top resources for learning Linux

I'm an Azure Cloud Admin and come from a Windows background. I've been learning and using Terraform, Docker, and Python to position myself for more of a DevOps-type role. I think I've reached the point where, in order to move any further along in my career, I need to finally learn Linux. It's always been on my list of skills to learn, and I'm seeing that most DevOps/CloudOps positions require solid Linux knowledge.

I'm not starting from zero in regards to Linux knowledge, but I wouldn't say I'm much better than a beginner. I really want to become a Docker/Kubernetes expert, so if there is a way to bundle learning Linux with either of those technologies, I'd be all for it.

https://redd.it/pwop2i
@r_devops
Has anyone here moved from devops to another role?

Recently I've been thinking of switching things up, as I'm not particularly interested in "devops" anymore (read: IaC, cloud, and building pipelines) but stick around because it's a fairly easy and pretty well-paying job.

I've been thinking of perhaps switching to network engineering as I've got pretty good knowledge of networking already (all my experience is in the cloud though). Has anyone else made this switch or similar?

https://redd.it/pwpk38
@r_devops
Startup (small to medium size) vs large enterprise: which one is better?

Hello everyone,
I have 4+ years of experience working as a software developer + DevOps/SRE. I have worked at 3 different companies now, all of which were small to medium size startups. They had very rapid, high-pressure development environments where things are created and pushed very fast.

Right now I am working at a startup which has around 500+ employees and is worth around $2B. As a DevOps/SRE I am responsible for CI/CD, automation, monitoring, cloud and container tech, databases, etc. I may not be an expert in all of these horizontals, but I have either fair or good knowledge of each.

Now I have an offer from a very big enterprise. It has more than 16k employees and is a subsidiary of a FAANGM company. To my understanding they have a different team for each of the technologies they use, e.g. a different team for Kafka, or relational DBs, or some monitoring tech.

As a DevOps/SRE guy I don't want to restrict myself to just one particular technology/tool. This domain is already too dynamic; sticking to just one tech would mean entering some sort of comfort zone, I think.

If anyone who has worked at large-scale companies, or anyone with better knowledge and understanding of this, could shed some light on it...

Context: India, if it makes any difference.

Thanks

https://redd.it/pwpvct
@r_devops
Concourse trigger pipeline to run from another pipeline

I'm using a pipeline with the `set-pipeline` step to set another pipeline whenever a push to its repo is detected. The problem is that after setting the updated pipeline, I'd also like to trigger it to run, but I'm not sure how to do so.

For example, here is what my auto-setting pipeline looks like; I would like to figure out how to run the pipeline `build_to_registry` after setting it.

```
resources:
- name: my-repo
  type: git
  source:
    uri: ((GIT-REPO))
    branch: main
    paths:
    - yamls/pipelines/
    username: ((repo-username))
    password: ((repo-password))

jobs:
- name: reconfigure-mypipeline
  plan:
  - get: my-repo
    trigger: true
  - set_pipeline: build_to_registry
    file: my-repo/yamls/pipelines/build_to_registry.yml
    vars:
      repo-username: ((repo-username))
      repo-password: ((repo-password))
      registry-username: ((registry-username))
      registry-password: ((registry-password))
```

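There's no built-in "run after set" flag on `set_pipeline`, but one common pattern is to give the first job inside `build_to_registry` a `trigger: true` get on the same repo resource, so the push that re-sets the pipeline also queues a build once Concourse detects the new commit. A sketch of what that downstream pipeline might contain (job, task, and file names are assumptions, not from the post):

```yaml
# Sketch of build_to_registry.yml: the first job triggers on the same
# repo resource, so a push both re-sets the pipeline and starts a build
# here. Job/task/file names are placeholders.
jobs:
- name: build-and-push
  plan:
  - get: my-repo
    trigger: true   # fires when Concourse detects the new commit
  - task: build
    file: my-repo/yamls/tasks/build.yml
```

The downstream pipeline would declare the same `my-repo` resource, fed by the vars already passed through `set_pipeline`.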
https://redd.it/pwrm5h
@r_devops
OneDev 5 - the open source DevOps server now gets an agent-based CI/CD farm and full Git LFS support

OneDev is an all-in-one DevOps server with Git repository management, built-in CI/CD, and issue boards, featuring ease of use, high performance, and modest resource usage.

The 5.0 release adds an agent-based CI/CD farm and full Git LFS support.

# Agent-Based CI/CD Farm

In addition to running CI/CD jobs on a Kubernetes cluster, OneDev is now able to run jobs on remote machines via agents. Agents can run jobs with or without containers, based on the executor type you are using. Agents are designed to be zero-maintenance: they update automatically when the server is upgraded. Check the tutorials to explore more about agents.

# Git LFS Support

Git LFS is now fully supported, with HTTP access, SSH access, and file locking. The CI/CD checkout step can also retrieve LFS files into the job workspace if the "Retrieve LFS Files" option is turned on.

https://redd.it/pwub7h
@r_devops
Confused about my next step...

Hi guys! I'm a 21-year-old graduating senior with a computer network and systems management degree, and I'm honestly kind of lost about what to do for my career. I've been reading about DevOps and cloud architecture as careers, and they sound the most interesting to me. My professor told me that getting the CCNA was the best first step given my major, but in my personal opinion system administration doesn't seem very viable in the future. Is getting an AWS cert the correct next step for me? I'm currently studying Linux systems administration on LinkedIn Learning and want to decide which cert to go for next. Should I be going for the CCNA, or do you think the AWS Solutions Architect is a better fit for transitioning to DevOps and cloud computing? Thank you so much for the help!

https://redd.it/pwunsh
@r_devops
DevOps as a student

I'm a CS student with a background in IT.
I recently got a job offer as a DevOps student at a cybersecurity company.
I'm also expecting a job offer as a software developer at a networking company.
I don't know which one of them is better for me and my future career.
If I take the DevOps job, will it be considered programming experience in case I want to apply for a SWE position in the future? What's my best course of action in your opinion?

https://redd.it/pwrm33
@r_devops
Random question from somewhat of a lay person: I’m using Amazon Lightsail to host a dedicated Halo 2 server (lol) - Should I use Windows Server 2012 or 2019?

I figured it would be better to use Windows Server 2012 instead of 2019 in terms of compatibility, since it's closer to the release of Halo 2 (2007). Is there any merit to this, or would I be completely fine with 2019? I kind of hate the 2008-ish layout of Windows Server 2012, but if it might have better compatibility with the Halo server application, I'm fine with using it.

https://redd.it/pwrfqh
@r_devops
Buildkite

We're currently evaluating Buildkite and are wondering if anyone has feedback they'd like to share.

https://redd.it/pwr5qf
@r_devops
Prometheus basic auth

Hi there,

We've set up a Grafana and Prometheus cluster with success, but I'm wondering one thing. Grafana is working with LDAP authentication, but I want to protect the Prometheus instance with basic auth (see their documentation: https://prometheus.io/docs/guides/basic-auth/). However, I have no idea how I can achieve this with their Docker image from Docker Hub. Has anyone done this successfully in a Docker container?
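For what it's worth, Prometheus (2.24+) reads basic-auth users from a web config file passed via `--web.config.file`, which can simply be mounted into the official image. A sketch (the username, paths, and of course the bcrypt hash are placeholders):

```shell
# web.yml holds the basic-auth users; the hash is a placeholder -
# generate a real one with e.g. `htpasswd -nBC 10 "" | tr -d ':\n'`
cat > web.yml <<'EOF'
basic_auth_users:
  admin: $2y$10$REPLACE_WITH_BCRYPT_HASH
EOF

# Mount it into the official image and point Prometheus at it.
# Note: passing any args overrides the image defaults, so --config.file
# must be repeated explicitly.
docker run -d -p 9090:9090 \
  -v "$PWD/web.yml:/etc/prometheus/web.yml" \
  prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --web.config.file=/etc/prometheus/web.yml
```

The same web config file can also enable TLS, per the basic-auth guide linked above.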

Kind regards.

https://redd.it/px14ve
@r_devops
What differentiates a Senior DevOps from a Mid Level?

It's about time for me to start the job search again, and I currently have 4+ years of experience in DevOps (CI/CD, Kubernetes, cloud, etc.). I'm curious what the expectations for a senior-level position are compared to a mid-level one.

https://redd.it/px12a4
@r_devops
How is it to work as a devops on a daily basis? am i following a path that matches that profile?

Hi there!

Let me explain a little context. A year ago I finished my sysadmin studies, somewhat at a university level, but with a more professional focus.

From that point I jumped into learning MongoDB, Python, and Ansible, but got "interrupted" because I started a full-stack web-dev course earlier this year that will end in less than a month. Now I want to take advantage of that and mix the MongoDB and Ansible basics I already know with web dev, getting into the MERN stack (MongoDB, Express, React, and Node.js) as a project for 2022, setting Python aside for now. I also learned a bit of MySQL (known from before the web-dev course), Git, and the GitHub workflow.

My main question/concern is this: let's say I get a job as a DevOps engineer in two or three months. What does your workflow look like on a daily basis? I mean, which tasks do you repeat the most? Do I fit the DevOps profile? I already know a bunch of sysadmin-related things from my studies.

Also, any recommendation about what to learn is really welcome. Thanks in advance, and I hope you have/had/are having a great day!

Edited pre-posting: yes, I deleted the previous post because I mistyped the title; I'm sorry.

https://redd.it/pwm3tp
@r_devops