Reddit DevOps
270 subscribers
5 photos
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
How to query a docker repo on an artifactory instance for a list of available images

I have a Docker repo on a JFrog Artifactory server. Is there a way to query all the images in the repo and return a YAML file?
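Artifactory exposes the Docker Registry v2 catalog endpoint for its Docker repositories, which is one way to get the image list. A minimal sketch, assuming the standard `/api/docker/<repo-key>/v2/_catalog` path; the host, repo key, and token are placeholders:

```python
import json
import urllib.request

def catalog_url(base_url, repo_key):
    """Build the Docker Registry v2 catalog endpoint that Artifactory
    exposes for a Docker repository."""
    return f"{base_url}/api/docker/{repo_key}/v2/_catalog"

def to_yaml(images):
    """Render a list of image names as a minimal YAML list
    (avoids a PyYAML dependency)."""
    return "images:\n" + "".join(f"  - {name}\n" for name in images)

def list_images(base_url, repo_key, token):
    """Fetch the catalog and return it as YAML text."""
    req = urllib.request.Request(
        catalog_url(base_url, repo_key),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return to_yaml(body.get("repositories", []))
```

For the tags of each image, the same registry API offers `/v2/<image>/tags/list` under the same prefix.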

https://redd.it/lypytb
@r_devops
Change container instance from DEFAULT cluster to ANOTHER_CLUSTER on the Amazon Linux 2 ECS-optimized AMI

Good afternoon folks,

The title is self-explanatory.

I need to move a container instance that was registered in the DEFAULT cluster. I have tried everything with no success.

Things that i tried:

1 - Deleted the contents of the checkpoint file used by ecs-agent and restarted the ecs-agent container, right after creating an ecs.config file with my target cluster name.

2 - Inserted the ecs.config configuration in the user data of my launch configuration.

EVERY ATTEMPT CREATES A CONTAINER INSTANCE IN DEFAULT CLUSTER :((((
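For reference, the agent-side configuration described in the attempts above usually boils down to one line, assuming the standard paths on the ECS-optimized AMI (the cluster name is a placeholder):

```ini
# /etc/ecs/ecs.config
ECS_CLUSTER=ANOTHER_CLUSTER
```

The agent also persists its registration state under `/var/lib/ecs/data`, so that checkpoint data has to be cleared before restarting ecs-agent; otherwise it re-registers with the cluster it previously recorded.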

https://redd.it/lyjsnz
@r_devops
Looking to switch into DevOps from cybersecurity, what should I focus on?

Ansible seems of no use to me -.-

View Poll

https://redd.it/lyfjcc
@r_devops
Automating provisioning of additional tenant infrastructure

Say you have a multi-tenant web application using the database-per-tenant approach, and each tenant requires its own S3 bucket as well. I can see how one would use IaC (Terraform) to bootstrap/deploy the initial application (zero tenants). But if you wanted to be able to dynamically create a new tenant automatically (from a website sign-up form), what strategies or IaC tools would be used?
I feel like this is a common enough problem but I can’t find much specific information on it (or I’m not using the right search terms).
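One common pattern is a per-tenant Terraform module, applied by an out-of-band worker rather than the sign-up request itself. A minimal sketch of the worker side; the variable names and bucket prefix are illustrative, not from any particular stack:

```python
import json
import os
import subprocess

def tenant_tfvars(tenant_id):
    """Render per-tenant Terraform variables (names are illustrative)."""
    return {
        "tenant_id": tenant_id,
        "db_name": f"tenant_{tenant_id}",
        "bucket_name": f"acme-tenant-{tenant_id}",  # hypothetical prefix
    }

def provision(tenant_id, workspace_dir):
    """Write a tfvars file and run `terraform apply` for one tenant.
    A queue worker would call this after sign-up, so the web request
    never blocks on infrastructure provisioning."""
    varfile = os.path.join(workspace_dir, f"{tenant_id}.tfvars.json")
    with open(varfile, "w") as f:
        json.dump(tenant_tfvars(tenant_id), f)
    subprocess.run(
        ["terraform", "apply", "-auto-approve", f"-var-file={varfile}"],
        cwd=workspace_dir, check=True,
    )
```

Keeping one Terraform workspace (or state file) per tenant keeps the applies independent; Terraform Cloud's API, or tools like Pulumi, can drive the same flow programmatically.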

https://redd.it/ly4cw7
@r_devops
How do single tenant systems backup customer uploaded data?

I am working on a fun/learning project, pretty much a shitty CMS, that is mainly about learning and implementing DevOps/SRE best practices, and I'm looking for help with a design question related to backups and DR.

Let's say I have a single-tenant CMS core that runs in a Kubernetes pod, with the option for users to upload files like images that the site/service displays to end users. Ideally, I'd have some way of backing these up while still keeping them secure and restorable in the event of a DR situation.

How can I have it so someone could upload an image that the Kubernetes pod uses to serve traffic, but that is still backed up and usable in the event of a disaster?
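A common answer is to treat object storage (e.g. an S3-compatible bucket) as the source of truth for uploads, with the pod either serving from it directly or syncing a local cache. The incremental-sync decision itself is small; a sketch, where the hash-map shapes are an assumption of this example:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Content hash used to detect unchanged files cheaply."""
    return hashlib.sha256(data).hexdigest()

def plan_backup(local, remote):
    """local/remote: {relative_path: content_hash}.
    Return the paths that still need uploading to the backup bucket."""
    return sorted(p for p, h in local.items() if remote.get(p) != h)
```

Restore is then the inverse sync; bucket versioning plus lifecycle rules adds point-in-time recovery without extra tooling.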

https://redd.it/ly0c02
@r_devops
Self-hosted heroku-like solution?

I’m looking for a solution where I can play around with some ideas and possibly convert some of them into paid apps. Heroku is great, but it puts unused apps to sleep and tends to get very pricey when trying to scale.

Is there a self-hosted solution where I could spin up multiple NodeJS apps on a DigitalOcean droplet and assign a subdomain to each? I was looking at Flynn, but development on it seems to have stalled. Something like Bunnyshell looks like what I want, but $50/month (on top of the DigitalOcean droplet) is out of my budget for experiments.

https://redd.it/lxyxfs
@r_devops
Helm chart repository on GCP

Hi,

What's everyone using to store their helm charts on GCP? Just Google Artifact Registry?

https://redd.it/lyrwii
@r_devops
Benchmarking Fluentbit vs Fluentd

Hi all, I want to benchmark the performance and resource usage of Fluentd vs Fluent Bit. My use case is an edge environment; we are aware that Fluent Bit is designed for edge and IoT environments with limited resources, and we'd like to validate this by benchmarking the two tools on our servers. I am running them as DaemonSets on a k3s cluster. If you have any tools in mind to achieve this, that would be very helpful. Thanks
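One approach is to feed both agents an identical, controlled workload and compare CPU/RSS (e.g. via `kubectl top pod` or cgroup stats) while they tail the same file. A sketch of such a load generator; the record shape and rate are arbitrary choices of this example:

```python
import json
import time

def make_record(i):
    """One synthetic log record, shaped roughly like a container log line."""
    return json.dumps({"seq": i, "level": "info", "msg": f"event-{i}"})

def generate(path, rate_per_sec, total):
    """Append `total` records to `path` at roughly `rate_per_sec`,
    so Fluentd and Fluent Bit each tail the same known workload."""
    interval = 1.0 / rate_per_sec
    with open(path, "a") as f:
        for i in range(total):
            f.write(make_record(i) + "\n")
            f.flush()
            time.sleep(interval)
```

Counting the sequence numbers that actually arrive at the output side also lets you measure drops and back pressure, not just resource usage.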

https://redd.it/lyq4i3
@r_devops
Email Me When My Site is Down

Imagine you launch your website and out of the blue your site goes down and you have no idea. Yikes! Well, that's where https://github.com/Salaah01/website_pinger/ comes in!

I had my website go down a few days ago and only noticed when hardly any traffic was coming to my site https://www.bluishpink.com. And so, I've written a little bit of Python, triggered by a shell script, that will email me from now on whenever the site is down.

Just a little code you can use to email yourself if your server is down!
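The repo's own code isn't reproduced here; a minimal sketch of the same idea (HTTP check plus SMTP alert), where the SMTP host and addresses are placeholders:

```python
import smtplib
import urllib.request
import urllib.error
from email.message import EmailMessage

def site_status(url, fetch=urllib.request.urlopen):
    """Return the HTTP status code, or None if the site is unreachable.
    `fetch` is injectable so the check can be tested without a network."""
    try:
        with fetch(url, timeout=10) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None

def is_down(status):
    """Treat no response or a 5xx as 'down'."""
    return status is None or status >= 500

def alert(url, smtp_host, sender, recipient):
    """Send a plain-text alert email (SMTP details are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = f"DOWN: {url}"
    msg["From"], msg["To"] = sender, recipient
    msg.set_content(f"{url} did not respond with a healthy status.")
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)
```

Running it from cron (or the shell-script trigger mentioned above) every few minutes is usually enough for a personal site.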

https://redd.it/lyplw7
@r_devops
New developer trying to understand

Hey all! I'm a very green developer (I was moved into the space after developing an 'application' for a team which saved a ton of money). Given that a 'DevOps' team was literally formed around myself and a few individuals doing development/scripting/coding type work, I am wondering if the function of our team is more just lip service to DevOps, i.e. the company deciding that 'the world is doing DevOps, so should we'. I would like to better understand how many products, development platforms, and services a single developer typically supports. Does a typical setup look like a project request team, i.e. one development team servicing multiple other teams' requests for new products/services? Or is it more like you support a single application, service, or product your company provides, making sure it stays running, optimizing it, and developing new features?

https://redd.it/lxc6in
@r_devops
How do you tune for performance/diagnose bottlenecks on a server when you develop on a local machine?

I've built a microservice architecture that works pretty well on a single workstation. When I deploy to a larger server and run things at full speed (it ingests data from different sources), I start to run into various bottlenecks due to CPU allocation, memory, and disk IOPS. For example, one of the services is Elasticsearch, which runs on the JVM and can have issues with back pressure if things go too fast, so it has to be tuned.

I haven't dockerized things and know I need to. Up until this point I have been FTP'ing files to the server, then running a bash script that starts all the services, with a follow-up command to begin processing data.

Looking for advice/direction on how to best troubleshoot bottlenecks/tune for performance on a server, when the code is written on a dev machine.

Thanks in advance, I'm new to devops but am trying to learn fast.

https://redd.it/lxbtq5
@r_devops
Experience with multi frontend setups?

Hi! I've now set up Kubernetes, CI, and CD for our project. It's great, but our frontend is getting increasingly bulky and builds take a long time.

From an operations perspective the answer is obvious. Split it into micro frontends and toss Nginx in front of it for the core routing. Each frontend can have its own container and you can call it a day.

From a development point of view? Not so much. We could maintain a docker-compose file for starting the project, but that will require one Dockerfile for development and one for production. Duplicate setup, but not too bad.

The dev setups would have to mount src and .yarn as volumes: src to enable live reloading, and .yarn to include the build cache.

node_modules can't be included as native build modules may have different binaries.

Installing dependencies would be really awkward. Stop docker-compose, yarn add some-library, docker-compose up.

It's absolutely manageable, but it feels kind of Frankenstein-y and implements a lot of stuff twice (rebuilding on every change is not an option for frontend development). Is this the best I can do?
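The dev setup described above might look something like this in compose terms; the service name and paths are illustrative:

```yaml
# docker-compose.override.yml (development only)
services:
  shell-frontend:
    build:
      context: ./shell
      dockerfile: Dockerfile.dev      # dev image: yarn + file watcher
    volumes:
      - ./shell/src:/app/src          # live reload
      - shell-yarn-cache:/app/.yarn   # persist the build cache
    # node_modules deliberately NOT mounted: native build binaries differ
volumes:
  shell-yarn-cache:
```

For the dependency-install awkwardness, `docker compose exec shell-frontend yarn add some-library` avoids the stop/add/up cycle.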

https://redd.it/lx0u58
@r_devops
Rebuilding, testing, and deploying all microservices even when there wasn't an update to them?

I'm just curious if this is considered a bad practice because it seems like it would slow the pipeline down considerably.

I'm working on an Azure DevOps Pipeline that I'm trying to keep to a single file, and I'm trying to implement functionality where it avoids building, testing, and deploying a service if there have been no changes to it. So if I have the following microservices:

* `/` (`client` repo)
* `/admin` (`admin` repo)
* `/api` (`api` repo)

If there have been no changes to `api` or `admin` but there have been several to `client`, the pipeline shouldn't rebuild them, run their tests, and redeploy; really, it should pull the latest images and do integration testing with the `client` service, which does need to be built, tested, and deployed.

So is it fine to let it rebuild, test, and deploy these unchanged services or should I implement something to prevent it?
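A lightweight way to implement the skip, independent of any particular CI product: record the commit SHA each service was last successfully built from, and compare on every run. A sketch, where the dict shapes are an assumption of this example (Azure Pipelines trigger path filters can achieve something similar declaratively, though less easily in a single multi-repo file):

```python
def services_to_build(current, last_built):
    """current / last_built: {service: commit_sha}. Only services whose
    head moved since the last successful run need build/test/deploy;
    the rest can just pull their most recent image for integration tests."""
    return sorted(s for s, sha in current.items() if last_built.get(s) != sha)
```

A service missing from `last_built` (never built before) is always selected, which gives first-run behavior for free.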

https://redd.it/lx1vnn
@r_devops
Handling secrets in Flux v2 repositories with SOPS

This is part 2 of my series on “GitOps with Flux v2”: "Handling secrets in Flux v2 repositories with SOPS"

If you’re not familiar with what Flux is and how it helps you build GitOps workflows on Kubernetes, feel free to read part 1 here: “Introduction to GitOps on Kubernetes with Flux v2”.

In today’s guide we will look at Mozilla SOPS and learn how to incorporate it with Flux v2 to store encrypted secrets in our GitOps repositories and have Flux decrypt them automatically during deployments.

Hope this is helpful to someone.

https://redd.it/lx0qfl
@r_devops
Artifact/Package versioning

How does everyone handle versioning of their artifacts/packages?

Are you using semantic versioning, increasing with every change, and just deploying once it passes your pipeline? How do you then track what needs to be deployed, or what's in dev/production?

Do you not version artifacts at all and just rely on your source code versioning? Same questions as above.

Do you add a tag/version as it moves through your pipe? One for feature branch, dev, master, etc, so that you always know what's where and just have your pipeline change the tag?

Something else?

In my case specifically, it's just RPMs right now and maybe eventually Conan packages. No continuous delivery to customers (no connectivity to them) atm.

I understand it's a choice of what fits your organization; I'm just looking to hear some pros and cons, and issues you've run into with different methods.
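As one concrete option among those above: semantic versioning plus a pre-release suffix that records where a build came from, so the bare version only appears once a build reaches master. A sketch; the branch naming and suffix format are illustrative (for RPMs the suffix usually belongs in `Release:` rather than `Version:`, since hyphens aren't allowed there):

```python
def bump(version, part):
    """Increment a semantic version. part: 'major' | 'minor' | 'patch'."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

def build_version(base, branch, build_number):
    """Release builds get the bare semver; everything else gets a
    branch/build suffix, so any artifact can be traced to its pipeline run."""
    if branch == "master":
        return base
    return f"{base}-{branch}.{build_number}"
```

Tagging the artifact once and re-promoting that same artifact through dev/production (rather than rebuilding per stage) keeps "what's where" answerable from the repository alone.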

https://redd.it/lz3ysl
@r_devops
Junior DevOps salaries/life in London?

Just curious for some outside perspective on this, as I’m moving to London at the end of the year. I’ve gotten different answers from different people so it’s hard to be sure what’s what.

Also, how is the work there? I’ve heard work-life balance can be better but I guess it depends. Thanks

https://redd.it/lz7mrw
@r_devops
Looking for non-dev friendly batch job operation service

My organization runs a lot of containerized batch jobs, mostly for importing and exporting data from third-party APIs on behalf of our customers. Today, jobs are both provisioned and operated by our devops people. The main causes of failure in these jobs are external and I would like our support organization to handle most of these failures, since the customer usually wants a notification and explanation and the main resolution is anyway to re-run the job with a slightly different configuration.

Thus, I would like a system somewhat like Airflow or Argo to manage these jobs. But unlike those systems, I would like one which (in falling order of importance):

- has easy to use (i.e. point-and-click web UI) job provisioning
- allows devops to operate infrastructure and configuration (e.g. job templates)
- supports both scheduled and manual runs
- provides easy access to job log files and basic metrics (e.g. RAM consumed)
- has a reasonable API for programmatic access

Hashicorp Nomad combined with Hashi-ui (https://github.com/jippi/hashi-ui) comes relatively close, but is disqualified because it provides no support for easy-to-use provisioning. Azkaban also comes relatively close, but seems not to have strong support for containers.

Does anyone know of such a system or service, preferably a FOSS one?

(My understanding of Airflow, Argo, and Celery springs from research rather than operational experience, so info on how to extend these to fill my need would also be an appreciated answer.)

https://redd.it/lz5y66
@r_devops
Incremental introduction of IaC and DevSecOps to a traditional IT department

I've been in IT Operations since 1996. I've always enjoyed scripting a process, whether it was in Perl (young people, that was the Python of the day) or VBScript, etc. Through the years, I have seen new languages evolve and have been really excited about the idea of coding everything and getting away from "right-click to glory" and "lemme ssh in and fix that" style operations.

My current challenge is convincing my leadership that DevSecOps is not just for companies that produce software, but for traditional IT shops as well. I've done all the PowerPoint slides on how IaC and CaC are going to increase reliability, configuration management, <insert itsm lingo>, but I can't seem to get any momentum.

Side-note, I'd say that 85% of our Operations is maintaining existing rather than building new things.

I'm looking for a quick, flashy, smallest thing that could work to show everyone that it is not only possible, but better.

A good robust ecosystem takes buy-in, hours and dollars; and I want to get there, but I need a spark and some kindling.

Creating "one-offs" is never a good thing, so there is going to be resistance to "use this whole other process to do the one thing" so it has to be Culture Changing from the start.

Any advice, war-stories, or README.md's would be greatly appreciated

https://redd.it/lz5qg2
@r_devops
Easiest way to make file local

We have many Supermicro servers spread worldwide. We need to install Ubuntu on all of them from an ISO. We have the ISO autoinstall seed ready.

Only problem is, we can't attach the ISO file locally to all the remote servers as it would take forever.

What would be the easiest way of setting up the ISO locally at each location? The plan is to make the file accessible from each location and run an Ansible playbook that would auto-install the OS through the IPMI.

Thanks ahead!

https://redd.it/lza3h4
@r_devops
Self-hosted tools similar to bitbucket?

I'd like a graphical interface for my git repos at home.

What are the free offerings?

https://redd.it/lzc7u2
@r_devops
Continuous Deploy workflow for deploying to Virtual Machines and Kubernetes?

What are people's strategies for deploying to Virtual Machines and Kubernetes? My company is currently transitioning to Kubernetes, but there are some applications that are just not ready to be put onto Kubernetes, and some that may never be, for various reasons: being a legacy application that leaks and needs to be rewritten, needing very specific drivers, etc.


I'm looking to unify the process in which we deploy to Kubernetes and Virtual Machines and curious if anyone has done this yet. We're using GitLab CI for our build pipelines right now.
Current solutions I'm looking into:

* Octopus Deploy - started out doing virtual machine deployments; now they support Kubernetes as well.
* GitLab CI + Helm + Ansible - this expands on our current solution of using GitLab CI and Helm to deploy to Kubernetes; when there is a Virtual Machine deployment, we'll have an Ansible playbook make sure the VM has all the required prerequisite packages, firewall rules, service accounts, etc., then pull the application down from some raw package store as a zip, extract it, and run it (generally through systemd).


What other solutions have you used or can think of? I'm also looking into Spinnaker, but I'm not sure it does what I need. I'd like something that follows a similar flow for both Kubernetes and VMs, to abstract as much as we can from the developers who deploy.

https://redd.it/lz4ex9
@r_devops