Reddit DevOps
270 subscribers
5 photos
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
quick file sharing solution :-)

Have you ever needed to quickly copy a file from one computer to another, but the boxes did not want to talk to each other? (For example, one is a Windows machine and the other is a livecd-booted Linux without a Samba client...) Yes, me too... :-)

Have fun: https://hub.docker.com/r/michabbs/trashbox
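The same itch can often be scratched with nothing but the Python standard library, which tends to be present even on a live CD. A minimal sketch of the idea (my own illustration, not part of the linked image):

```python
# Ad-hoc file sharing: serve the current directory over HTTP so any
# machine with curl, wget, or a browser can grab files. Stdlib only.
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve_directory(port: int = 8000) -> HTTPServer:
    """Share the current working directory on `port` (0 = pick a free port).

    Runs in a background thread; call .shutdown() on the returned server
    when you are done.
    """
    server = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

On the sending box this is the same trick as `python -m http.server 8000`; the receiving box only needs an HTTP client.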

https://redd.it/l2vxi3
@r_devops
Some career advice needed

Hi! I've been unemployed for a while now and I'm looking to get back into the working space.

I have chosen a government-funded .NET coding bootcamp that I am planning to follow. Now, as I might have expected, the government services are not really premium quality, and it's taking forever to hear back from these people. I am therefore considering pursuing a career as a DevOps engineer instead.

I have had multiple job offers for DevOps engineer positions because of my Linux experience and my home Python coding projects, but I am hesitant to go into DevOps. I am well aware that DevOps people have to code as well, but I only see things like Ansible and Terraform scripts as "coding", with some Bash and Python on the side. What I was looking for in coding is more like back-end development (writing program logic, managing databases, ...) and maybe some scripting as well (Terraform, Ansible). I wonder if going into DevOps will give me many opportunities to do back-end work too, instead of only scripts with IaC tools and automation.

I don't know if I'm making sense or if people will understand what I mean; English is my second language and I'm often not very good at explaining myself. XD But I thought I would give it a shot on this subreddit anyway. Thanks!

https://redd.it/l2n8rw
@r_devops
What would be the best way to automate an Android mobile device to turn smart lights on and off?

Workflow

Pair the Android device with the RPi through an app (GE, Philips)

Pair the smart bulb to the RPi over BLE using the app

Turn the smart bulb on and off continuously from the app.

Trying to check BLE connectivity by using the app, to ensure the app is not the cause of packet delivery/scanning problems, while having multiple beacons advertising nearby.

Was thinking about learning Appium but not sure if this is the most efficient way.

Thanks!

https://redd.it/l2ebkr
@r_devops
VM health status on Telegram

Looking for ideas.

I would like to create a Telegram bot that sends messages to vCenter administrators when a node is having issues.

Any ideas how I can integrate that?

Currently I'm testing out Rundeck with Ansible for CI and CD, and using Packer to 'pack' VMs.

Works perfectly.

Can I integrate Telegram with Rundeck?

I am using the community version of Rundeck.

I'm willing to explore other tools as well.
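For what it's worth, the notification itself is just one HTTPS call to the Telegram Bot API's sendMessage method, so any tool that can run a script (Rundeck job notifications included) can send one. A hedged sketch; BOT_TOKEN and CHAT_ID are placeholders you obtain from @BotFather and your admin group, and the node/status values would come from your own vCenter checks:

```python
# Minimal Telegram notifier using only the stdlib. The Bot API endpoint
# https://api.telegram.org/bot<token>/sendMessage takes chat_id and text.
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/sendMessage"

def build_alert(node: str, status: str) -> dict:
    """Build the sendMessage payload for a node health alert."""
    return {"text": f"vCenter alert: node {node} is {status}"}

def send_alert(token: str, chat_id: str, node: str, status: str) -> None:
    """POST the alert to Telegram; raises on HTTP errors."""
    payload = build_alert(node, status)
    payload["chat_id"] = chat_id
    req = urllib.request.Request(
        API.format(token=token),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Wrapping this in a small CLI script would let Rundeck invoke it as an on-failure notification step.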

https://redd.it/l37ac0
@r_devops
Are build & test really parts of a DevOps engineer's job?

I've read many articles about the tasks of a DevOps engineer, and almost all of them emphasize learning tools like Maven for builds and Selenium for testing.
And now my questions are:

1) If I as a (future) DevOps engineer have to build and test the software, then what are developers and QAs doing?!

2) If build and test really are parts of a DevOps engineer's job, then why don't I see them in the skill requirements of job offers?!

Thank you in advance for clarifying. 🙏🏻

https://redd.it/l2e428
@r_devops
Job framework recommendations?

Hi Everyone,

I am building a very data-heavy application, which uses a lot of code (Python scripts/executables/etc.) that periodically executes and performs tasks.

In the past I've used Jenkins for this; it was easy to schedule and monitor how the tasks do (the log printouts from builds are nice), see failures, add new tasks, etc. However, this doesn't seem like the right use case for Jenkins, as Jenkins bills itself as a CI/CD tool. Not sure if that should even matter to me, as it seems to work fine.

Does anyone use any scheduling frameworks that they'd recommend? I really value the monitoring (which is why I don't just use cron).
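The monitoring gap is the usual reason people move from cron to frameworks like Airflow, Prefect, or Dagster, which all ship dashboards for exactly this. The core of what any of them add over cron can be sketched in a few lines; a toy illustration only, not a suggestion to roll your own:

```python
# Why people outgrow cron: wrap each job so failures are logged and
# recorded instead of disappearing silently.
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("jobs")

def run_monitored(name, fn, stats):
    """Run one job, recording success/failure like a CI build log would."""
    try:
        fn()
        stats[name] = "ok"
        log.info("job %s: ok", name)
    except Exception:
        stats[name] = "failed"
        log.error("job %s failed:\n%s", name, traceback.format_exc())
```

A real framework layers scheduling, retries, and a UI on top of this same success/failure bookkeeping.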

Thanks!

https://redd.it/l2c6s1
@r_devops
My career is “frozen” but I am not lazy

For the past 3 years I have been working as a DevOps engineer.
Technologies like Kubernetes, Docker, Jenkins, Ansible, and Nexus/Artifactory are not new to me, and I think I keep getting better. I complete all my day-to-day tasks without needing any help from anyone on the team.

Before my DevOps position I was a Java developer for 6 years.
That experience helps me a lot to write better pipelines and shell scripts, to understand the needs of developers, and to make the right decisions when we are talking about releases, etc.

My main problem now is that all of my colleagues on the DevOps team have a sysadmin background. So they are far better than me at investigating problems, networking, and all the sysadmin stuff.

I feel I need a lot of sysadmin experience. That's why I decided to build myself a home lab: mostly some VMs using KVM, which I then start breaking.
The question is how I can produce some realistic scenarios, instead of just reading sysadmin blogs/articles and reproducing that stuff.

https://redd.it/l270m2
@r_devops
CockroachDB in GitLab CI services

I have configured test cases in a GitLab CI pipeline, and they require a database connection to run. So I am trying to configure a CockroachDB database in GitLab CI services, but I can't connect to the database container from the app container. Here is my sample gitlab-ci.yml file:

test:
  image: node
  stage: test-cases
  variables:
    DATABASE_URL: postgresql://test_db:password@localhost:26257/test_db?sslmode=disable
  services:
    - name: cockroachdb/cockroach:v20.1.4
      alias: localhost
      entrypoint:
        - "bash"
      command:
        - -c
        - >
          mkdir certs my-safe-directory
          COCKROACH_DB="$(cat /etc/hosts | grep $HOSTNAME | cut -d$'\t' -f1)"
          cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
          cockroach cert create-node localhost 127.0.0.1 cockroachdb $(hostname) --certs-dir=certs --ca-key=my-safe-directory/ca.key
          cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
          cockroach start-single-node --certs-dir=certs --listen-addr=0.0.0.0:26257 --http-addr=0.0.0.0:8080 --background
          cockroach sql --certs-dir=certs --execute="CREATE DATABASE test_db; CREATE USER test_db WITH PASSWORD 'password'; GRANT ALL ON DATABASE test_db TO test_db;"
          tail -f /dev/null
  cache: {}
  script: |
    curl -i https://localhost:8080  # not able to connect to the service
    npm test
  allow_failure: true
  only:
    - master

I also tried connecting using an alias, but that didn't work either. Help me out if anyone has an idea.

https://redd.it/l1ryiy
@r_devops
Versioning images and code releases

How do you guys handle versioning of application or code releases?

I would like to use a semantic versioning structure, but it has not been easy to do this the Kubernetes way, as most images on Docker Hub, for example, have their version numbers (and architectures) in the tag field. This makes sorting tags and determining the versions before the latest one pretty difficult.

I have also tried inspecting image descriptions to trace versions, but even there, there is no clearly defined structure (or widely used best practice).
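Part of the sorting pain is that a plain lexicographic sort puts "1.10.0" before "1.2.0". A small sketch of the usual workaround, parsing tags into integer tuples first (real-world tags with arch suffixes or "latest" simply get filtered out here):

```python
# Sort Docker image tags semantically instead of lexicographically.
import re

# Matches plain semver tags, with an optional leading "v".
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def semver_key(tag: str):
    """Return a sortable (major, minor, patch) tuple, or None if not semver."""
    m = SEMVER.match(tag)
    return tuple(int(g) for g in m.groups()) if m else None

def sort_semver_tags(tags):
    """Return only the parseable semver tags, oldest to newest."""
    parseable = [t for t in tags if semver_key(t) is not None]
    return sorted(parseable, key=semver_key)
```

With the tags ordered, "the version before latest" is just the second-to-last element.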

https://redd.it/l1li9q
@r_devops
What is the simplest log aggregation tool out there?

I have some log files on a server. I want to aggregate them into a single persistent place, do some basic searches over them, and occasionally just watch incoming logs.

Preferably I'd want to install just a single binary for the centralized server and the log shipping agents (if required), with simple config and low overhead. No JVM, for the sweet love of God.

The best thing I've found so far is Papertrail, which has some limitations in the free version. SumoLogic is OK but kind of bloated for what I want. Simple is the key criterion; Elasticsearch, for example, is way too bloated.

Any ideas?

https://redd.it/l1m0x8
@r_devops
Anyone else using Uniform Resource Names (URNs) for Asset Tracking and Automation?

Just like AWS's "arn", some shops use a resource name like this to identify assets. These ARNs give an asset a 'name' to serve as a unique identifier. Having that name allows you to develop a service where you can 'attach' metadata to it (e.g. datacenter, region, managed_by, is_live, etc.). With that metadata in hand, you can automate many things (e.g. enabling/disabling monitoring, where to send alerts, billing, etc.).

One challenging area is naming the URN. Specifically, since URNs are strings delimited by ":", what do you define each token to be? One common approach is some form of reverse DNS (e.g. com.example.devops.service1). Curious if anyone else is using URNs, how you're using them, and your approach to coming up with the tokens of the URN.
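Whatever token scheme gets picked, it helps to pin the positions down in code so every consumer parses them the same way. A sketch; the urn:org:env:service:resource layout here is made up for illustration, and yours would follow your own taxonomy:

```python
# Fix the URN token positions once, then parse and validate against them.
from typing import NamedTuple

class AssetURN(NamedTuple):
    org: str
    env: str
    service: str
    resource: str

def parse_urn(urn: str) -> AssetURN:
    """Split a colon-delimited URN into named tokens, rejecting malformed input."""
    prefix, *tokens = urn.split(":")
    if prefix != "urn" or len(tokens) != 4:
        raise ValueError(f"not a valid asset URN: {urn!r}")
    return AssetURN(*tokens)
```

Named fields mean metadata lookups read as `parse_urn(u).env` rather than the brittle `u.split(":")[2]`.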

https://redd.it/l3ey3g
@r_devops
Director of IT & DevOps?

I'm not in DevOps, I recruit for a software company. Our VP of Product & Engineering is splitting the team into 2 groups: Product (for our core product, dashboard, etc) and "everything else" (AWS, corporate website, marketing automation, Salesforce, database, internal IT, etc)

He's asking for help coming up with a title for the "everything else" team. Has anyone seen a Director of IT & DevOps title? Or do you have any other suggestions for a role like I'm describing?

Any insight is appreciated!

https://redd.it/l0r154
@r_devops
Traditional IT role with a State agency to DevOps?

Good Morning,

I wanted to see if anyone has an opinion on this. I currently have about 6 years of experience working in IT at a state agency, primarily as an administrator for a few proprietary bits of software that we use (an internal case management system and a SaaS that manages digital evidence). I have been looking into DevOps for some time now, as I have experience with Python, Docker, virtualization, and various other IT functions, but most of the experience that would be applicable when applying for a job comes from my home lab and not so much from my office. The agency that I work for doesn't really have opportunities to work with these kinds of systems, unfortunately.

I was originally looking to transfer to a role in QA as an entry point into the software development world, since I won't be able to get anything like that where I am currently, but a lot of people I spoke with gave me the impression that doing so would most likely lock me into that position, and that if I want to go for DevOps I should pursue that actively instead. My question: with my current experience in and out of the office, should I try to get a certification to help me stand out, or should I just apply and hope that I can demonstrate that I am passionate enough about this that I do most of my learning in my own time and on my own dime?

If anyone is interested I would love to get some feedback on my resume, although I may need to update it because right now it's mostly tailored towards QA.

Thank you in advance for any and all assistance on this matter.

https://redd.it/l0lryc
@r_devops
How do I come up with a proof of concept?

So I have a question which might be slightly off topic.

I am not a DevOps engineer per se, but my job pretty much revolves around that.

Nevertheless, I pitched an idea to automate some of the stuff the team does on a regular basis (continuous deployment) to one of my managers, who agreed that we should implement it, but he asked me to come up with a proof of concept.

So, I was wondering: what do I have to include in a proof of concept?

I researched online and got various results.

I was hoping someone who does this regularly could point me in the right direction, or even give me a good example of a proof of concept in the DevOps industry.

Sorry to ask such a silly question; I have not done this before, nor am I formally educated.

Thanks in advance


:))

https://redd.it/l0iwsj
@r_devops
New to DevOps, can't really understand something related to docker-compose

Hey guys, great community here by the way..

There is something I can't understand about the integration between a Dockerfile and docker-compose.yml.

First - is it a must to have the Dockerfile in the same location as the docker-compose.yml file?

Second - I can write a line in the docker-compose file such as: image: <SomeDockerHubImagePull>

Will it also build that image? If so, why do I need a Dockerfile at all? Only for specific CMD/RUN commands?

Third - from what I understood, the Dockerfile defines the "base" image, and docker-compose.yml is kind of the place where I specify all of my services related to that base image.

So if I only specify the related services in the docker-compose.yml, such as redis/consul services, and I run docker-compose up, will it also execute the Dockerfile located in the same folder as the docker-compose.yml? Will it know that the services I specify in the docker-compose file are dependent on the image I build in the Dockerfile?
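A tiny compose file makes the split concrete: Compose only builds services that declare a `build:` key (whose path sets where the Dockerfile is looked up, so it need not sit next to the compose file), services with only `image:` are pulled as-is, and dependencies are declared explicitly with `depends_on` rather than inferred. Service names below are just examples:

```yaml
version: "3.8"
services:
  app:
    build: .         # built from ./Dockerfile; run `docker-compose up --build`
    depends_on:
      - redis        # compose starts redis before app; not inferred automatically
  redis:
    image: redis:6   # no build: key, so this image is pulled, never built
```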

Thanks to anyone willing to explain :):)

https://redd.it/l0ijlj
@r_devops
What would be the advantage of separating the App Gateway and AKS cluster into two different VNets and then peering them together, as opposed to having all of them in one VNet?

I understand that at the subnet level the App Gateway needs its own subnet in Azure, but I am trying to understand what kind of performance/security pros/cons would come from having two VNets like this vs. one VNet.

https://redd.it/l3kyi6
@r_devops
Setting up OAuth for Grafana using Terraform and Auth0

I have some personal servers where I tend to install a bunch of internal tools that I want to check regularly (such as Kibana, Grafana, ...). I don't really store sensitive information, but I still don't want those to be publicly accessible on the internet. So over the years I've built a bunch of workarounds: basic auth, SSH tunnels, and whatnot.

I recently invested some time in setting up proper OAuth as a real solution that I can use over time. I started with Grafana. I used Auth0, but after some bad experiences not remembering what I had done in the UI, I now have everything in code using Terraform. It feels like a much more stable and maintainable solution. I wrote about the topic if you are interested in the details:

https://hceris.com/setting-up-oauth-for-grafana-with-auth0/

https://redd.it/l0ge4e
@r_devops
SaaS platform for automated server backups

Hello,

I am a web developer. Do you think that a SaaS platform for automated server backups (VPS servers, block storage volumes, managed databases, files and folders) for hosting providers (DigitalOcean, Vultr, Linode, etc.) would be a good project idea for attracting many customers to my website? It would serve the same purpose as some platforms that already exist, like snapshooter.io, simplebackups.io, backupsheep.com, backup.ninja, etc.

For example, let's focus on just DigitalOcean for now.

- VPS servers: they offer automated backups on a weekly basis, so not daily or hourly, or even more frequent. My service would allow this.

- Block storage volumes: they offer manual backups, so not automated. My service would allow this.

- Managed databases: they offer a retention of 7 days, so more than 7 backups are not possible. My service would allow this.

- Files and folders: this is not something a hosting provider offers. My service would allow backing up just certain files and folders from the VPS servers.

Now, let's say that some day (very unlikely, I think) DigitalOcean in particular decides to change its backup model, so that my solution becomes partially or totally useless because they would be doing the same as me. Well, that would only affect this particular hosting provider. The other ones would still have weak points, just like DigitalOcean has now.

With this service the customer can manage all the types of backups (VPS, volumes, databases, files) for all the hosting providers in just one dashboard. It even makes it possible to have one centralized place for all the customer's backups, which can be stored in the hosting provider's own cloud storage, in the customer's own cloud storage provider, or in my cloud storage as part of the subscribed plan. So the backups can be stored either inside or outside the customer's hosting providers. If the customer chooses to store the backups externally, this would add an extra layer of security.

Everything is as automated as possible for the customer. The customer doesn't have to worry about the hosting provider's limitations: manual backups, backup periodicity, backup retention, or expensive backup costs. My service is intended to solve all these limitations, as each hosting provider has a different backup model, and none of them would satisfy the customer as much as I would. With my platform the customer has centralized, full control over all their backups.

So, coming back to the original question: do you think this project idea is feasible and can be successful in the sense that it can attract many customers? I think the similar platforms I mentioned at the beginning are having success, and the idea would be to make something similar but improved and somewhat unique.

Regards,

Néstor Llamas

https://redd.it/l3hv7q
@r_devops
New to DevOps: best scripting language for DevOps?

Hi all,

I'm thinking of learning a scripting language for automation; can anyone with experience share their take on this?

https://redd.it/l3gw72
@r_devops
Deploy Sentry through CloudFormation using only AWS services

# TL;DR

If anyone else is interested, I've written an alternative to this stack in CloudFormation, which is deployed via AWS ECS (through either SPOT or ON-DEMAND Fargate containers) and supports all relevant micro-services.

It has been tested alongside Performance Monitoring on a platform with 5 different environments, which generates on average about 5k events per hour, using just t2.* instance classes for RDS/Redis/Kafka.

Link = [https://github.com/Rungutan/sentry-performance-monitoring](https://github.com/Rungutan/sentry-performance-monitoring)

# What is Sentry?

Sentry is a service that helps you monitor and fix crashes in realtime. The server is in Python, but it contains a full API for sending events from any language, in any application.

With Performance Monitoring, teams can trace slow-loading pages back to their API calls as well as surface all related errors. That way, Engineering Managers and Developers can resolve bottlenecks and deliver fast, reliable experiences that fit customer demands.

## Web vitals

More important than understanding that there’s been an error is understanding how your users have been impacted by that error. By gathering field data (variable network speed, browser, device, region) via Google’s Web Vitals, Performance helps you understand what’s happening at your user’s level. Now you know whether your users are suffering from slow loading times, seeing unexpected changes, or having trouble interacting with the page.

## Tracing

Trace poor-performing pages not only to their API calls but to their children. Performance's event detail waterfall visualizes your customer's experience from beginning to end, all while connecting user device data to its expected operation.

## Transaction monitoring

With performance monitoring, you can view transactions by slowest duration time, related issues, or the number of users — all in one consolidated view. And release markers add another layer of context so your team can gauge how customers react to code recently pushed to production.

# How do I deploy it?

Let me make it clear before we go any further: **Sentry** prides itself on being [open-source](https://sentry.io/_/open-source/), but it does offer a cloud-based solution as a [SaaS](https://sentry.io/welcome/) for those who do not want to deploy, manage and maintain the infrastructure for it.

There are a few community-contributed ways of deploying it on premise if you decide not to go for the cloud version:

* One way is the **docker-compose** setup mentioned in one of Sentry's official GitHub repositories - [getsentry/onpremise](https://github.com/getsentry/onpremise)
* Another is a community-built **Helm** chart available in this repo - [sentry-kubernetes/charts](https://github.com/sentry-kubernetes/charts)
Both of these solutions though have some downsides, specifically:

* Scaling ingestion of events is a bit hard due to the hard capacity limits of both solutions
* It is a well known fact that database systems perform better on NON-docker infrastructure points
* Keeping up with the different changes in versions is usually a hassle
* Customizing the different bits and pieces such as integrations require a lot of man hours

That's why, for those of you who use **Amazon Web Services** as your preferred cloud provider, we've put together **a fully scalable, easy to maintain and secure infrastructure** based on the following AWS services:

* AWS ECS Fargate
* AWS RDS
* AWS ElastiCache
* AWS MSK (Kafka)
* AWS OpsWorks
* AWS VPC
* AWS CloudWatch

You can deploy it by following these simple steps:

1. Create the stack in CloudFormation using this link ->