Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Tutorial 51 - Methods With The Same Name In GO | Golang For Beginners

Watch the video tutorial on creating methods in Golang with the same name, premiering today at 10AM IST on my YouTube channel, Brainstorm Codings.

Make sure to subscribe to the channel if you find it interesting, like the video, comment your thoughts, and share.

https://redd.it/nhhjbj
@r_devops
AWS service with CI

Has anyone hooked up their AWS services with CI? I use EC2 and S3 quite a lot, and I deploy using the AWS CLI. But I was curious whether anyone runs tests (however you do it) and then deploys an EC2 instance using the AWS CLI?

https://redd.it/nhcubs
@r_devops
Opinion Kubik: language to define validation rules

I'm working on a language for defining validation rules. Its purpose is to validate Kubernetes and other cloud configurations.

In this post I'm trying to collect opinions on the overall syntax. The entire doc is in the README.md. The examples there use "real life" cases (no k8s, etc.).

https://github.com/kubevious/kubik

Thank you!

https://redd.it/nhcccx
@r_devops
Need help updating dependency on Lambda

I'm using a Python (3.8) runtime on AWS Lambda, and one of the packages I'm using requires OpenSSL 1.1.1+, but the Amazon Linux instance used by Lambda has an older version (1.0.2k).

I've read about people doing this for NodeJS runtimes (Stack Overflow link), but I'm too much of a noob to fully understand it.

How can I achieve this? I'm already using Lambda Layers to update Python libraries, but I have no idea how to do this for native Linux ones.
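
For context, the same layer trick can work for native libraries: layers are extracted under /opt, and /opt/lib is on the runtime's LD_LIBRARY_PATH, so shared objects placed in a lib/ folder inside the layer zip are found automatically. A sketch, under the assumption that Amazon Linux 2's openssl11 packages provide the needed 1.1.1 libraries (layer name and paths are placeholders):

```shell
# layer layout: lib/ at the zip root ends up at /opt/lib in the Lambda sandbox
mkdir -p layer/lib
# inside an Amazon Linux 2 container (matching the Lambda environment), roughly:
#   yum install -y openssl11-libs
#   cp -a /usr/lib64/libssl.so.1.1* /usr/lib64/libcrypto.so.1.1* layer/lib/
# then zip and publish (layer name is a placeholder):
#   (cd layer && zip -r ../openssl-layer.zip lib)
#   aws lambda publish-layer-version --layer-name openssl-1-1 \
#       --zip-file fileb://openssl-layer.zip
ls -d layer/lib
```

Attach the layer to the function alongside the existing Python-library layers; whether the package then finds the newer OpenSSL depends on how it loads the library, so this is a sketch to verify, not a guarantee.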

https://redd.it/nhc6ib
@r_devops
Managing Binaries/Executables for Jenkins Agents

I'm getting a CI/CD server set up in Jenkins on Kubernetes, and I'm struggling to find good documentation around my issue. How do people manage executables required for a pipeline?

Some things that come to mind are the gcloud SDK and the sops library for remote decryption of secrets, but I'm sure other things could apply. So my question is this: what are the "best practice" ways of handling these things?

My initial thought was to create a custom image with all of the goods I need, but I reach the catch-22 of needing the gcloud SDK to access the image, because we store our images in GCP's container registry. Some other things I've read include creating permanent agents with the software you need included, but my current setup uses the Kubernetes plugin to dynamically create pods for agents and assign them to nodes in our GKE cluster.
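
On the GCR catch-22 specifically: the image pull is done by the node's kubelet, not by anything inside the pod, so on GKE the node's service account can usually pull from the same project's Container Registry with no gcloud involved; for other cases, an imagePullSecret on the agent pod template works. A hypothetical pod-template sketch (all names are placeholders):

```yaml
apiVersion: v1
kind: Pod
spec:
  imagePullSecrets:
    - name: gcr-pull-secret       # docker-registry secret built from a GCP SA key
  containers:
    - name: jnlp
      image: gcr.io/my-project/jenkins-agent:latest   # custom image with gcloud + sops baked in
```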

So, I'd love to hear everyone's thoughts, experiences, and industry go-to's for the issue!

https://redd.it/nh8qaz
@r_devops
What does a DevOps engineer do? A different point of view

Recently I spoke with an IT recruiter. She said that a DevOps engineer is a sysadmin who knows a little backend and frontend development. I said instead that a DevOps engineer is not a sysadmin plus a developer, but a sysadmin who knows how to manage system infrastructure with code; I was referring to Python and Ansible above all.
Who is right? Let me know, thanks.
Maybe neither, lol.

https://redd.it/nh6lss
@r_devops
Does the Datadog monitor query support multiple tags with the same key?

I am using the Terraform datadog_monitor resource to deploy some monitors. One thing I came across is that the "query" parameter of the datadog_monitor resource only works with single tags, as shown below.

resource "datadog_monitor" "http_dev_test_demo" {
..........................
query = "'http.can_connect'.over('environment:dev').by('host','instance','url').last(5).count_by_status()"
.........................
}

The one below does not work:

resource "datadog_monitor" "http_dev_test_demo" {
..........................
query = "'http.can_connect'.over('environment:dev','environment:demo','environment:test').by('host','instance','url').last(5).count_by_status()"
.........................
}

Has anyone come across this issue, or does anyone know a solution other than creating separate monitors for each environment?

https://redd.it/ngnrga
@r_devops
Restricting use of a certain Python library for developers

Imagine that we've learned the Python simplejson package has a major security vulnerability. The engineering teams have spent a few days replacing it with safer substitutes, so it's now secure and safe to ship. How would you ensure there are no regressions in the future? I.e., that no one adds the simplejson package back in.
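
One lightweight way to guard against the package coming back (a sketch; the file patterns and CI wiring are assumptions about how dependencies are declared) is a grep-based check that fails the build whenever simplejson reappears in a dependency manifest:

```shell
#!/bin/sh
# fail the build if the banned package shows up in any dependency manifest;
# the --include patterns are assumptions about how dependencies are declared
banned=simplejson
if grep -rni --include='requirements*.txt' --include='Pipfile' \
     --include='setup.py' --include='pyproject.toml' "$banned" .; then
  echo "ERROR: banned package '$banned' found in a dependency file" >&2
  exit 1
fi
echo "OK: no banned packages found"
```

Run it as an early CI step; a pre-commit hook is another common place for the same check. A stricter variant inspects the installed environment (e.g. `pip freeze | grep -i simplejson`) so transitive dependencies are caught too.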

https://redd.it/nhujsz
@r_devops
Has anyone found success in switching to a night role?

Generally, this question is rooted in a lifelong struggle with ADHD, but for a long time (years) I've avoided the reality that I can work much better at night, focus better at night, and have zero need for medication at night, rather than living with the anxiety and guilt of struggling through each day in the workplace. I believe this may be a better solution than getting back on stimulants; I can't put my body through that anymore.


My question is: has anyone out there had this realization and switched to night hours, or found a role fitting that description in the wild? If you did make that switch, did you find that it worked for you? Any unexpected tradeoffs? I can think of a few that might come up, like communicating with daytime-hours teams or mandatory early meetings when they happen.


Typically I don't see this in job postings, but people in this sub are probably used to working with staggered team schedules and international teams in different time zones anyway.


Thanks for humoring my question.

https://redd.it/ni7xw5
@r_devops
Mid-Level DevOps Engineer Interviewing a Sr DevOps Engineer

I work at a large company, and my manager asked me to conduct the “team member interview” portion of the hiring pipeline.

I’m a mid-level DevOps engineer with 2-3 years of experience and will be interviewing an applicant with 6 years of experience. I’m conducting the interview for our sister team and am familiar with their tech stack, but I'm not sure how to “interview up”, as I’ve only ever interviewed interns and seasonal hires (college job, not tech).

Any Sr engineers or up-interviewers have advice?

Thanks guys!

PS: Love this subreddit

https://redd.it/ni33gl
@r_devops
Gitlab-CI: Passing version from one stage to the next

I'm running into a bit of an issue which I'm not sure I'm solving in the right way. This is for a personal project, basically to continue learning things about gitlab-ci, etc...


What I am trying to achieve is:

1. Commits pushed to master
2. Gitlab CI runs on master and runs tests, lint, whatever
3. If tests pass, a CI stage runs an automatic versioning tool (release-it, semantic-release, etc.) that bumps the version number and creates a commit with the updated package.json and CHANGELOG.md
4. The new version is then packaged into a docker image (tagged with new version, sentry release created with new version).
5. New docker image is pushed to deployment.

The problem is that the commit made in step 3 is not reflected in steps 4 and 5.

e.g.

The software is at 1.0.0 and I make some changes and run the pipeline. Step 3 runs and says, "cool, we can make this 1.0.1", and makes a commit back to the repo.

Steps 4 and 5 run and bundle the software and deploy it, which still shows 1.0.0 on the front end, with the version from package.json and without the updated CHANGELOG.md which was created/updated during the pipeline.

I hope that makes sense, and I'm totally unsure if I'm approaching this the right way. Basically I want the pipeline to create the next version of the software and release it.

I've found a bunch of stuff on automatic semantic versioning, but nothing about carrying the new version forward through the pipeline.
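
For reference, GitLab CI can hand values between jobs with a dotenv artifact: the versioning job writes the new version to a file, and later jobs receive it as a variable. A minimal sketch (job names and the release-tool invocation are placeholders):

```yaml
release:
  stage: release
  script:
    - npx semantic-release          # or release-it; bumps version, commits, tags
    - echo "APP_VERSION=$(node -p "require('./package.json').version")" >> build.env
  artifacts:
    reports:
      dotenv: build.env             # exported as a variable to later jobs

docker-build:
  stage: package
  needs: [release]
  script:
    - docker build -t "registry.example.com/app:$APP_VERSION" .
```

This carries the version number forward, but the packaging job is still building the pre-release checkout, so it may also need to fetch the release commit (e.g. `git pull origin master`) so the updated package.json and CHANGELOG.md land in the image; another common pattern is to let the tag that the release creates trigger its own build pipeline.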

https://redd.it/nhyxq5
@r_devops
Question about nip.io and including port number in ip address

Hi, I'm relatively inexperienced in this area. I have a machine with an external IP address from GCP. I want to create an OAuth app (say, Google OAuth), but they do not accept an IP address as it is not a "valid URL". I did some digging and learned about nip.io, which I assume is just a service that forwards requests from xyz.nip.io to xyz, xyz being the IP address.

Now, I write web apps that listen on a certain port (say 2021) and receive requests there. So I would usually go to 11.123.12.12:2021 to reach, say, the index of my web app. But I don't know how to specify that with nip.io. Should I do 11.123.12.12.nip.io:2021, say? Or maybe 11.123.12.12.nip.io works and some default port receives the request on the GCP machine (I did some googling to no avail).
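
For what it's worth, nip.io is wildcard DNS rather than a proxy: any name of the form &lt;ip&gt;.nip.io simply resolves to &lt;ip&gt;, so no traffic passes through nip.io, and the port is specified exactly as with a bare IP (e.g. http://11.123.12.12.nip.io:2021). A tiny illustration of the name-to-IP mapping (the IP here is the example one from the post):

```shell
# the IP is embedded in the hostname itself; nip.io's DNS just echoes it back
host="11.123.12.12.nip.io"
ip="${host%.nip.io}"   # strip the suffix to see what the name resolves to
echo "$ip"
# a live check would be: dig +short 11.123.12.12.nip.io
```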

I'm hoping someone can provide insight on this (I'm a newbie, so simplified terms would be great if possible), and on whether my understanding of using nip.io in this situation is correct. I hope this question is OK here!

https://redd.it/nhyj5n
@r_devops
GitHub integration with Teleport

Hi,

I have configured Teleport with GitHub as the OAuth provider. I am able to log in via the web UI and tsh, and I get the admin role which I configured while setting up GitHub OAuth.

The problem is that the node connection user list shows the username of the GitHub profile I logged in with, and that username normally doesn't exist on the nodes.

How should I set up GitHub OAuth so that the logins defined in a role are given to the user signing in via GitHub? Normal username-and-password authentication gives the correct login list for all the nodes.

Any help will be appreciated. Thank you.

https://redd.it/nhxyzc
@r_devops
Will this CI/CD pipeline work out?

Hi all,

I‘m a junior dev trying to come up with an idea for how my company (very small) could use a CI/CD pipeline to streamline some processes that are still done manually.
This is just an idea without any details; I would be grateful if you pointed out what I‘m missing.

3 Stages: Dev, Test, Production
3 Branches: feature, develop, master

1. Every Push to any feature branch triggers build, unit/integration tests and code analysis
2. After merging feature -> develop
a. Build docker image
b. Deployment to Dev stage
c. Extended automated tests
d. If passed, deployment to Test stage
3. Optionally manual tests on Test stage or customer review
4. Trigger deployment to Production stage, merge develop -> master branch
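
The steps above map fairly directly onto most CI systems. As an illustration only (using GitLab CI syntax, which may not be what the company runs; the deploy scripts are placeholders), it could look like:

```yaml
stages: [verify, package, deploy]

verify:                # step 1: every push, every branch
  stage: verify
  script: [make build, make test, make lint]

package:               # steps 2a-2d: image build, Dev deploy, extended tests, Test deploy
  stage: package
  rules: [{ if: $CI_COMMIT_BRANCH == "develop" }]
  script:
    - docker build -t app:$CI_COMMIT_SHORT_SHA .
    - ./deploy.sh dev
    - ./extended_tests.sh
    - ./deploy.sh test

deploy-prod:           # step 4: manual gate after review on Test
  stage: deploy
  rules: [{ if: $CI_COMMIT_BRANCH == "master" }]
  when: manual
  script: ./deploy.sh prod
```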


Thanks, any feedback is highly appreciated.

https://redd.it/nhx0xp
@r_devops
IaC and secrets

How do you guys handle your secrets (service passwords/tokens) as code? We wanted our secrets to live in source control (encrypted with git-crypt) and be written to some secrets storage, like Vault or AWS SSM, with Terraform. However, since we've been using Terraform Cloud, we couldn't get git-crypted files decrypted on the Terraform Cloud side.

My colleague is working on a Terraform provider for reading git-crypted files. If it works out, we are going to have our secrets under source control, managed by Terraform. They will remain decrypted in the TF state, but we are OK with that since Terraform Cloud stores it securely.

I expect it to work well, but I'm wondering if there are other ways to manage secrets under VCS securely. Can you guys share your experience?

https://redd.it/nhuhdx
@r_devops
Prometheus Metrics Push/Pull Relay?

I was wondering how you folks set up Prometheus scraping for endpoints that don't have inbound traffic enabled.

I am thinking of a use case such as servers running on-site, but running Prom/Grafana in AWS. Or maybe IoT devices deployed in remote locations, or just don't have web servers running.

Is there any sort of Prometheus relay that an endpoint can push metrics to, which will expose those same metrics for Prometheus to pull from? I believe Telegraf can do this, but I'm sure there are other methods, no?
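
One option here is the official Prometheus Pushgateway: endpoints push metrics to it over HTTP, and Prometheus scrapes the gateway like any other target. A sketch (the metric name and the gateway URL are made up for illustration):

```shell
# build a metric payload in the Prometheus text exposition format
payload() {
  printf '# TYPE room_temperature_celsius gauge\nroom_temperature_celsius %s\n' "$1"
}
payload 21.5
# pushing it to a Pushgateway (URL is an assumption) would look like:
#   payload 21.5 | curl --data-binary @- \
#     "http://pushgateway.example.com:9091/metrics/job/onsite_sensors/instance/site01"
```

Note that the Pushgateway keeps the last pushed value until it is deleted, so it suits batch jobs and one-way links more than liveness monitoring; running a small local Prometheus on-site and using remote_write toward AWS is another common pattern for this topology.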

https://redd.it/nhrnnp
@r_devops
Passing and creating metrics in Prometheus using Postgres queries

Hello, I want to create metrics from data I get from a Postgres DB. So far the metrics appear in Prometheus using query-exporter. The problem is that both metrics are big integers in Postgres, so the values I get are not the real ones for some reason. This is a code sample for my queries:

databases:
  db1:
    dsn: postgres://........

metrics:
  delay:
    type: histogram
    description: A sample gauge
  id:
    type: histogram
    description: A sample summary

queries:
  query1:
    interval: 5s
    databases: [db1]
    metrics: [delay, id]
    sql: SELECT delay AS delay, id AS id FROM table

These are the metrics as scraped by Prometheus:

id_bucket{container="prom-postgres-monitor", database="db1", endpoint="http", instance="0.0.0.0:9560", job="prom-postgres-monitor", le="+Inf", namespace="dev", pod="prom-postgres-monitor-g3g43g34g3g", service="prom-postgres-monitor"} 6141
id_bucket{container="prom-postgres-monitor", database="db1", endpoint="http", instance="0.0.0.0:9560", job="prom-postgres-monitor", le="0.005", namespace="dev", pod="prom-postgres-monitor-g3g43g34g3g", service="prom-postgres-monitor"} 0
id_bucket{container="prom-postgres-monitor", database="db1", endpoint="http", instance="0.0.0.0:9560", job="prom-postgres-monitor", le="0.005", namespace="dev", pod="prom-postgres-monitor-g3g43g34g3g", service="prom-postgres-monitor"} 0
id_bucket{container="prom-postgres-monitor", database="db1", endpoint="http", instance="0.0.0.0:9560", job="prom-postgres-monitor", le="0.005", namespace="dev", pod="prom-postgres-monitor-g3g43g34g3g", service="prom-postgres-monitor"} 0

Most of them are zeros, but the real values I have in Postgres are 10024958860, 10027398870, 10027401148, etc.

What metric type should I use to get the real data: enum, histogram, summary, etc.?

Also, does the data type I have in Postgres matter? For example, as I said, it is currently a big integer, and I'm not sure whether Prometheus is OK with these values.
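
For raw column values like these, a gauge is probably the right type: it reports the queried value as-is, whereas a histogram only counts observations into le buckets, which is why the output above is mostly bucket counters and zeros rather than the big integers themselves. A sketch of the metrics section using query-exporter's gauge type (assuming the rest of the config stays the same):

```yaml
metrics:
  delay:
    type: gauge           # exports the queried value directly
    description: Current delay
  id:
    type: gauge
    description: Current id
```

As for the data type, Prometheus stores samples as 64-bit floats, which represent integers exactly up to 2^53, so values around 10^10 are not a problem.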

https://redd.it/nikmlq
@r_devops
Simple Bitrise build dashboard

My writeup on a simple Bitrise build dashboard that can visualise all the branches with their build statuses. You can see your colleague’s or a dependent branch’s status in a single web page.

https://link.medium.com/zLevB0MWsgb

https://redd.it/nihlx7
@r_devops
Progressively Build an Optimized Docker Image for React Projects

Hi everyone, I'm following up on my series about building Dockerfiles, now with React:

https://www.codingholygrail.com/build-docker-image-for-react-projects

Hope you enjoy it, and as always please give me feedback on how you're deploying React on your container clusters.

PS: I know the vast majority of React apps are deployed on CDNs and other cloud providers (Vercel, Netlify). If you're using Docker, what additional steps do you take?

https://redd.it/nid6sk
@r_devops