Reddit DevOps
268 subscribers
30.9K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
If something goes sideways, one needs time to fix it within normal working hours and normal working days. And of course, I do not think they’ll meddle in technical stuff again without first consulting the people possessing the proper expertise.

Oh, and a little bit of knowledge is a very dangerous thing, but we already knew that… :)

https://redd.it/fa9274
@r_devops
Has anyone tested Azure DevOps + WordPress integration?

Hello guys,

I'm currently learning & working on making our WordPress site a DevOps-friendly environment. I've been trying to integrate Azure DevOps with our WordPress site.

Here's what I'm thinking of doing:

I will have to init a Git repository on my Kinsta hosting via SSH.

Then connect the repository to Azure DevOps.
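Sketched as shell, the two steps might look like this (the site path and the Azure DevOps org/project URL are placeholders, not real values from the post):

```shell
# SITE_ROOT stands in for the WordPress root on the Kinsta host.
SITE_ROOT="$(mktemp -d)/wordpress-site"
mkdir -p "$SITE_ROOT"
cd "$SITE_ROOT"
git init
# --allow-empty only because this sketch has no files; on a real site
# use `git add -A` followed by a plain commit instead.
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "Initial import of WordPress site"
# Hypothetical Azure DevOps remote -- replace ORG/PROJECT with your own:
git remote add origin "https://dev.azure.com/ORG/PROJECT/_git/wordpress-site"
# git push -u origin master   # run once the Azure DevOps repo exists
```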


Do you guys have any experience integrating WordPress with Azure DevOps? If yes, what are your suggestions?

I have experience with GitLab. Looking for suggestions from the wiser & more experienced. Thank you.

https://redd.it/fadew4
@r_devops
Unsupported value: “Always”: supported values: “OnFailure”, “Never”

Hi, I am trying to run the following cron job:


    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: my-cjob
      labels:
        job-name: my-cjob
    spec:
      schedule: "*/5 * * * *"
      jobTemplate:
        spec:
          template:
            metadata:
              name: my-cjob
              labels:
                job-name: my-cjob
            spec:
              containers:
                - name: my-cjob
                  image: my-image-name
              restartPolicy: OnFailure


But get the error:

    2020-02-27T14:01:18.7412341Z * spec.jobTemplate.spec.template.spec.containers: Required value
    2020-02-27T14:01:18.7412503Z * spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"

    2020-02-27T14:01:18.7511779Z ##[error]/usr/share/openshift/oc failed with return code: 1
    2020-02-27T14:01:18.7528214Z ##[error]/usr/share/openshift/oc failed with error: /usr/share/openshift/oc failed with return code: 1


Any idea what I am doing wrong?


I've got my inspiration from OpenShift: [https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/developer_guide/dev-guide-cron-jobs](https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html/developer_guide/dev-guide-cron-jobs)

https://redd.it/facsm3
@r_devops
Need some help

Hi, my name is Siddhart, I am 15 years old, and I am from the Netherlands.
I started programming when I was 13 years old. I started with visual scripting but I didn't like that. I jumped straight to Unity and C#. I have been working with Unity and C# for 3 years now. This year I started with Python and learning to program an Arduino. I wanted to ask what the best things for me to learn are, like which languages and which books I should read.
Ty

https://redd.it/facevp
@r_devops
Advice to a new career about devops

I'm new to DevOps, but currently a lead software engineer. I can handle AWS services, have basic knowledge of Ansible and Terraform, and a good understanding of bash scripting too.

What steps can I take to get myself into DevOps smoothly?

Thanks for the answers and advice, appreciate that.

https://redd.it/fac1ft
@r_devops
Recommendations for setting up a web app production environment on AWS with a CI/CD pipeline using Jenkins

I'm new to the DevOps world and I've joined a project where our engineering team is building a small web app. It's currently in development and a dev/QA environment has been set up on EC2 using EBS volumes.

The code for the app will be turned over to the client upon completion of the project, and I've been tasked with providing instructions to the client which would allow them to set up a production environment and CI/CD pipeline.

Relevant information/requirements:

* The repo for the web app is in GitHub
* The web app is made up of 3 components:
  * React/Next.js front-end
  * Neo4j database
  * GraphQL API
* AWS EBS volumes are used for storage
* AWS CloudFront should be used as the CDN
* AWS CloudWatch should be used for monitoring the web app
* Jenkins should be used for the CI/CD pipeline for deployments to the production environment

So far I've been thinking:

* Use Terraform to provision:
  * a VPC
  * the 3 EC2 instances needed for the web app (FE, DB, API)
  * the EBS volumes
  * the CloudFront distribution
  * the CloudWatch logs/metrics/alarms/dashboard
  * a 4th EC2 instance for the Jenkins server
* Configure Jenkins to build and deploy the app upon merges to the master branch
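A minimal Terraform sketch of the instance-provisioning portion of the plan above (region, AMI variable, instance size, and resource names are placeholders, not project values):

```hcl
variable "base_ami" {
  description = "AMI ID to use for all instances (placeholder)"
  type        = string
}

provider "aws" {
  region = "us-east-1" # assumption; use the client's region
}

resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.app.id
  cidr_block = "10.0.1.0/24"
}

# One instance per component, plus the Jenkins box from the plan above
resource "aws_instance" "node" {
  for_each      = toset(["frontend", "neo4j", "graphql-api", "jenkins"])
  ami           = var.base_ami
  instance_type = "t3.medium" # assumption; size to the actual workload
  subnet_id     = aws_subnet.public.id

  tags = {
    Name = each.key
  }
}
```

The EBS volumes, CloudFront distribution, and CloudWatch resources would follow the same pattern as separate resources.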

I just want to confirm whether this is a good general strategy before I get started.

Are there any major things I'm overlooking? Anything you would do differently?

Also, what would your estimate be for how long it would take an experienced DevOps professional to complete this task?

https://redd.it/fayirv
@r_devops
Netdata release v1.20!

Hey all,

Our first major release of 2020 comes with an alpha version of our new **eBPF collector**. eBPF ([extended Berkeley Packet Filter](https://lwn.net/Articles/740157/)) is a virtual bytecode machine, built directly into the Linux kernel, that you can use for advanced monitoring and tracing. Check out the [full release notes](https://github.com/netdata/netdata/releases/tag/v1.20.0) and our [blog post](https://blog.netdata.cloud/posts/release-1.20/) for full details.

With this release, the eBPF collector monitors system calls inside your kernel to help you understand and visualize the behavior of your file descriptors, virtual file system (VFS) actions, and process/thread interactions. You can already use it for debugging applications and better understanding how the Linux kernel handles I/O and process management.

The eBPF collector is in a technical preview, and doesn't come enabled out of the box. If you'd like to learn more about *why* eBPF metrics are such an important addition to Netdata, see our blog post: [*Linux eBPF monitoring with Netdata*](https://blog.netdata.cloud/posts/linux-ebpf-monitoring-netdata/). When you're ready to get started, enable the eBPF collector by following the steps in our [documentation](https://docs.netdata.cloud/collectors/ebpf_process.plugin/).

This release also introduces **host labels**, a powerful new way of organizing your Netdata-monitored systems. Netdata automatically creates a handful of labels for essential information, but you can supplement the defaults by segmenting your systems based on their location, purpose, operating system, or even when they went live.

You can use host labels to create alarms that apply only to systems with specific labels, or apply labels to metrics you archive to other databases with our exporting engine. Because labels are streamed from slave to master systems, you can now find critical information about your entire infrastructure directly from the master system.

Our [host labels tutorial](https://docs.netdata.cloud/docs/tutorials/using-host-labels/) will walk you through creating your first host labels and putting them to use in Netdata's other features.
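For example, custom labels go in a `[host labels]` section of `netdata.conf` (the names and values below are illustrative, not defaults):

```ini
[host labels]
    location = us-east-1
    type = webserver
    environment = production
```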

Finally, we introduced a new **CockroachDB collector**. Because we use CockroachDB internally, we wanted a better way of keeping tabs on the health and performance of our databases. Given how popular CockroachDB is right now, we know we're not alone, and are excited to share this collector with our community. See our [tutorial on monitoring CockroachDB metrics](https://docs.netdata.cloud/docs/tutorials/monitor-cockroachdb/) for set-up details.

We also added a new [**squid access log collector**](https://docs.netdata.cloud/collectors/go.d.plugin/modules/squidlog/#squid-logs-monitoring-with-netdata) that parses and visualizes requests, bandwidth, responses, and much more. Our [**apps.plugin collector**](https://docs.netdata.cloud/collectors/apps.plugin/) has a new and improved way of processing groups together, and our [**cgroups collector**](https://docs.netdata.cloud/collectors/cgroups.plugin/) is better at LXC (Linux container) monitoring.

Speaking of collectors, we **revamped our** [**collectors documentation**](https://docs.netdata.cloud/collectors/) to simplify how users learn about metrics collection. You can now view a [collectors quickstart](https://docs.netdata.cloud/collectors/quickstart/) to learn the process of enabling collectors and monitoring more applications and services with Netdata, and see everything Netdata collects in our [supported collectors list](https://docs.netdata.cloud/collectors/collectors/).

## Breaking Changes

* Removed deprecated bash collectors: `apache`, `cpu_apps`, `cpufreq`, `exim`, `hddtemp`, `load_average`, `mem_apps`, `mysql`, `nginx`, `phpfpm`, `postfix`, `squid`, `tomcat`. If you were still using one of these collectors with custom configurations, you can find the new collector that replaces it in the [supported collectors list](https://docs.netdata.cloud/collectors/collectors/).
* Modified the Netdata updater to prevent unnecessary updates right after installation and to avoid updates via local tarballs [#7939](https://github.com/netdata/netdata/pull/7939). These changes introduced a critical bug to the updater, which was fixed via [#8057](https://github.com/netdata/netdata/pull/8057), [#8076](https://github.com/netdata/netdata/pull/8076), and [#8028](https://github.com/netdata/netdata/pull/8028). **See** [**issue 8056**](https://github.com/netdata/netdata/issues/8056) **if your Netdata is stuck on v1.19.0-432**.

## Improvements

### Host Labels

* Added support for host labels
* Improved the monitored system information detection. Added CPU freq & cores, RAM and disk space
* Started distinguishing the monitored system's (host) OS/kernel etc. from those of the Docker container
* Started creating host labels from collected system info
* Started passing labels and container environment variables via the streaming protocol
* Started sending host labels via exporting connectors
* Added label support to alarm definitions and started recording them in alarm logs
* Added support for host labels to the API responses
* Added configurable host labels to netdata.conf
* Added Kubernetes labels

### New Collectors

* eBPF kernel collector
* CockroachDB
* squidlog: squid access log parser

Check out the [full release notes](https://github.com/netdata/netdata/releases/tag/v1.20.0) and our [blog post](https://blog.netdata.cloud/posts/release-1.20/) for full details!

https://redd.it/faz2kc
@r_devops
How can I ask a company nicely to hurry up with the hiring process?

Company: "As I mentioned, I will review the results and then get back to you, hopefully sometime next week."

That last phrase sounds like such a long time. How should I phrase this?

>Do you know how long the hiring process takes?
>
>I'm expecting a job offer pretty soon from a company and I'd love to get to know more about you and your company, as the job specs of your company match my skills better.

https://redd.it/fauqgj
@r_devops
Bro, do I even devops?

I'm a veteran programmer, working as an embedded "devops" guy in the games industry (indie studio level). I write tools and services that are consumed only by other developers - source code control, build servers, artifact storage, data storage/analytics/visualization, and misc quality-of-life stuff. As a programmer I worked my own way into this field, and I don't know anyone else who carries the title "devops", and I'd actually like to know - is what I do even called devops?

Recently I started looking around for another job and felt really out of my depth. Most of the openings seemed to involve customer-facing cloud services at massive scale, all of them using well-established tools. And here's crazy little me, writing my own servers and services and hand-deploying a mesh of docker containers, all of these things being just easier for me to customize for the bizarre needs that game developers have.

What am I even?

https://redd.it/fb2we8
@r_devops
Configuring nginx with docker-compose

I have a simple app of 3 containers which all run in the same AWS EC2 server. I want to configure Nginx to act as a reverse-proxy serving the same domain however I'm pretty new with Nginx and don't know how to set the conf file correctly.

Here is my docker-compose file:

version: "3"
services:

nginx:
container_name: nginx
image: nginx:latest
ports:
- "80:80"
volumes:
- ./conf/nginx.conf:/etc/nginx/nginx.conf

frontend:
container_name: frontend
image: myfrontend:image
ports:
- "3000:3000"

backend:
container_name: backend
depends_on:
- db
environment:
DB_HOST: db
image: mybackend:image
ports:
- "8400:8400"

db:
container_name: mongodb
environment:
MONGO_INITDB_DATABASE: myDB
image: mongo:latest
ports:
- "27017:27017"
volumes:
- ./initialization/db:/docker-entrypoint-initdb.d
- db-volume:/data/db

volumes:
db-volume:

The backend fetches data from the database and sends it to be presented by the frontend.

Here is what I tried to do with my nginx.conf file (which is obviously wrong):

    events {
        worker_connections 4096;
    }

    http {
        server {
            listen 80;
            listen [::]:80;

            server_name myDomainName.com;

            location / {
                proxy_pass https://frontend:3000/;
                proxy_set_header Host $host;
            }

            location / {
                proxy_pass https://backend:8400/;
                proxy_pass_request_headers on;
            }
        }
    }

Any help would be greatly appreciated. Note: I want all containers to run behind the same domain name.
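For reference, one common single-domain pattern is path-based routing, proxying over plain HTTP inside the Docker network (the `/api/` prefix is an assumption, not something taken from the compose file above):

```nginx
events {
    worker_connections 4096;
}

http {
    server {
        listen 80;
        server_name myDomainName.com;

        # API traffic goes to the backend container, addressed by its
        # compose service name on the shared Docker network
        location /api/ {
            proxy_pass http://backend:8400/;
            proxy_set_header Host $host;
        }

        # Everything else is served by the frontend container
        location / {
            proxy_pass http://frontend:3000/;
            proxy_set_header Host $host;
        }
    }
}
```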

https://redd.it/faxz3q
@r_devops
Getting an SSL error when trying to push my Kafka Message to the Cloud via my python script.

I've followed all of the proper instructions via the Aiven Getting Started Page (I'm using their script as a skeleton) & even their youtube tutorial

[https://www.youtube.com/watch?v=QBFWgvudgaE](https://www.youtube.com/watch?v=QBFWgvudgaE)

[https://help.aiven.io/en/articles/489572-getting-started-with-aiven-kafka](https://help.aiven.io/en/articles/489572-getting-started-with-aiven-kafka)

Here's my code:


    # This script connects to Kafka and sends a few messages
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="kafka-385d27c1-mkramer789-8285.aivencloud.com:29668",
        security_protocol="SSL",
        ssl_cafile="/Users/mike/Desktop/AivenKeys/ca.pem",
        ssl_certfile="/Users/mike/Desktop/AivenKeys/service.cert",
        ssl_keyfile="/Users/mike/Desktop/AivenKeys/client.keystore.p12",
    )

    for i in range(1, 4):
        message = "message number {}".format(i)
        print("Sending: {}".format(message))
        producer.send("demo-topic", message.encode("utf-8"))

    # Force sending of all messages
    producer.flush()

Heres the error:

    Traceback (most recent call last):
      File "aiven_producer.py", line 5, in <module>
        producer = KafkaProducer(
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/producer/kafka.py", line 380, in __init__
        client = KafkaClient(metrics=self._metrics, metric_group_prefix='producer',
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/client_async.py", line 242, in __init__
        self.config['api_version'] = self.check_version(timeout=check_timeout)
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/client_async.py", line 907, in check_version
        version = conn.check_version(timeout=remaining, strict=strict, topics=list(self.config['bootstrap_topics_filter']))
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/conn.py", line 1228, in check_version
        if not self.connect_blocking(timeout_at - time.time()):
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/conn.py", line 337, in connect_blocking
        self.connect()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/conn.py", line 398, in connect
        self._wrap_ssl()
      File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/kafka/conn.py", line 478, in _wrap_ssl
        self._ssl_context.load_cert_chain(
    ssl.SSLError: [SSL] PEM lib (_ssl.c:3965)

https://redd.it/fav8fh
@r_devops
Looking for some job advice. Been working for 3 months and I really haven't done any DevOps work.

This is my first job. Very cool fintech company. I'm on a 6-month contract and I get paid $25/hr. Ever since I got here I've been pulled in every direction doing everything but DevOps. I just got back from a business trip to a major client where I mostly performance-tuned their environments and set them up to scale out. Pretty much by myself I reduced their user wait times by a factor of 10, so I'm coming back with a great success. I had a great time doing this but it wasn't even in the application I work on. After doing similar work for a few other clients I'm now the go-to guy for solving client problems. Currently in my local office I am assigned critical work on almost every project. For example, they're tasking people to move our client-hosted application to Azure to sell on the prod side. They have this big org task chart with tons of business tasks, and in the middle is a single box, "put application on cloud", with my name above it. There are a few other DevOps engineers who work in different countries, but I'm always assigned the work because they don't answer their emails.

The people I work with every day are getting paid $150k-500k a year. I am very confident in my value and know that they do not want to lose me. Is my hourly pay something I can ask to renegotiate mid-contract? I've been keeping track of all the major successes I've accomplished and have a solid portfolio.

I'm not really doing any DevOps work. I've set up Jenkins to do auto deployments for QA, but besides that I'm basically just a problem solver. I do a lot of performance tuning and spend a large amount of time face to face with or calling clients. Should I change my job title? What would I change it to? I actually love working with clients, even the difficult ones.

TLDR
3 months into my first job on a contract. I am the go-to guy for client issues and performance tuning in my company on the North American side and do almost no DevOps work in the project I was hired for. I get paid $25/hr and my desk mates are making $150k. Commonly, project managers from other projects will come and task me with major jobs. Is it crazy to ask to get paid more? Also, should I change my job title to something more fitting for what I've accomplished?

https://redd.it/fasiib
@r_devops
Reduce redis costs in dev

Hi Devops

I work at a very large enterprise in the UK as a platform engineer. I'm very much new to the role and the nuances of the environment.

We currently have a setup that's dev - stage - prod and the way the pipeline is configured is that everything that's deployed to dev is eventually deployed to stage and prod. Infrastructure included. This is to avoid disparity between environments.

The problem this brings with it, though, is that the environments cost a fortune because the infrastructure is always deployed with the code.

Currently we are being stung with huge costs for Redis infra in dev, and it's hard to determine whether or not we can shut off or delete these clusters.

My question is: is there a way to mock Redis in dev only, so that instead of provisioning infrastructure for Redis we just fake it? And save the Redis deployment for staging and prod?
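One way to fake it is to hide the client behind a small factory, sketched here with a hand-rolled in-memory stand-in (the `APP_ENV`/`REDIS_URL` variables are assumptions about how the app is configured):

```python
import os


class FakeRedis:
    """Tiny in-memory stand-in covering only the commands the app uses."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value
        return True

    def get(self, key):
        return self._store.get(key)


def get_cache():
    """Return a fake client in dev, the real one everywhere else."""
    if os.environ.get("APP_ENV", "dev") == "dev":
        return FakeRedis()  # no Redis infrastructure provisioned at all
    import redis  # real dependency only needed outside dev
    return redis.Redis.from_url(os.environ["REDIS_URL"])
```

The fakeredis library is a much more complete drop-in if the app touches a wider slice of the Redis command surface.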

https://redd.it/fb6l8d
@r_devops
How to learn about networking?

Hello all,

I am a non-CS-graduate DevOps engineer, and I see a lot of jobs out there that require quite a bit of networking knowledge. At my current job I do a lot of development and don't really get to work with the network layer. It is a really big gap in my knowledge and I'd like to start learning. Where do I start? What kind of tasks should I be able to do? Can we include networking- and security-related "getting into devops" links in the mega post?

https://redd.it/faqiyo
@r_devops
Any tool chain to manage deployment/ roll out steps?

So in my organization we do rollouts of off-the-shelf tools.

There are 40+ steps which need to be followed. The rollout itself is automated, but the pre and post steps have to be done personally by a team member.

We use (unfortunately) Excel for managing these steps.

I am wondering what kind of tool sets are available to document these rollout steps and manage them (reuse for next releases, review, etc.).

How does your organization manage these?

https://redd.it/fb804g
@r_devops
Heroku to AWS Migration?

Hi, I'm a final-year college student doing some freelance development on the side. I created the backend service for an app with a Golang + Postgres stack that is hosted on my personal Heroku account. How do I migrate it to AWS? Research tells me there are several options available, like:

a) Start an EC2 instance with Postgres and Golang running

b) An EC2 instance with Golang and Amazon RDS running Postgres

c) Dockerize the whole thing and deploy

Which do you think is the best option, or is there anything else altogether that I should know of?
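For option (c), a multi-stage Dockerfile for the Go service might look roughly like this (Go version, build path, and port are assumptions, not details from the project):

```dockerfile
# Build stage: compile a static binary
FROM golang:1.13 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server .

# Runtime stage: small final image
FROM alpine:3.11
RUN apk add --no-cache ca-certificates
COPY --from=build /server /server
ENV PORT=8080
EXPOSE 8080
ENTRYPOINT ["/server"]
```

The Postgres side would then live in RDS (or a separate container), with its connection string passed in via environment variables.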

https://redd.it/faq31p
@r_devops
As a DevOps how do you deal with the management asking you to do Service Desk / Help Desk stuff?

I am seeking advice. New user here, and sorry if I am breaking any rules; feel free to delete this post if so.

Before reading further, I am aware that DevOps is a culture and not a job title, however I will be talking about a job title because that's what the job titles are named these days.

TL;DR: Those of you who work as a DevOps or Ops engineer but, at the company you work for, apart from the usual DevOps tasks are also expected to do the shitty Service Desk/Help Desk tasks (like fixing the laptops of incompetent coworkers, fixing that video call, fixing broken WiFi and broken LAN connectors, and other shitty stuff): how do you deal with it? Also, have you ever tried to change this, or do you just leave?


The long story:

I always make sure to ask these questions during the job interview, before I start working somewhere: *"What is expected of me in this position? What kind of tasks are given to the team? How is success measured?"* The last question gives me insight into how they measure success and whether they have a KPI, which is also another shitty trend.

Thing is, DevOps is really a mixed concept among companies. Usually (most often, actually) a company posts a job as DevOps, but what they really need is a one-man army, where you are expected to do everything: Service Desk work like fixing other people's laptops and broken LAN wall jacks, IT procurement, and then Ops stuff like containers, Kubernetes, Jenkins, sysadmin things like fixing broken Linux servers, administering mail servers, user account access, managing the physical equipment in the datacenter, and whatnot. Now, please do not misinterpret me here. It is not that I complain because of these tasks; the problem arises when ALL of them are given to a single team, and when ALL of them are measured in the KPI equally for all team members.

With my 10 years of experience I have knowledge of many legacy and modern technologies, so I rarely get rejected from a job, and when I get an answer like that, usually I am the one who rejects the position.

In reality, few companies offer a job position that really is DevOps (or Ops); in most cases they ask for a combination of Service Desk/Help Desk and Ops. For me, an IT guy of 10 years, it makes absolutely no difference if someone comes to my desk with his laptop asking me to "please install the printer" or if he comes and asks me to mop the floor in his office.

To summarise: IMO, it is humiliating and disrespectful for a company to ask these kinds of tasks of experienced engineers.

Thus I come to the last part of my story and my current job position. I was referred into this company by a friend of mine (who is also my coworker right now and in the same team). During the interview I knew almost everything they asked me and in the end they gave me a demo task that was totally DevOps oriented. It was really a joy for me to work on it and I completed it even ahead of schedule. The salary they offered was really competitive so I took the offer.

All this pleasant experience was so good at the beginning that I completely forgot to ask the three main questions I mentioned above, so guess what? I landed in the same type of environment that I had been trying so hard to avoid.

What could I do and how to deal with this without leaving? (I have my reasons of why I do not wish to leave at least for another 6 months)

https://redd.it/faauvl
@r_devops
Rancher as a Kubernetes Dashboard

So at work we have Kubernetes clusters being managed by Rancher. A bad experience with Rancher and certificates has left everyone there very sour with it. So there's a plan to migrate our clusters to EKS.

However, there are less technical people in our company that like having a visualization of the clusters that isn't just via kubectl. Heck, even I really like some features that Rancher has, like the "press the + button to deploy more pods" one.

So I was thinking... Would it be possible to have clusters running on EKS, and just having Rancher be like a Kubernetes Dashboard for them? Rancher wouldn't manage any of those clusters, it would only be an interface, kinda like the actual Kubernetes Dashboard.

The thing that is important is that authentication via LDAP is a must, and I found it easier to setup in Rancher than in Kubernetes Dashboard.

Is something like this possible? If yes, how hard/flexible is it to configure?

https://redd.it/fa6unj
@r_devops
Should I modify an AWS DMS task?

I have an AWS DMS task that has been running in CDC mode for some time, and now we need to sync two more tables which were not included initially. Should I stop and modify the current task, or should I create a new task for the new tables?

https://redd.it/fa7db7
@r_devops