Reddit DevOps
268 subscribers
2 photos
31K links
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Do DevOps people need to be utility players?

*Please read first then put in your 2-cents in response to the question.*

Utility players - people with T-shaped skills - were rare in software teams until recently. DevOps is one of those spaces where management is pushing for utility players.

To my understanding, DevOps implies operations involvement in the whole SDLC to make sure the end product runs well. This means a breadth of systems that people in DevOps must be comfortable with -- essentially requiring them to be utility players.

A utility player is someone who can do several things competently:

* You need to be comfortable with platforms, tools, networks, servers, databases, and even customer support to succeed in today's operations landscape
* Very different from the traditional throw-over-the-wall operational role

Ops of yesteryear

* Ops have traditionally been concerned with stability, so they used to set stringent controls on what kind of code was allowed to run on their systems
* In this environment, they dictated the need for extensive QA in staging, reams of handover documentation, and releases only when necessary
* That luxury no longer exists in many fast moving software environments

And so we move forward to the new equation -- the DevOps equation where operators are sometimes so involved in the software, they are embedded in the sprints.

They make sure that plans are made so that resources are used judiciously, code is executed securely, and quality can be assured. They are doing all the things in the DevOps philosophy to make sure the end product that customers see works as intended.

For this reason, they need to have their finger in many pies -- including a good understanding of software planning and development.

https://redd.it/p1girx
@r_devops
Regression testing platform recommendations?

Hi There!

I work for a large dinosaur corp. Our application comprises a couple million lines of code. We've been switching from SVN to Git and using GitHub Enterprise to develop our CI/CD pipelines.


We also have an independent, in-house-built regression platform that works off batches of jobs and test cases, written predominantly in Perl. We manage a UAT environment of ~2000 servers covering most OSes (even random old stuff like HPIA). Our regression platform/dispatcher basically dispatches jobs to servers based on things like regression classes, OS level, etc. We have our own built-in dashboards to view things like job queues and run reports (basically HTML/SQL pages), and a workers dashboard we use to manage our workers (user IDs on servers): enabled classes, software releases to test, and so on. I'm doing my best to describe this, but I honestly never bothered to FULLY understand the nuances of this couple-decade-old platform.


Does anyone know of any available platforms/software (ideally open source) that would be a good replacement for this? Essentially a job dispatcher with good queuing functionality, worker management, and reporting/dashboarding.


Apologies for the lackluster description and/or any lack of regression/testing knowledge that made this an exceptionally painful read LOL


Edit: Adding some more info about the application. There's not much front-end/API testing to do; it's more function verification, backend stuff, OS-level commands, etc.

https://redd.it/p1itbt
@r_devops
Microservices are social constructs

>We can draw application boundaries in a hundred arbitrarily different ways... There's little [hard] science in how this works, and in many ways these boundaries are drawn primarily by human inter-relationships and politics rather than technical and functional considerations.

In summary, if you want to have more robust services, get better at communicating and collaborating, as well as building up political capital.

https://redd.it/p1jjba
@r_devops
Is there a forum/website for technical collaboration in DevOps?

Is there somewhere like Stack Overflow where DevOps Engineers can collaborate? I'm new to the job and the previous DevOps guy left months before I got there. I'm struggling to get some things running in Docker and could use some advice from folks with experience.

I'd rather not post that stuff on Reddit. I realize I can ask on Stack Overflow but I was hoping for something more geared towards the DevOps position, and not just Software Engineering students looking to get homework answers.

https://redd.it/p1k3dn
@r_devops
Secure and sane setup of new virtual instance on any cloud provider

Hi, community,

I'd like to share my new blog post.

https://medium.com/@alexeysamoshkin/secure-and-sane-setup-of-new-virtual-instance-on-any-cloud-provider-c9c78342ad24

We’ll set up a new virtual instance on a cloud provider, change the default configuration towards more secure and reasonable settings, create a new user, change sshd daemon configuration, harden firewall settings, and more…

Here is the agenda for this blog post:

* How to spin up a new instance on the Linode cloud provider
* Connecting to the remote host as the root user via SSH
* Creating a new dedicated user asamoshkin and adding it to the sudo group, so it can run commands on behalf of the root user using the sudo command
* Generating a new SSH key pair for the asamoshkin user and configuring the remote host to authorize connections by uploading the public key
* Changing the sshd daemon configuration to make it more secure: disable password-based authentication, disable login for the root user, listen on the IPv4 interface only, allow TCP port forwarding (optional)
* Learning a bit of theory about how the iptables firewall works under the hood
* Changing the firewall default policy from ALLOW ALL to DENY ALL, and enabling only inbound traffic to port 22, where the sshd daemon is listening
* Using the Linode Lish console to connect directly to the host as the root user, in case you messed up your configuration and cannot connect via SSH any more
* Saving the iptables rules to a file, in order to restore them later using the iptables-save and iptables-restore commands
* Creating an image from the running Linode instance, deleting the instance to avoid extra charges, and creating a new Linode instance by restoring it from the image
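As a rough sketch of the hardening steps in that agenda, here are the relevant sshd_config settings and a default-deny iptables sequence (the option names come from sshd_config(5); the ordering below is chosen so you don't lock yourself out of an active SSH session):

```shell
# /etc/ssh/sshd_config — settings corresponding to the post's agenda:
#   PasswordAuthentication no   # key-based auth only
#   PermitRootLogin no          # no direct root login
#   AddressFamily inet          # listen on IPv4 only
#   AllowTcpForwarding yes      # optional

# Default-deny firewall, allowing only SSH (run as root).
# Add ACCEPT rules first, flip the policy last:
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP

# Persist the rules so they survive a reboot:
iptables-save > /etc/iptables.rules
# ...and restore them later with:
# iptables-restore < /etc/iptables.rules
```

This is a config-fragment sketch of the same steps, not a substitute for the full walkthrough in the post.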

https://redd.it/p1kz4w
@r_devops
Cloud Foundry Simplified

Often while dealing with networks and services, a big question arises in our heads regarding deployment. It becomes a relevant concern when you have the finished product in your hands but no clue how to deploy it and spread it out to the world.

https://www.p3r.one/cloud-foundry-simplified/

https://redd.it/p1ket7
@r_devops
Any suitable training for Terraform?

I want to go for the Terraform Associate cert as part of the DevOps bootcamp I'm doing. Just wondered what material is best to get started on this? I'm using a DevOps bootcamp by Nana from TechWorld, but I'm spending longer than 6 months and aiming to get certified in each section along the way (I also have Kubernetes, Azure DevOps, and Jenkins certs planned).

Interested in what material you all would suggest to reinforce this knowledge, as I don't want to rely solely on this bootcamp. I'm using the bootcamp as a 'guide' to help me transition from sysadmin, but will add extra training in between. Thanks.

https://redd.it/p1lufg
@r_devops
How can I create a Bitbucket pipeline to deploy a Spring Boot application?

Hello all,

Currently I'm deploying my Spring Boot application manually on an Ubuntu Linux server. I build the jar locally, send it via SFTP to the server, and then start it using the java -jar command.

I was clicking around a bit on Bitbucket and I saw that they offer pipelines. Is there a way for me to create a "staging" branch and configure Bitbucket to listen on that branch and, on commit, build the branch, deploy it to my Ubuntu server, and then start it?

I have no idea where or how to start with this.
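For what it's worth, Bitbucket Pipelines are driven by a bitbucket-pipelines.yml file in the repo root, and a branch-scoped pipeline does roughly what's described above. A sketch under assumptions (the server address, user, jar path, and service name are placeholders; the SSH key would be configured in the repository's Pipelines settings):

```yaml
# bitbucket-pipelines.yml — hypothetical sketch, not a tested config
image: maven:3.8-openjdk-11

pipelines:
  branches:
    staging:                       # runs on every push to the 'staging' branch
      - step:
          name: Build and deploy
          script:
            - mvn -B package                                   # builds target/app.jar
            - scp target/app.jar deploy@your-server:/opt/app/app.jar
            - ssh deploy@your-server 'sudo systemctl restart app'
```

Restarting through systemd (or nohup) rather than running java -jar in the foreground keeps the pipeline step from hanging on a long-lived process.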

Thankful for any pointers :)

https://redd.it/p1o0j4
@r_devops
Orchestration layers

Working with Digital.ai XL Release on a software delivery/deployment pipeline. Any experiences/opinions on XL Release (and possible alternatives)? Thx

https://redd.it/p1pbtc
@r_devops
Dynamic environments per client, which is the best approach, if any?

Hi, people.


I don't know how to explain this problem better, but I'll try to explain it clearly:
* Where I work, our customers are companies with lots of users
* We offer a SaaS solution for them to manage stuff
* For every customer (there are around 50 right now), we create a vhost on one of our machines and a database on some of our database VMs, and we configure the dotenv files for the application, potentially creating more VMs or database machines (nowadays this happens less frequently)
* The SaaS allows us to configure the same application to serve multiple clients via a multi-tenant setup with multiple environment files

The problem:
The whole process of creating the vhost, configuring everything, etc. is manual. We know how to automate this setup with Ansible + Terraform, either by creating application VMs and infrastructure per customer or by creating an HA environment with every application dotenv on a shared application, and we even have the option to migrate this to k8s. However, the most valuable thing for us would be the ability to add extra customers to whichever solution we choose, based on API calls or some sort of automation.

If you're in the k8s world, I think this would be equivalent to adding a new ConfigMap per customer based on some sort of automation, alongside deployments for these customers.
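To sketch that ConfigMap-per-customer idea: whatever automation handles signups only needs to render a small manifest per tenant and apply it. All names here (tenant-acme-env, APP_URL, DB_NAME) are made-up examples of what a dotenv file might carry:

```shell
#!/bin/sh
# Render a per-tenant ConfigMap manifest from a customer name and its
# dotenv-style settings. Naming scheme is hypothetical.
generate_tenant_configmap() {
  tenant="$1"
  app_url="$2"
  cat <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: tenant-${tenant}-env
  labels:
    tenant: ${tenant}
data:
  APP_URL: "${app_url}"
  DB_NAME: "saas_${tenant}"
EOF
}

# Example: print the manifest for a new customer signup
generate_tenant_configmap acme https://acme.example.com
# In real automation you would pipe it straight in:
#   generate_tenant_configmap "$name" "$url" | kubectl apply -f -
```

The same template-and-apply step could just as well be driven by an API endpoint or a CI job; a per-tenant Deployment would be generated the same way.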


Let me know if the question/problem is not clear. Thank you in advance.

https://redd.it/p1nip5
@r_devops
First Episode of New DevOps Master Class - Zero Advertising

I normally create a lot of Azure content and have Master Classes about Azure and PowerShell. My new Master Class is for DevOps, and over the next couple of months I'll release a whole set of classes. All on my channel, no adverts of any kind; it's just about helping people learn. There is a playlist and a GitHub repo of the content. Happy learning.

https://youtu.be/YMdtaWfU_QE

https://redd.it/p1qlgg
@r_devops
AWS DevOps Pro learning + cert OR DevOps boot camp + cert ?

16-year IT sysadmin here with 3 newly acquired AWS Associate-level certs. Currently learning Python. I'm interested in heading down the DevOps path.

I've seen mixed reviews about these DevOps boot camps you see advertised with CalTech, LSU, etc. With no real-world DevOps experience to put on my resume other than homelab stuff, I feel as though listing a well-known college boot camp might have more impact than an AWS DevOps Pro cert.

I've signed up for KodeKloud but again, I feel like that might not be as impactful on my resume as a boot camp experience because the name itself isn't as flashy as CalTech or LSU, for example. I've gotten a lot of knowledge from Stephane Maarek's Udemy courses and expect the same from the DevOps Pro course, but ultimately it will only be another shiny badge on my resume.

End goal is to get my foot in the door with a Junior / Entry Level DevOps role.

Any insight into this is appreciated!

https://redd.it/p1rw9x
@r_devops
HackerSploit Docker Security Essentials

The HackerSploit: Docker Security Series aims to provide developers, system administrators and DevOps engineers the necessary skills to be able to audit, secure and manage Docker in the context of an organization or in their own personal projects.
https://www.i-programmer.info/news/150-training-a-education/14785-hackersploit-docker-security-essentials.html

https://redd.it/p1unho
@r_devops
MongoDB scaling and speed

Hello there,

Context: I work in a small company where I am the only Ops/DevOps/make-some-magic-so-everything-runs-smoothly person. I use a bit of Ansible and a lot of Docker & Kubernetes (but always for jobs & low-availability needs).

We have one big Mongo that is starting to become huge and get slower for some queries. We have more than 100 databases, as we run a multi-tenant service.

The current setup for the Mongo is: Docker Compose running a mongo container on a server, with a DO volume for the data. As we approach 500GB of storage and it is a BIG single point of failure, this may be the best moment to use shards and/or replica sets.

**What is the best way to manage and scale such a setup?** *Keeping in mind we are growing fast, and availability and speed are our main focus (not confidentiality).*

1. Use a [sharded cluster](https://github.com/mongodb/mongodb-kubernetes-operator) (via a Helm chart, probably) in Kubernetes;
2. Doing it manually with droplets, docker-compose, and command line;
3. Using Ansible to manage the different servers;
4. others?

*Managed mongo services are too expensive for the amount of data we use, that's why I don't include them.*
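For a sense of what option 2 involves at its smallest, a three-member replica set can be brought up by hand roughly like this (container names, the network name, and the mongo:5 image tag are illustrative; a production setup additionally needs auth/keyfiles and persistent volumes):

```shell
# All members need to resolve each other, so put them on one network
docker network create mongonet

# Start three mongod instances that belong to replica set "rs0"
docker run -d --name mongo1 --network mongonet mongo:5 --replSet rs0
docker run -d --name mongo2 --network mongonet mongo:5 --replSet rs0
docker run -d --name mongo3 --network mongonet mongo:5 --replSet rs0

# Initiate the replica set once, from any member
docker exec mongo1 mongosh --eval '
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongo1:27017" },
      { _id: 1, host: "mongo2:27017" },
      { _id: 2, host: "mongo3:27017" }
    ]
  })'
```

A replica set alone addresses the single-point-of-failure and recovery-time concerns; sharding (options 1/3) only becomes necessary when a single primary can no longer handle the write or storage load.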

I have some points/concerns such as:

* I am not a Database administrator, I want to keep it as simple as possible.
* Is running a DB on Kubernetes a good idea? (I've read very different opinions online)
* If something goes wrong, the *mean time to recovery* is really important. 1h of downtime during the week is bad but OK. 2h is really bad. Half a day and we could lose clients.

I am curious to have your opinion on this one :)


Edit: With DigitalOcean, ReadWriteMany volumes are not available, only ReadWriteOnce.

https://redd.it/p1tzbo
@r_devops
Hiring Mgrs, do you even care about Certs?

There's a ton of talk here about getting certifications as a way to get into the field. Almost a post every day, it seems. Is this really necessary? Would you rather see a list of certs, or a resume full of projects showing the candidate can do what's done on the job, maybe with a link to a GitHub repo so the scripts, YAML, and code are reviewable? The cert obsession seems like a carryover from the traditional IT industry.

I ask because there are basically no certs in software engineering other than a degree or maybe a bootcamp. So if you don't have experience, you fill your resume with relevant projects, and that is basically how you get in the door.

As an entry-level SWE who came into DevOps, this is basically what I did, and I landed a job where they were looking for some overlap. I have 0 certs.

I don't see a lot of mention of doing projects for your resume, just as a suggestion for learning in general.

So HMs, what's your perspective?

https://redd.it/p1umde
@r_devops
Issues with setting up networking in Podman for Prometheus Containers

I'm trying to run Prometheus containers with Podman on RHEL8.

My Prometheus container can't see the Prometheus Node Exporter Container and vice versa. My end goal is to get both on the same network, and also get a better understanding of how networking works with Podman.

I'm not able to list any networks from the CLI with the following command, either with or without sudo:

sudo podman network ls

At this point, I'm not sure what is wrong. I'm coming from a Docker background where we could create networks on the fly, and I don't seem to have the capability to do that. I'm not root; I'm an unprivileged user who can run some sudo commands.
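For reference, on a host where the podman CLI supports named networks, the Docker-style flow looks roughly like this (network and container names are placeholders; note that Podman as old as 1.4.x predates much of this and has very limited rootless networking, which may be the actual problem here):

```shell
# On newer Podman: create a named network and attach both containers (rootful)
sudo podman network create monitoring
sudo podman run -d --name node-exporter --network monitoring \
    quay.io/prometheus/node-exporter
sudo podman run -d --name prometheus --network monitoring \
    -p 9090:9090 quay.io/prometheus/prometheus
# Containers on the same network reach each other by name, e.g. a
# scrape target of node-exporter:9100 in prometheus.yml.

# An alternative that works on older/rootless Podman: share one network
# namespace by putting both containers in a pod
podman pod create --name prom-pod -p 9090:9090
podman run -d --pod prom-pod quay.io/prometheus/node-exporter
podman run -d --pod prom-pod quay.io/prometheus/prometheus
# Inside a pod, containers talk over localhost (node exporter on 127.0.0.1:9100).
```

Given the 1.4.2 version shown below, the pod approach (or upgrading Podman) is the more likely path than `podman network create`.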


    [user_a@host_a prometheus]$ podman version
    Version:            1.4.2-stable2
    RemoteAPI Version:  1
    Go Version:         go1.12.8
    OS/Arch:            linux/amd64


Let me know if we need more info.

thanks in advance

https://redd.it/p1wh3k
@r_devops
How to get a job that uses Cloud if I don't have industry-level experience?

Hey guys,

I've been a DevOps engineer at a major bank for two years and am looking for my next move.

I am in a tough situation where almost every job (SRE/DevOps/Cloud Engineer) I apply for requires Cloud/K8s experience.

However, my current position doesn't involve these technologies. Maybe in two or three years some cloud technology will be adopted, but that's too late for me. Inside the organization, there are not that many opportunities to work with cloud technology.


Do you guys have any suggestions?

How should I plan for my next move?


Some ideas that I can think of:

* Get Cloud Certifications.
    * A certificate could help with passing HR screening, but in tech interviews, production-level experience is preferred.
* Personal projects deployed on Cloud.
    * These can pass HR screening but don't seem to impress tech interviewers.
* Look for an entry-level cloud engineer position.
    * Since I have almost 4 years of work experience (2 in my current role and 16 months of internship), I would prefer an intermediate/senior-level position.
    * Even if I ask for a referral, I guess I would be stuck at points 1 & 2 again.


Any suggestions would be appreciated!

https://redd.it/p1znrj
@r_devops
Understanding workflow of multi-stage Dockerfile

There are a few processes I'm struggling to wrap my brain around when it comes to multi-stage Dockerfiles.

Using this as an example, I have a couple of questions below it:

# Dockerfile
# Uses multi-stage builds requiring Docker 17.05 or higher
# See https://docs.docker.com/develop/develop-images/multistage-build/

# Creating a python base with shared environment variables
FROM python:3.8.1-slim as python-base
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    POETRY_HOME="/opt/poetry" \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    POETRY_NO_INTERACTION=1 \
    PYSETUP_PATH="/opt/pysetup" \
    VENV_PATH="/opt/pysetup/.venv"

ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"


# builder-base is used to build dependencies
FROM python-base as builder-base
RUN apt-get update \
    && apt-get install --no-install-recommends -y \
        curl \
        build-essential

# Install Poetry - respects $POETRY_VERSION & $POETRY_HOME
ENV POETRY_VERSION=1.0.5
RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python

# We copy our Python requirements here to cache them
# and install only runtime deps using poetry
WORKDIR $PYSETUP_PATH
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev


# 'development' stage installs all dev deps and can be used to develop code.
# For example using docker-compose to mount local volume under /app
FROM python-base as development
ENV FASTAPI_ENV=development

# Copying poetry and venv into image
COPY --from=builder-base $POETRY_HOME $POETRY_HOME
COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH

# Copying in our entrypoint
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

# venv already has runtime deps installed so we get a quicker install
WORKDIR $PYSETUP_PATH
RUN poetry install

WORKDIR /app
COPY . .

EXPOSE 8000
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD ["uvicorn", "--reload", "--host=0.0.0.0", "--port=8000", "main:app"]


# 'lint' stage runs black and isort
# running in check mode means build will fail if any linting errors occur
FROM development AS lint
RUN black --config ./pyproject.toml --check app tests
RUN isort --settings-path ./pyproject.toml --recursive --check-only
CMD ["tail", "-f", "/dev/null"]


# 'test' stage runs our unit tests with pytest and
# coverage. Build will fail if test coverage is under 95%
FROM development AS test
RUN coverage run --rcfile ./pyproject.toml -m pytest ./tests
RUN coverage report --fail-under 95


# 'production' stage uses the clean 'python-base' stage and copies
# in only our runtime deps that were installed in the 'builder-base'
FROM python-base as production
ENV FASTAPI_ENV=production

COPY --from=builder-base $VENV_PATH $VENV_PATH
COPY ./docker/gunicorn_conf.py /gunicorn_conf.py

COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

COPY ./app /app
WORKDIR /app

ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD ["gunicorn", "--worker-class", "uvicorn.workers.UvicornWorker", "--config", "/gunicorn_conf.py", "main:app"]

The questions I have:

1. Are you running docker build ... for this entire image and then just docker run ... --target=<stage> to run a specific stage (development, test, lint, production, etc.), or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test && docker run test ...)?
2. When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed to be referring to these stages
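On question 1, one note: --target is a flag of docker build, not docker run, so the usual pattern is one build per stage you want to run, each tagged separately (the image tags below are placeholders):

```shell
# Build only up to a given stage; earlier stages it depends on are
# built as needed and cached between invocations.
docker build --target test -t myapp:test .
docker run --rm myapp:test            # runs the 'test' stage's CMD

docker build --target production -t myapp:prod .
docker run --rm -p 8000:8000 myapp:prod
```

With no --target, docker build produces only the final stage (production here); the lint and test stages are simply skipped unless requested.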