Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
Cloud Foundry Simplified

Often while dealing with networks and services, a big question arises regarding deployment. It becomes a pressing concern when you have the finished product in your hands but no clue how to deploy it and share it with the world.

https://www.p3r.one/cloud-foundry-simplified/

https://redd.it/p1ket7
@r_devops
Any suitable training for Terraform?

I want to go for the Terraform Associate cert as part of the DevOps bootcamp I'm doing. Just wondered what material is best to get started on this? I'm using a DevOps bootcamp by Nana from Techworld, but I'm spending longer than 6 months on it and aiming to get certified in each section along the way (I also have Kubernetes, Azure DevOps, and Jenkins certs planned).

Interested in what material you all would suggest I use to reinforce this knowledge, as I don't want to rely on this bootcamp alone. I'm using the bootcamp as a 'guide' to help transition from Sysadmin, but will add extra training in between. Thanks.

https://redd.it/p1lufg
@r_devops
How can I create a Bitbucket pipeline to deploy a Spring Boot application?

Hello all,

Currently I'm deploying my Spring Boot application manually to an Ubuntu Linux server. I build the jar locally, send it via sftp to the server, and then start it using the java -jar command.

I was clicking around a bit on Bitbucket and I saw that they offer pipelines. Is there a way for me to create a "staging" branch and configure Bitbucket to listen on that branch and, on commit, build the branch, deploy it to my Ubuntu server, and then start it?

I have no idea where or how to start with this.

Thankful for any pointers :)
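For reference, a minimal sketch of what such a bitbucket-pipelines.yml could look like. The branch name, build image, server address, service name, and paths are all placeholder assumptions, and the server's SSH key would need to be configured in the repository settings:

```yaml
# bitbucket-pipelines.yml (illustrative sketch, not a tested config)
pipelines:
  branches:
    staging:                       # triggers on every commit to 'staging'
      - step:
          name: Build jar
          image: maven:3.8-openjdk-11
          caches:
            - maven
          script:
            - mvn -B clean package
          artifacts:
            - target/*.jar         # pass the jar on to the next step
      - step:
          name: Deploy to server
          script:
            # placeholder host and paths; assumes an SSH key is set up in Bitbucket
            - scp target/*.jar deploy@your-server:/opt/myapp/app.jar
            - ssh deploy@your-server 'sudo systemctl restart myapp'
```

Wrapping the java -jar invocation in a systemd unit (the hypothetical myapp service above) also means the app comes back after a reboot instead of being started by hand.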

https://redd.it/p1o0j4
@r_devops
Orchestration layers

Working with Digital.ai XL Release on a software delivery/deployment pipeline. Any experiences or opinions on XL Release (and possible alternatives)? Thanks.

https://redd.it/p1pbtc
@r_devops
Dynamic environments per client, which is the best approach, if any?

Hi, people.


I don't know how to explain this problem better, but I'll try to explain it clearly:
- Where I work, our customers are companies with lots of users
- We offer a SaaS solution for them to manage stuff
- For every customer (there are around 50 right now), we create a vhost on one of our machines and a database on one of our database VMs, and we configure the dotenv files for the application, potentially creating more VMs or database machines (nowadays this happens less frequently)
- The SaaS allows us to configure the same application to serve multiple clients via a multi-tenant setup with multiple environment files

The problem:
The whole process of creating the vhost, configuring things, etc. is manual. We know how to automate this setup with Ansible + Terraform, either by creating application VMs and infrastructure for every customer or by creating an HA environment with every application dotenv on a shared application, and we even have the option to migrate this to k8s. However, the most valuable thing for us would be the ability to add extra customers to whichever solution we choose via API calls or some other automation.

If you're in the k8s world, I think this would be equivalent to adding a new ConfigMap for each customer via some sort of automation, alongside deployments for those customers.


Let me know if the question/problem is not clear. Thank you in advance.
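To make the k8s idea concrete, here is a sketch of the per-customer object (every name and key below is hypothetical) that some automation, such as an API-triggered job or a templating script, would render and kubectl apply per tenant:

```yaml
# configmap-per-tenant.yaml (illustrative sketch; all names are made up)
apiVersion: v1
kind: ConfigMap
metadata:
  name: tenant-acme-env          # one ConfigMap per customer
  labels:
    app: saas-app
    tenant: acme
data:
  DB_HOST: "db-acme.internal"
  DB_NAME: "acme"
  APP_URL: "https://acme.example.com"
```

A Deployment per tenant would then consume this via envFrom, which keeps "add a customer" down to rendering and applying a couple of manifests.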

https://redd.it/p1nip5
@r_devops
First Episode of New DevOps Master Class - Zero Advertising

I normally create a lot of Azure content and have Master Classes about Azure and PowerShell. My new Master Class is for DevOps, and over the next couple of months I'll release a whole set of classes. All on my channel, no adverts of any kind; it's just about helping people learn. There is a playlist and a GitHub repo with the content. Happy learning.

https://youtu.be/YMdtaWfU_QE

https://redd.it/p1qlgg
@r_devops
AWS DevOps Pro learning + cert OR DevOps boot camp + cert ?

16-year IT sysadmin here with 3 newly acquired AWS Associate-level certs. Learning Python currently. I'm interested in heading down the DevOps path.

I've seen mixed reviews about these DevOps boot camps you see advertised by CalTech, LSU, etc. With no real-world DevOps experience to put on my resume other than homelab stuff, I feel as though a well-known college boot camp on my resume might have more impact than an AWS DevOps Pro cert.

I've signed up for KodeKloud, but again, I feel like that might not be as impactful on my resume as a boot camp because the name isn't as flashy as CalTech or LSU, for example. I've gotten a lot of knowledge from Stephane Maarek's Udemy courses and expect the same from the DevOps Pro course, but ultimately it will only be another shiny badge on my resume.

End goal is to get my foot in the door with a Junior / Entry Level DevOps role.

Any insight into this is appreciated!

https://redd.it/p1rw9x
@r_devops
HackerSploit Docker Security Essentials

The HackerSploit: Docker Security Series aims to provide developers, system administrators, and DevOps engineers with the necessary skills to audit, secure, and manage Docker in the context of an organization or in their own personal projects.
https://www.i-programmer.info/news/150-training-a-education/14785-hackersploit-docker-security-essentials.html

https://redd.it/p1unho
@r_devops
MongoDB scaling and speed

Hello there,

Context: I work in a small company where I am the only Ops/DevOps/make-some-magic-so-everything-runs-smoothly person. I use a bit of Ansible and a lot of Docker & Kubernetes (but always for jobs & low-availability needs).

We have one big Mongo that is starting to become huge and is getting slower for some queries. We have more than 100 databases, as we run a multi-tenancy service.

The current setup for the Mongo is: docker-compose running a mongo container on a server, with a DigitalOcean volume for the data. As we approach 500GB of storage and it is a BIG single point of failure, it may be the right moment to move to shards and/or replica sets.

**What is the best way to manage and scale such a setup?** *Keeping in mind we are growing fast, availability and speed are our main focus (not confidentiality).*

1. Use a [sharded cluster](https://github.com/mongodb/mongodb-kubernetes-operator) (via Helm chart, probably) in Kubernetes;
2. Doing it manually with droplets, docker-compose, and the command line;
3. Using Ansible to manage the different servers;
4. others?

*Managed Mongo services are too expensive for the amount of data we have; that's why I don't include them.*

I have some points/concerns such as:

* I am not a database administrator, so I want to keep it as simple as possible.
* Is running a DB on Kubernetes a good idea? (I've read very different opinions online)
* If something goes wrong, the *mean time to recovery* is really important. 1h of downtime during the week is bad but OK. 2h is really bad. Half a day and we could lose clients.

I am curious to have your opinion on this one :)


Edit: With DigitalOcean, ReadWriteMany volumes are not available, only ReadWriteOnce.
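If the replica-set route is taken without Kubernetes, a heavily simplified docker-compose sketch of the topology might look like the following. The image tag, service names, and volumes are assumptions; a real deployment also needs keyfile authentication and the members spread across separate droplets rather than one host:

```yaml
# docker-compose.yml (simplified sketch of a 3-member replica set)
version: "3.8"
services:
  mongo1:
    image: mongo:5.0
    command: mongod --replSet rs0 --bind_ip_all
    volumes:
      - mongo1-data:/data/db
  mongo2:
    image: mongo:5.0
    command: mongod --replSet rs0 --bind_ip_all
    volumes:
      - mongo2-data:/data/db
  mongo3:
    image: mongo:5.0
    command: mongod --replSet rs0 --bind_ip_all
    volumes:
      - mongo3-data:/data/db
volumes:
  mongo1-data:
  mongo2-data:
  mongo3-data:
```

Once the three mongod processes are up, the set is initiated once from a mongo shell with rs.initiate({...}) listing the three members, and clients connect with a replicaSet=rs0 connection string so failover is automatic.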

https://redd.it/p1tzbo
@r_devops
Hiring Mgrs, do you even care about Certs?

There's a ton of talk here about getting certifications as a way to get into the field. Almost a post every day, it seems. Is this really necessary? Would you rather see a list of certs or a resume full of projects showing the candidate can do what's done on the job, maybe with a link to a GitHub repo so scripts, YAML, and code are reviewable? The cert obsession seems like a carryover from the traditional IT industry.

I ask because there are basically no certs in software engineering other than a degree or maybe a bootcamp. So if you don't have experience, you fill your resume with relevant projects, and that is basically how you get in the door.

As an entry-level SWE who came into DevOps, this is basically what I did, and I landed a job where they were looking for some overlap. I have 0 certs.

I don't see much mention of doing projects for your resume, only as a suggestion for learning in general.

So, HMs, what's your perspective?

https://redd.it/p1umde
@r_devops
Issues with setting up networking in Podman for Prometheus Containers

I'm trying to run Prometheus containers with Podman on RHEL8.

My Prometheus container can't see the Prometheus Node Exporter Container and vice versa. My end goal is to get both on the same network, and also get a better understanding of how networking works with Podman.

I'm not able to list any networks from the CLI with the following command, with or without sudo:

sudo podman network ls

At this point, I'm not sure what is wrong. I'm coming from a Docker background where we could create networks on the fly, and I don't seem to have the capability to do that here. I'm not root; I'm an unprivileged user who can run some sudo commands.


[user_a@host_a prometheus]$ podman version
Version:            1.4.2-stable2
RemoteAPI Version:  1
Go Version:         go1.12.8
OS/Arch:            linux/amd64


Let me know if we need more info.
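For comparison, this is how the Docker-style flow would translate to Podman, assuming a newer Podman than the 1.4.2 shown above; `podman network` management arrived in later releases, so upgrading may be the actual fix. The network and container names below are made up:

```shell
# create a shared network and attach both containers to it (names are illustrative)
sudo podman network create monitoring

sudo podman run -d --name node-exporter --network monitoring \
    quay.io/prometheus/node-exporter

sudo podman run -d --name prometheus --network monitoring \
    -p 9090:9090 docker.io/prom/prometheus

# containers on the same network resolve each other by name, so
# prometheus.yml can use 'node-exporter:9100' as a scrape target
```

Note that rootless and rootful Podman manage networks differently, so the same commands without sudo may behave differently.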

thanks in advance

https://redd.it/p1wh3k
@r_devops
How to get a job that uses Cloud if I don't have industry-level experience?

Hey guys,

I've been a DevOps engineer at a major bank for two years and am looking for my next move.

I am in a hard situation where almost every job (SRE/DevOps/Cloud Engineer) that I applied for requires Cloud/K8S experience.

However, my current position doesn't involve these technologies. Maybe in two or three years some cloud technology would be adopted, but that's too late for me. Inside the organization, there are not many opportunities to work with cloud technology.


Do you guys have any suggestions?

How should I plan for my next move?


Some ideas that I can think of:

* Get cloud certifications.
  * A certificate could help in passing HR screening, but in tech interviews, production-level experience is preferred.
* Personal projects deployed on the cloud.
  * These can pass HR screening but don't seem to impress tech interviewers.
* Look for an entry-level cloud engineer position.
  * Since I have almost 4 years of work experience (2 in my current role and 16 months of internships), I would prefer an intermediate/senior-level position.
  * Even if I ask for a referral, I guess I would be stuck with points 1 & 2 again.


Any suggestions would be appreciated!

https://redd.it/p1znrj
@r_devops
Understanding workflow of multi-stage Dockerfile

There are a few processes I'm struggling to wrap my brain around when it comes to multi-stage Dockerfiles.

Using this as an example, I have a couple of questions below it:

# Dockerfile
# Uses multi-stage builds requiring Docker 17.05 or higher
# See https://docs.docker.com/develop/develop-images/multistage-build/

# Creating a python base with shared environment variables
FROM python:3.8.1-slim as python-base
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    POETRY_HOME="/opt/poetry" \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    POETRY_NO_INTERACTION=1 \
    PYSETUP_PATH="/opt/pysetup" \
    VENV_PATH="/opt/pysetup/.venv"

ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"


# builder-base is used to build dependencies
FROM python-base as builder-base
RUN apt-get update \
    && apt-get install --no-install-recommends -y \
    curl \
    build-essential

# Install Poetry - respects $POETRY_VERSION & $POETRY_HOME
ENV POETRY_VERSION=1.0.5
RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python

# We copy our Python requirements here to cache them
# and install only runtime deps using poetry
WORKDIR $PYSETUP_PATH
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev  # respects $POETRY_VIRTUALENVS_IN_PROJECT


# 'development' stage installs all dev deps and can be used to develop code.
# For example using docker-compose to mount local volume under /app
FROM python-base as development
ENV FASTAPI_ENV=development

# Copying poetry and venv into image
COPY --from=builder-base $POETRY_HOME $POETRY_HOME
COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH

# Copying in our entrypoint
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

# venv already has runtime deps installed so we get a quicker install
WORKDIR $PYSETUP_PATH
RUN poetry install

WORKDIR /app
COPY . .

EXPOSE 8000
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD ["uvicorn", "--reload", "--host=0.0.0.0", "--port=8000", "main:app"]


# 'lint' stage runs black and isort
# running in check mode means build will fail if any linting errors occur
FROM development AS lint
RUN black --config ./pyproject.toml --check app tests
RUN isort --settings-path ./pyproject.toml --recursive --check-only
CMD ["tail", "-f", "/dev/null"]


# 'test' stage runs our unit tests with pytest and
# coverage. Build will fail if test coverage is under 95%
FROM development AS test
RUN coverage run --rcfile ./pyproject.toml -m pytest ./tests
RUN coverage report --fail-under 95


# 'production' stage uses the clean 'python-base' stage and copies
# in only our runtime deps that were installed in the 'builder-base'
FROM python-base as production
ENV FASTAPI_ENV=production

COPY --from=builder-base $VENV_PATH $VENV_PATH
COPY ./docker/gunicorn_conf.py /gunicorn_conf.py

COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh

COPY ./app /app
WORKDIR /app

ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD ["gunicorn", "--worker-class", "uvicorn.workers.UvicornWorker", "--config", "/gunicorn_conf.py", "main:app"]

The questions I have:

1. Do you docker build ... this entire image and then just docker run ... --target=<stage> to run a specific stage (development, test, lint, production, etc.), or do you only build and run the specific stages you need (e.g. docker build ... -t test --target=test && docker run test ...)?
2. When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed to refer to these stages in the Dockerfile (devspace hooks or something), or use native test tools in the container (e.g. npm test, ./manage.py test, etc.)?
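On question 1: --target is a docker build flag rather than a docker run flag, so a common pattern (a sketch; the image tags are arbitrary) is one build per stage you need, followed by a plain docker run of the resulting image:

```shell
# build only up to the 'test' stage; the build itself fails if tests fail
docker build --target test -t myapp:test .

# build the final 'production' stage for deployment
docker build --target production -t myapp:prod .

# run the 'development' stage with the source mounted for live reload
docker build --target development -t myapp:dev .
docker run -p 8000:8000 -v "$PWD:/app" myapp:dev
```

Because stages share layers, building test and production back to back reuses the cached builder-base work.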

Thanks for clearing these questions up.

https://redd.it/p1sn27
@r_devops
How do you like to learn new tactics and tools?

I'm investigating ways to improve how systems engineers take on the breadth of their role.

One of the ways I'm looking at this is through learning design.

I'm looking at microlearning as one avenue for time-poor individuals.

So, my question to you is how do you prefer to learn new tactics and tools?

Pick as many types as you find useful.

View Poll

https://redd.it/p1nh49
@r_devops
How to choose the correct PATH to get into DevOps from Ops...

Hello, my friends

After a long journey and many interviews, I have landed a position as a Junior Application Operations engineer (my first Ops role), starting in September. I have only 1 year and 8 months of experience as a Quality Assurance Technician (in the gaming industry, as a game tester). I love improving myself and have a great passion to become a DevOps engineer in the future. I would like to hear advice/tips from DevOps engineers here, especially those who come from an Ops background. What kind of courses should I follow on Udemy, Coursera, or other platforms? Tbh, I'm a Business Management student with a master's degree and no IT background. Right now I am following a Shell/Bash scripting course, and I hope this is a good start. I wouldn't want to miss your advice here. Thanks everyone in advance!

https://redd.it/p17azd
@r_devops
MINIKUBE AND KUBECTL - HELP NEEDED


~ % minikube start
😄 minikube v1.22.0 on Darwin 11.2 (arm64)
Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
🔎 Verifying Kubernetes components...
Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

~ % kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:60186
CoreDNS is running at https://127.0.0.1:60186/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-4c2qc           1/1     Running   1          46m
kube-system   etcd-minikube                      1/1     Running   1          46m
kube-system   kube-apiserver-minikube            1/1     Running   1          46m
kube-system   kube-controller-manager-minikube   1/1     Running   1          46m
kube-system   kube-proxy-wtdcr                   1/1     Running   1          46m
kube-system   kube-scheduler-minikube            1/1     Running   1          46m
kube-system   storage-provisioner                1/1     Running   3          46m


Why don't cluster-info or the namespace listing show the dashboard and KubeDNS?

https://redd.it/p251r3
@r_devops
The Git switch and restore commands arrived in version 2.23. In this article, we will go through these new commands, which are here to make our lives a bit easier. To understand the new switch and restore better, we will look at "checkout" first. Let's start!
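As a quick taste of the two commands, here is a self-contained throwaway demo (requires Git 2.23+; the branch and file names are arbitrary):

```shell
# demo of git switch / git restore in a scratch repo
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

# 'git switch -c' takes over branch creation from 'git checkout -b'
git switch -c fix-typo
branch=$(git branch --show-current)

# 'git restore <file>' takes over discarding edits from 'git checkout -- <file>'
echo "hello" > notes.txt
git add notes.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "add notes"
echo "scratch this" >> notes.txt
git restore notes.txt      # working-tree copy reverted to the committed "hello"
```

Splitting checkout's two jobs (moving between branches, discarding file changes) into two commands makes each one harder to misuse.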

https://www.p3r.one/git-switch-and-restore/

https://redd.it/p274k4
@r_devops