Reddit DevOps
Reddit DevOps. #devops
Thanks @reddit2telegram and @r_channels
I've read The DevOps Handbook and The Phoenix Project, but I don't have a way to change the org's practices because of my low rank. What should I do?

The idea seems good but how to apply it?

https://redd.it/13lor87
@r_devops
Awesome Cloud Cost Repository

Based on my previous post and your great tips, I decided to open a repository for awesome cloud cost resources. It can be a place where we share the latest and most curated tips and tricks, better ourselves as engineers, and help each other through our careers.

https://github.com/jatalocks/awesome-cloud-cost

https://redd.it/13lsltj
@r_devops
Branch and merge, improvements since TFS?

I didn't have a very good experience with branching and merging in TFS some years ago. Since then most of my clients have moved to Git; it's worked pretty well and it's what I'm used to.

I've had to use Azure DevOps on my current project, and now that it's time to branch I'm seriously worried.

Should I be?

https://redd.it/13ltsky
@r_devops
What are good options for observability for tiny startup?

I work for a tiny startup (<5 employees) with one SaaS webapp product. Our infra is in AWS and our monthly bill is ~$300, for a sense of scale. I need to set up a way for us to gather and analyze “telemetry”; specifically latencies and failure rates on HTTP endpoints. This is to support engineers supporting customers.

In a previous life at a bigger company I did the whole Ansible, Terraform, Packer thing to provision Grafana + Prometheus, and it worked well enough. I know that stack well enough that I am confident it provides the building blocks I need. I'm worried about the upfront investment, running costs, and opportunity cost considerations.

I could probably replicate such infra for my current employer, but I'm interested in hearing advice from professionals. (I'm more of a jack of all trades, master of none type…)

- I've considered using the PutMetricData CloudWatch API for custom metrics. I'm not convinced it can do everything I need, but I'm happy to hear from someone who has instrumented an app this way. Our logs already go to CloudWatch.
- As mentioned, I can probably set up Grafana + Prometheus & dependencies within a few days, so I consider it a reliable fallback option.
- Datadog? I've never used them, and they've been in the news a lot recently, and not for great reasons. Apparently expensive? Vendor lock-in concerns…
- Which options am I missing?
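On the PutMetricData option: a minimal sketch of how per-endpoint latency and error metrics could be pushed via boto3. The namespace "MyApp" and dimension names are assumptions, not anything from the post; PutMetricData also has per-call batching limits, so a real instrumentation layer would buffer and batch.

```python
# Sketch: pushing per-endpoint latency/error metrics to CloudWatch.
# Assumes AWS credentials are already configured for boto3.

def build_latency_data(endpoint: str, latency_ms: float, failed: bool) -> list:
    """Build PutMetricData MetricData entries for one HTTP request."""
    dims = [{"Name": "Endpoint", "Value": endpoint}]
    return [
        {"MetricName": "Latency", "Dimensions": dims,
         "Unit": "Milliseconds", "Value": latency_ms},
        {"MetricName": "Errors", "Dimensions": dims,
         "Unit": "Count", "Value": 1.0 if failed else 0.0},
    ]

def publish(endpoint: str, latency_ms: float, failed: bool = False) -> None:
    import boto3  # imported lazily so the builder above stays dependency-free
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="MyApp",  # hypothetical namespace
        MetricData=build_latency_data(endpoint, latency_ms, failed),
    )
```

Dashboards and alarms can then query `Latency`/`Errors` grouped by the `Endpoint` dimension, which covers the "latencies and failure rates per endpoint" requirement without extra infrastructure.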

https://redd.it/13luhnt
@r_devops
Monitor - IIS App Pool

Is there any open-source solution to monitor IIS app pools? If not, any thoughts on how to approach this?

Basically I'm looking to be notified on pool crash and shutdown, and to restart pools remotely if required.
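One low-tech approach, sketched here under the assumption that `appcmd.exe` (which ships with IIS) is available on the host: poll pool states, alert on anything not `Started`, and restart it. The alert hook (`print`) is a placeholder for whatever notifier you use.

```python
# Hypothetical sketch: poll IIS app pool states by shelling out to appcmd.exe.
import re
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# appcmd prints lines like:
#   APPPOOL "DefaultAppPool" (MgdVersion:v4.0,MgdMode:Integrated,state:Started)
LINE_RE = re.compile(r'APPPOOL "(?P<name>[^"]+)" \(.*state:(?P<state>\w+)\)')

def parse_pools(appcmd_output: str) -> dict:
    """Map pool name -> state from `appcmd list apppool` output."""
    return {m.group("name"): m.group("state")
            for m in LINE_RE.finditer(appcmd_output)}

def check_and_restart() -> None:
    out = subprocess.run([APPCMD, "list", "apppool"],
                         capture_output=True, text=True, check=True).stdout
    for name, state in parse_pools(out).items():
        if state != "Started":
            print(f"ALERT: pool {name} is {state}, restarting")  # placeholder notifier
            subprocess.run([APPCMD, "start", "apppool", f"/apppool.name:{name}"],
                           check=True)
```

Run it from Task Scheduler every minute or two; for remote restarts, the same script can be invoked over WinRM.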

https://redd.it/13lyq23
@r_devops
Cloudquery, Resoto, Steampipe, or Airbyte?

I have been tasked with gathering data about resources across multiple cloud providers (AWS and Azure primarily). Whatever I use must be open source or at least on-prem.

My first goal is asset management, with a possible need for compliance and generating resource graphs in the future.

I found these 4 tools:

Cloudquery: https://cloudquery.io/

Steampipe: https://steampipe.io/

Airbyte: https://airbyte.com/

Resoto: https://resoto.com/

Any idea which one is best? i.e. most maintained and stable? If I were to choose one of these tools, which one is the least likely to get completely abandoned 1-2 years down the road?

https://redd.it/13m3gjv
@r_devops
What do I need to master in devops?

Okay, so I am a Software engineer with 3 years of work exp.
I have worked in full stack development with react and node at the core.

Also, to deploy the code I work with Droplets in DigitalOcean: opening ports, checking process IDs, and automating tasks using cron jobs in Linux, plus some more in the Linux and networking domain.

So, I know end to end deployment and all.

But what more exactly do I need to learn to become a DevOps engineer? I have used Kubernetes just for setting things up and running some checks via kubectl.

I need some structured concepts to cover in DevOps, so that I can list development + DevOps as a skill on my resume.

Your help will be much appreciated.

https://redd.it/13m4ku4
@r_devops
GitHub Status Checks - Help Please

I am trying to understand GitHub Status checks for a protected branch.

When I try to require status checks, there are no checks to choose from. How do I make a status check?

Do I need GitHub Actions enabled in order to create a status check?

Is there a way to do this without GitHub Actions by using Jenkins?

I am trying to add simple checks, such as:
- Do not allow merge into a branch if the build failed (the build happens in Jenkins).
- Do not allow merge into a branch unless the merge is coming from a particular branch.

I am trying to start small and simple to get a base understanding of how I can have GitHub and Jenkins work together. Eventually I would like to add checks for unit tests passing, etc.
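To answer the question directly: a status check does not require GitHub Actions; it exists as soon as something reports a status against a commit via GitHub's commit status API (`POST /repos/{owner}/{repo}/statuses/{sha}`), and Jenkins's GitHub plugin does exactly that. Once a given `context` has been reported at least once, it appears in the branch-protection "required status checks" picker. A hedged sketch of the raw call (repo names, context, and token handling are placeholders), just to show what a check actually is:

```python
# Sketch: reporting a Jenkins build result as a GitHub commit status.
import json
import urllib.request

def build_status_payload(success: bool, build_url: str) -> dict:
    return {
        "state": "success" if success else "failure",
        "target_url": build_url,          # link back to the Jenkins build
        "description": "Jenkins build",
        "context": "ci/jenkins",          # this name shows up in branch protection
    }

def post_status(owner: str, repo: str, sha: str, token: str,
                success: bool, build_url: str) -> None:
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
        data=json.dumps(build_status_payload(success, build_url)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

The "only merge from a particular branch" rule is not a status check per se, but a small job can post `success`/`failure` to a second context (e.g. `ci/source-branch`) after inspecting the PR, and that context can then be required too.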

https://redd.it/13m3gh8
@r_devops
I'm new to infrastructure as code and I wonder if Ansible or Terraform is the right tool for my purpose

Hello guys, first up I hope this kind of post is allowed on this sub. I've been working on a side project for a while and I'm starting to look into the deployment side of things.

What I would like to achieve is a system where a backend application can trigger provisioning of hardware. From what I've read, Terraform and Ansible both allow for fast, GUI-less cloud provisioning. (Kinda like what Docker Compose is for software.)

But are these tools suitable for creating, for example, a new VM each time a new customer registers?

Basically automated provisioning?
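Yes, this pattern works; Terraform is commonly driven from a backend by shelling out to its CLI. A hedged sketch under assumptions not in the post: one Terraform workspace per customer, and a Terraform config that takes a `customer_id` variable (the `-or-create` flag requires Terraform >= 1.4).

```python
# Sketch: a backend triggering `terraform apply` per customer registration.
import subprocess

def apply_commands(customer_id: str) -> list:
    """Terraform CLI invocations to provision one customer's VM."""
    return [
        # One workspace per customer keeps each customer's state isolated.
        ["terraform", "workspace", "select", "-or-create", customer_id],
        ["terraform", "apply", "-auto-approve",
         f"-var=customer_id={customer_id}"],
    ]

def provision(customer_id: str, workdir: str = "infra/") -> None:
    # `workdir` is a placeholder path to the Terraform configuration.
    for cmd in apply_commands(customer_id):
        subprocess.run(cmd, cwd=workdir, check=True)
```

In practice you would run this from a job queue rather than inside the request handler, since `apply` can take minutes. Ansible fits better afterwards, for configuring software on the VM Terraform created.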

https://redd.it/13m0sny
@r_devops
How do I get real client IP inside docker container for logging to the database

I have the following docker compose file:

    version: "3.8"

    services:
      postgres:
        image: postgres:11
        volumes:
          - myapp_postgres_volume:/var/lib/postgresql/data
          - type: tmpfs
            target: /dev/shm
            tmpfs:
              size: 536870912 # 512MB
        environment:
          POSTGRES_DB: elearning_academy
          POSTGRES_USER: myapp
          POSTGRES_PASSWORD: myapp123
        networks:
          - myapp_network

      pgadmin:
        image: dpage/pgadmin4:5.4
        volumes:
          - myapp_pgadmin_volume:/var/lib/pgadmin
        environment:
          PGADMIN_DEFAULT_EMAIL: [email protected]
          PGADMIN_DEFAULT_PASSWORD: myapp123
        ports:
          - 8080:80
        networks:
          - myapp_network

      redis:
        image: redis:6.2.4
        volumes:
          - myapp_redis_volume:/data
        networks:
          - myapp_network

      wsgi:
        image: wsgi:myapp3
        volumes:
          - /myapp/frontend/static/
          - ./wsgi/myapp:/myapp
          - /myapp/frontend/clientApp/node_modules
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
          - /var/run/docker.sock:/var/run/docker.sock:ro
        depends_on:
          - postgres
          - redis
        ports:
          - 9090
          - 3000:3000
          - 8000:8000
        environment:
          C_FORCE_ROOT: 'true'
          SERVICE_PORTS: 9090
        networks:
          - myapp_network
        deploy:
          replicas: 1
          update_config:
            parallelism: 1
            delay: 10s
          restart_policy:
            condition: on-failure
            max_attempts: 3
            window: 120s

      nodejs:
        image: nodejs:myapp3
        volumes:
          - ./nodejs/frontend:/frontend
          - /frontend/node_modules
        depends_on:
          - wsgi
        ports:
          - 9000:9000 # development
          - 9999:9999 # production
        environment:
          BACKEND_API_URL: https://0.0.0.0:3000
        networks:
          - myapp_network

      nginx:
        image: mydockeraccount/nginx-brotli:1.21.0
        volumes:
          - ./nginx:/etc/nginx/conf.d:ro
          - ./wsgi/myapp:/myapp:ro
          - myapp_nginx_volume:/var/log/nginx/
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
        networks:
          - myapp_network

      haproxy:
        image: haproxy:2.3.9
        volumes:
          - ./haproxy:/usr/local/etc/haproxy/:ro
          - /var/run/docker.sock:/var/run/docker.sock
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
        depends_on:
          - wsgi
          - nodejs
          - nginx
        ports:
          - 9763:80
        networks:
          - myapp_network
        deploy:
          placement:
            constraints: [node.role == manager]

    volumes:
      myapp_postgres_volume:
      myapp_redis_volume:
      myapp_nginx_volume:
      myapp_pgadmin_volume:

    networks:
      myapp_network:
        driver: overlay

As you can see I have a nodejs app and a django (wsgi) app. I have written Django middleware to log the incoming IP to the database. However, it [logs an IP different from the actual client IP](https://stackoverflow.com/questions/76280610/accessing-browser-ip-address-in-django). After reading online, I learned that this might be due to how the Docker network is configured (`overlay`, as can be seen in the last lines of the above docker compose file). I read that I need to configure the Docker network in `host` mode, so I tried adding `network_mode: host` to each service and removing the `networks` sections from the above file:

    version: "3.8"

    services:
      postgres:
        image: postgres:11
        volumes:
          - myapp_postgres_volume:/var/lib/postgresql/data
          - type: tmpfs
            target: /dev/shm
            tmpfs:
              size: 536870912 # 512MB
        environment:
          POSTGRES_DB: elearning_academy
          POSTGRES_USER: myapp
          POSTGRES_PASSWORD: myapp123
        network_mode: host

      pgadmin:
        image: dpage/pgadmin4:5.4
        volumes:
          - myapp_pgadmin_volume:/var/lib/pgadmin
        environment:
          PGADMIN_DEFAULT_EMAIL: [email protected]
          PGADMIN_DEFAULT_PASSWORD: myapp123
        ports:
          - 8080:80
        network_mode: host

      redis:
        image: redis:6.2.4
        volumes:
          - myapp_redis_volume:/data
        network_mode: host

      wsgi:
        image: wsgi:myapp3
        volumes:
          - /myapp/frontend/static/
          - ./wsgi/myapp:/myapp
          - /myapp/frontend/clientApp/node_modules
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
          - /var/run/docker.sock:/var/run/docker.sock:ro
        depends_on:
          - postgres
          - redis
        ports:
          - 9090
          - 3000:3000
          - 8000:8000
        environment:
          C_FORCE_ROOT: 'true'
          SERVICE_PORTS: 9090
        network_mode: host
        deploy:
          replicas: 1
          update_config:
            parallelism: 1
            delay: 10s
          restart_policy:
            condition: on-failure
            max_attempts: 3
            window: 120s

      nodejs:
        image: nodejs:myapp3
        volumes:
          - ./nodejs/frontend:/frontend
          - /frontend/node_modules
        depends_on:
          - wsgi
        ports:
          - 9000:9000 # development
          - 9999:9999 # production
        environment:
          BACKEND_API_URL: https://0.0.0.0:3000
        network_mode: host

      nginx:
        image: mydockeraccount/nginx-brotli:1.21.0
        volumes:
          - ./nginx:/etc/nginx/conf.d:ro
          - ./wsgi/myapp:/myapp:ro
          - myapp_nginx_volume:/var/log/nginx/
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
        network_mode: host

      haproxy:
        image: haproxy:2.3.9
        volumes:
          - ./haproxy:/usr/local/etc/haproxy/:ro
          - /var/run/docker.sock:/var/run/docker.sock
          - /etc/timezone:/etc/timezone:ro
          - /etc/localtime:/etc/localtime:ro
        depends_on:
          - wsgi
          - nodejs
          - nginx
        ports:
          - 9763:80
        network_mode: host
        deploy:
          placement:
            constraints: [node.role == manager]

    volumes:
      myapp_postgres_volume:
      myapp_redis_volume:
      myapp_nginx_volume:
      myapp_pgadmin_volume:

When I run `docker stack deploy -c docker-compose.yml my_stack`, it outputs `Ignoring unsupported options: network_mode.` I tried to check how the network is created:

    $ docker network ls
    NETWORK ID     NAME            DRIVER    SCOPE
    tp6olv2atq06   myapp_default   overlay   swarm

So it ended up being configured in overlay mode, not host mode.

Further reading online revealed that host network mode is not available for swarm started with `docker stack deploy`. (Possible related open issue: [github link](https://github.com/docker/roadmap/issues/157)).

For development, I simply connect VS Code to the nodejs and wsgi containers and run the apps in debug mode, so nginx is not involved during development. However, in deployment we do use nginx, which is also deployed as a Docker container.

Now I have following questions:

**Q1.** Is configuring nginx to set the `X-Real-IP` header with the actual IP the only solution?

**Q2.** Can't I do some Docker config or docker compose changes to obtain the real IP address inside the Docker container (Django in my case), without any changes to nginx? Say, somehow configuring those containers to be on the `host` network?

**Q3.** If I somehow configure nginx to set the `X-Real-IP` header, will it work even during development, given that nginx is not involved then?

**Q4.** The comments on [this](https://serverfault.com/a/1055735/546170) answer seem to suggest that if nginx itself is running as a Docker container, then configuring it to add the `X-Real-IP` header will not work. Is that so?
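Regarding Q1/Q3: having the proxy forward the client address (`proxy_set_header X-Real-IP $remote_addr;` and/or `proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;` in nginx) and reading it in the Django middleware is the standard fix, and it degrades gracefully in development. A sketch of the reading side, as a plain function over a `request.META`-style dict (header names are the standard ones Django exposes, not anything specific to this compose file):

```python
# Sketch: resolve the real client IP from Django's request.META,
# falling back to the socket peer address when no proxy is in front.
def get_client_ip(meta: dict) -> str:
    # X-Forwarded-For may hold a chain "client, proxy1, proxy2"; the
    # left-most entry is the original client.
    forwarded = meta.get("HTTP_X_FORWARDED_FOR")
    if forwarded:
        return forwarded.split(",")[0].strip()
    # X-Real-IP is set by the nginx proxy_set_header directive.
    real_ip = meta.get("HTTP_X_REAL_IP")
    if real_ip:
        return real_ip
    # No proxy headers (e.g. local development): the peer IS the client.
    return meta.get("REMOTE_ADDR", "")
```

The middleware would call `get_client_ip(request.META)` before logging. The fallback answers Q3: without nginx in the path, `REMOTE_ADDR` already holds the real client address. Note that these headers are spoofable by direct clients, so trust them only when the request came through your own proxy.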

https://redd.it/13m9a2n
@r_devops
Preparation materials for interview

I'm pretty new to the field of DevOps, but every role I've had so far has used different tools, and the concepts remain the same throughout.

I have a 3 hour final interview block coming up and I was wondering if you guys have any sources that can help me prepare for DevOps questions?

This job is focused around AWS, Jenkins, Python, and SQL.

https://redd.it/13ma3da
@r_devops
Landed a DevOps role. Was a windows system admin before.

Wanted to know if anyone has any tips and tricks that made them become a better engineer?

I have a strong background on the Windows side, but as for the Linux side of things, I am still learning.

https://redd.it/13mctco
@r_devops
Became a devops engineer with 3 years exp as a windows system admin.

Wanted to know: what things did any of you do that made you become better engineers?

https://redd.it/13md9hn
@r_devops
Options to pull regularly-changing http config to ec2 instances?

I'm in the process of planning a small web setup and I'm wondering how to sync app-generated files.

The system involves a "parent" website in Linode and 3-5 child web instances in EC2. Customers log in to an app on the parent server and make changes to their account. I then generate HTTP config files on the parent server, which are pushed to the EC2 instances (which check and load them). Updates can happen several times per day.

The question is: what is the best way to get the files from the parent server to the web instances? This also has to cover the case when EC2 instances get relaunched.

Ideas I've had:

- The web servers connect to the parent and rsync the files.
- I run something like saltstack/chef/puppet/ansible to sync the files.

Any thoughts on a "standard" way to do this in 2023?

CD doesn't seem right since the files come from a database, not code. Pull seems easier since the EC2 instances are dynamic and firewalled, but push isn't too hard either. I'd prefer something not too complicated since it's only a little system.
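The rsync-pull idea sketched out: a small cron-driven script on each EC2 instance that pulls from the parent and reloads the web server only when something actually changed. The host, paths, and reload commands below are placeholders for your setup; since it runs on boot via cron, relaunched instances converge on their own.

```python
# Sketch: cron-driven config pull with change detection and safe reload.
import subprocess

PARENT = "deploy@parent.example.com:/srv/generated-config/"  # placeholder
DEST = "/etc/nginx/customers/"                               # placeholder

def has_changes(itemize_output: str) -> bool:
    """True if rsync --itemize-changes reported any transferred/deleted file."""
    return bool(itemize_output.strip())

def sync_config() -> bool:
    out = subprocess.run(
        ["rsync", "-az", "--delete", "--itemize-changes", PARENT, DEST],
        capture_output=True, text=True, check=True).stdout
    return has_changes(out)

def main() -> None:
    if sync_config():
        # Validate before reloading so a bad generated file can't take the site down.
        subprocess.run(["nginx", "-t"], check=True)
        subprocess.run(["nginx", "-s", "reload"], check=True)
```

This stays simple (one SSH key, one cron entry), which fits the "little system" constraint better than standing up Salt/Chef/Puppet just to move a few files.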

https://redd.it/13m9f1i
@r_devops
Are there many gamers on devops/devsecops?

I was just wondering… I, for instance, always loved gaming, and as I got older (late 30s) I leaned towards those masochistically hard games.

https://redd.it/13lvi47
@r_devops
How much of your learning and experience is self (personal AWS account) vs on the job?

I have been working in a cloud engineer position for about a year. On-the-job learning has been really slow; I am the only person on my team in the US (the rest are in Europe or India). I am working through A Cloud Guru and Adrian Cantrill's AWS video courses to learn, but so far it feels like learning to drive by watching videos.

How much learning should be on the job, as in with actual projects and engagement with other team members, versus self-learning? At least the good thing about AWS is that you can, in theory, do complicated setups in your own AWS account, provided you have enough money to cover the cost of running the infrastructure.

https://redd.it/13mh5fd
@r_devops