Reddit DevOps
GitHub Status Checks - Help Please

I am trying to understand GitHub Status checks for a protected branch.

When I try to require status checks, there are no checks to choose from. How do I make a status check?

Do I need GitHub Actions enabled in order to create a status check?

Is there a way to do this without GitHub Actions by using Jenkins?

I am trying to add simple checks, such as:
- Do not allow a merge into the branch if the build failed (the build happens in Jenkins).
- Do not allow a merge into the branch unless the merge is coming from a particular branch.

I am trying to start small and simple to get a base understanding of how I can have GitHub and Jenkins work together. Eventually I would like to add checks for unit tests passing, etc.
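You don't need GitHub Actions. A "status check" is just a commit status (or check run) that some external system reports via the GitHub REST API; Jenkins can do this via the GitHub plugin, or with a plain API call from the job. A minimal sketch using the commit statuses endpoint, where OWNER/REPO, `$SHA`, and `$GITHUB_TOKEN` are placeholders and the actual curl is left commented so nothing is sent:

```shell
# Build the status payload Jenkins would report after a build. The "context"
# is the name that later shows up in the branch-protection "require status
# checks" list, once GitHub has seen it reported at least once.
PAYLOAD='{"state": "success", "context": "jenkins/build", "description": "Build passed"}'
echo "$PAYLOAD"

# Actual call (requires a token with repo:status scope):
# curl -s -X POST \
#   -H "Authorization: Bearer $GITHUB_TOKEN" \
#   -H "Accept: application/vnd.github+json" \
#   "https://api.github.com/repos/OWNER/REPO/statuses/$SHA" \
#   -d "$PAYLOAD"
```

Note that your second rule (only allow merges coming from a particular branch) isn't a built-in branch protection rule; the usual workaround is a small CI job that inspects the PR's source branch and reports success or failure as its own status.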

https://redd.it/13m3gh8
@r_devops
I'm new to infrastructure as code and I wonder if Ansible or Terraform is the right tool for my purpose

Hello guys, first up, I hope this kind of post is allowed on this sub. I've been working on a side project for a while and I'm starting to look into the deployment side of things.

What I would like to achieve is a system where a backend application can trigger provisioning of hardware. From what I've read, Terraform and Ansible both allow for fast, GUI-less cloud provisioning (kinda like what Docker Compose is for software).

But are these tools suitable for creating, for example, a new VM each time a new customer registers?

Basically automated provisioning?
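Yes, this pattern is common, and Terraform in particular fits: since it's declarative, "one VM per customer" is usually modeled as a workspace (or module instance) per customer, with the backend triggering a run on registration. A rough sketch of what the backend would invoke — paths, variable names, and the workspace scheme are all hypothetical, and the commands are echoed rather than run:

```shell
# One isolated Terraform state per customer via workspaces; the backend (or a
# job-queue worker it enqueues to) runs this when a customer registers.
CUSTOMER_ID="acme"   # would come from the registration event
echo terraform -chdir=infra workspace new "$CUSTOMER_ID"
echo terraform -chdir=infra apply -auto-approve -var "customer_id=$CUSTOMER_ID"
# (drop the echos to execute for real)
```

Ansible, by contrast, shines at configuring the VM once it exists; many setups use Terraform to create the instance and Ansible to configure it.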

https://redd.it/13m0sny
@r_devops
How do I get real client IP inside docker container for logging to the database

I have the following Docker Compose file:

version: "3.8"

services:
  postgres:
    image: postgres:11
    volumes:
      - myapp_postgres_volume:/var/lib/postgresql/data
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 536870912 # 512MB
    environment:
      POSTGRES_DB: elearning_academy
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp123
    networks:
      - myapp_network

  pgadmin:
    image: dpage/pgadmin4:5.4
    volumes:
      - myapp_pgadmin_volume:/var/lib/pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: myapp123
    ports:
      - 8080:80
    networks:
      - myapp_network

  redis:
    image: redis:6.2.4
    volumes:
      - myapp_redis_volume:/data
    networks:
      - myapp_network

  wsgi:
    image: wsgi:myapp3
    volumes:
      - /myapp/frontend/static/
      - ./wsgi/myapp:/myapp
      - /myapp/frontend/clientApp/node_modules
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - postgres
      - redis
    ports:
      - 9090
      - 3000:3000
      - 8000:8000
    environment:
      C_FORCE_ROOT: 'true'
      SERVICE_PORTS: 9090
    networks:
      - myapp_network
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s

  nodejs:
    image: nodejs:myapp3
    volumes:
      - ./nodejs/frontend:/frontend
      - /frontend/node_modules
    depends_on:
      - wsgi
    ports:
      - 9000:9000 # development
      - 9999:9999 # production
    environment:
      BACKEND_API_URL: https://0.0.0.0:3000
    networks:
      - myapp_network

  nginx:
    image: mydockeraccount/nginx-brotli:1.21.0
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - ./wsgi/myapp:/myapp:ro
      - myapp_nginx_volume:/var/log/nginx/
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - myapp_network

  haproxy:
    image: haproxy:2.3.9
    volumes:
      - ./haproxy:/usr/local/etc/haproxy/:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wsgi
      - nodejs
      - nginx
    ports:
      - 9763:80
    networks:
      - myapp_network
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  myapp_postgres_volume:
  myapp_redis_volume:
  myapp_nginx_volume:
  myapp_pgadmin_volume:

networks:
  myapp_network:
    driver: overlay

As you can see, I have a nodejs app and a django (wsgi) app. I have written Django middleware to log the incoming IP to the database. However, it [logs an IP different from the actual IP](https://stackoverflow.com/questions/76280610/accessing-browser-ip-address-in-django). After reading online, I came to understand that this might be due to how the Docker network is configured (`overlay`, as can be seen in the last line of the above Docker Compose file). I read that I need to configure the Docker network in `host` mode, so I tried adding `network_mode: host` to each service and removing the `networks` section from the file.

version: "3.8"

services:
  postgres:
    image: postgres:11
    volumes:
      - myapp_postgres_volume:/var/lib/postgresql/data
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 536870912 # 512MB
    environment:
      POSTGRES_DB: elearning_academy
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp123
    network_mode: host

  pgadmin:
    image: dpage/pgadmin4:5.4
    volumes:
      - myapp_pgadmin_volume:/var/lib/pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: myapp123
    ports:
      - 8080:80
    network_mode: host

  redis:
    image: redis:6.2.4
    volumes:
      - myapp_redis_volume:/data
    network_mode: host

  wsgi:
    image: wsgi:myapp3
    volumes:
      - /myapp/frontend/static/
      - ./wsgi/myapp:/myapp
      - /myapp/frontend/clientApp/node_modules
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - postgres
      - redis
    ports:
      - 9090
      - 3000:3000
      - 8000:8000
    environment:
      C_FORCE_ROOT: 'true'
      SERVICE_PORTS: 9090
    network_mode: host
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s

  nodejs:
    image: nodejs:myapp3
    volumes:
      - ./nodejs/frontend:/frontend
      - /frontend/node_modules
    depends_on:
      - wsgi
    ports:
      - 9000:9000 # development
      - 9999:9999 # production
    environment:
      BACKEND_API_URL: https://0.0.0.0:3000
    network_mode: host

  nginx:
    image: mydockeraccount/nginx-brotli:1.21.0
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - ./wsgi/myapp:/myapp:ro
      - myapp_nginx_volume:/var/log/nginx/
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    network_mode: host

  haproxy:
    image: haproxy:2.3.9
    volumes:
      - ./haproxy:/usr/local/etc/haproxy/:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wsgi
      - nodejs
      - nginx
    ports:
      - 9763:80
    network_mode: host
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  myapp_postgres_volume:
  myapp_redis_volume:
  myapp_nginx_volume:
  myapp_pgadmin_volume:

When I run `docker stack deploy -c docker-compose.yml my_stack`, it outputs `Ignoring unsupported options: network_mode.` I tried to check how the network is created:

$ docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
tp6olv2atq06   myapp_default   overlay   swarm

So it ended up being configured in overlay mode only, and not in host mode.

Further reading online revealed that host network mode is not available for swarm started with `docker stack deploy`. (Possible related open issue: [github link](https://github.com/docker/roadmap/issues/157)).

For development, I simply connect VS Code to the nodejs and wsgi containers and run the apps in debug mode, so nginx is not involved during development. In deployment, however, we do use nginx, which is also deployed as a Docker container.

Now I have following questions:

**Q1.** Is configuring nginx to set the `X-Real-IP` header to the actual IP the only solution?

**Q2.** Can't I obtain the real IP address inside the Docker container (Django, in my case) through some Docker config or Docker Compose changes, without making any changes to nginx? Say, by somehow configuring those containers to be on the `host` network?

**Q3.** If I do somehow configure nginx to set the `X-Real-IP` header, will it also work during development, given that nginx is not involved then?

**Q4.** The comments on [this](https://serverfault.com/a/1055735/546170) answer seem to suggest that if nginx itself is running as a Docker container, then configuring it to add the `X-Real-IP` header will not work. Is that so?
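On Q1/Q2: with `docker stack deploy`, forwarding the client address through the proxy chain is the usual (and essentially only portable) approach. A minimal sketch of the nginx side — the location and upstream name are hypothetical, and note that with haproxy in front, nginx's `$remote_addr` is haproxy's address, so haproxy must pass the original IP along too (e.g. with `option forwardfor` or the PROXY protocol):

```nginx
# Hypothetical location/upstream; the two proxy_set_header lines are the point.
location / {
    # $remote_addr here is whatever directly connected to nginx (haproxy, in
    # this stack), so the first hop must forward the real client address.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://wsgi_upstream;
}
```

On the Django side, the middleware would then read `HTTP_X_REAL_IP` (or the first entry of `HTTP_X_FORWARDED_FOR`) from `request.META` instead of `REMOTE_ADDR`.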

https://redd.it/13m9a2n
@r_devops
Preparation materials for interview

I'm pretty new to the field of DevOps; every role I've had so far has used different tools, but the concepts remain the same throughout.

I have a 3-hour final interview block coming up and I was wondering if you guys have any sources that can help me prepare for DevOps questions?

This job is focused around AWS, Jenkins, Python, and SQL.

https://redd.it/13ma3da
@r_devops
Landed a DevOps role. Was a windows system admin before.

Wanted to know if anyone has any tips and tricks that made them become a better engineer?

I have a strong background on the Windows side, but on the Linux side of things I am still learning.

https://redd.it/13mctco
@r_devops
Became a devops engineer with 3 years exp as a windows system admin.

Wanted to know: what things did any of you do that made you become better engineers?

https://redd.it/13md9hn
@r_devops
Options to pull regularly-changing http config to ec2 instances?

I'm in the process of planning a small web setup and I'm wondering how to sync app-generated files.

The system involves a "parent" website on Linode and 3-5 child web instances on EC2. Customers log in to an app on the parent server and make changes to their account. I then generate HTTP config files on the parent server, which are pushed to the EC2 instances (which check and load them). Updates can happen several times per day.

The question is what is the best way to get the files from the parent server to the web instances? This also has to cover when ec2 instances get relaunched.

Ideas I've had:

- The web servers connect to the parent and rsync the files.
- I run something like SaltStack/Chef/Puppet/Ansible to sync the files.

Any thoughts on a "standard" way to do this in 2023?

CD doesn't seem right since the files come from the database, not code. Pull seems easier since the EC2 instances are dynamic and firewalled, but push isn't too hard either. I'd prefer something not too complicated since it's only a small system.
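For the pull variant, plain rsync over SSH from cron (or a systemd timer) on each instance is still a perfectly standard answer in 2023, and it survives instance relaunches as long as the cron job is baked into the AMI or user data. A sketch with hypothetical hosts and paths; the command is echoed rather than executed:

```shell
# Each web instance pulls the generated config from the parent and reloads
# only when something actually changed (--itemize-changes prints one line
# per transferred file, so non-empty output means "reload").
SRC='deploy@parent.example.com:/srv/generated-config/'
DST='/etc/httpd/conf.d/customers/'
CMD="rsync -az --delete --itemize-changes $SRC $DST"
echo "$CMD"
# CHANGED=$($CMD) && [ -n "$CHANGED" ] && apachectl -k graceful
```

The `--delete` flag keeps removed customer configs in sync as well, which matters when accounts are deleted.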

https://redd.it/13m9f1i
@r_devops
Are there many gamers on devops/devsecops?

I was just wondering... I, for instance, always loved gaming, and as I got older (late 30s) I leaned towards those masochistically hard games.

https://redd.it/13lvi47
@r_devops
How much of your learning and experience is self (personal AWS account) vs on the job?

I have been working in a cloud engineer position for about a year. On-the-job learning has been really slow; I am the only person on my team in the US (the rest are in Europe or India). I am working through A Cloud Guru and Adrian Cantrill's AWS video courses to learn, but so far it feels like learning to drive by watching videos.

How much learning should be on the job, as in with actual projects and engagement with other team members, versus self-learning? At least the good thing about AWS is that you can, in theory, do complicated setups in your own AWS account, provided you have enough money to cover the cost of running the infrastructure.

https://redd.it/13mh5fd
@r_devops
Hi, can anyone please help me? My Postgres Docker container is not running


When I do `sudo docker-compose logs`, it just outputs "Attaching to pg" (pg is the name I gave to the Postgres container).

Here is my docker compose file : https://pastebin.com/as3FFHm2

Here is my sql file : https://pastebin.com/J5qdsC85

https://redd.it/13mixob
@r_devops
Secretless Self-Hosted Github Actions Runners for Azure...possible?

Recently got into a debate with a co-worker regarding this concept.

I raised that for most applications in the Azure space since runners are event-driven, you could leverage a managed identity on the VM or Containerized runner, and not need to use secrets at all for most general workloads.

Examples of general workload would be:

- Deploying infrastructure via ARM
- Working with the Az CLI
- Executing Az PowerShell scripts
- Working with Graph

This was met with a lot more resistance than I thought. Am I off base here? Is this really harder than I think it is?

I really see no reason why you can't just grab a token based on the MI, then use that token to act against anything you need, so long as you give the MI the correct permissions/access.
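For what it's worth, the secretless flow described really is just two CLI calls on the runner, assuming the VM or containerized runner has a (system- or user-assigned) managed identity with the right RBAC assignments. The commands below are echoed rather than executed, since they only work on such a host:

```shell
# On a runner with a managed identity, no stored credentials are needed:
# az obtains tokens from the Azure instance metadata endpoint.
RESOURCE="https://management.azure.com/"
echo az login --identity
echo az account get-access-token --resource "$RESOURCE"
# The returned token covers ARM deployments and Az CLI/PowerShell work;
# for Microsoft Graph you would request a token for the Graph resource instead.
```

The token is scoped to whatever roles the identity holds, which is exactly the "no secrets for general workloads" argument.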

https://redd.it/13mj5w3
@r_devops
Just took the google professional DevOps engineer certification. The exam questions have completely changed. Please help?

The questions were completely different, except maybe 5-6, from when I last took the exam on April 28. Does anyone know where I need to look to find the actual updated exam questions and content? Any help would be much appreciated.

https://redd.it/13mif3k
@r_devops
Terraform and ansible or only ansible?

Not sure what tools to use. We create VMware templates with Packer. Then, to create VMs from those templates, should we use Ansible or Terraform?

https://redd.it/13mm6cx
@r_devops
Bootcamp

Hey guys, any free bootcamps or courses covering different topics like Kubernetes, Agile, etc.? I'm totally new to DevOps, but not new to programming (Python and JavaScript).

https://redd.it/13lr6s3
@r_devops
How do you WORK WITH testing teams?

We have developers working on Jira tickets: approximately 5 developers and 3 testers. Here's our current flow: developers create a pull request that addresses a Jira ticket, from a feature branch to the main branch (Jira goes to "In Review"). Automated testing is run against the branch. Another developer reviews and then approves, and the branch is merged to main. Automated testing is run against the main branch and a developer approves deployment to a "dev" environment (Jira goes to "In Test").

A tester does some manual/semi-automated testing against the dev environment (updating automated tests if required). If the tests pass, the "main" branch is approved to be deployed to a staging environment (Jira goes to "In Staging"). Then more verification happens, and if that passes, main is deployed to a production environment (the Jira ticket goes to "In Prod").

The problem I've seen is that it takes a long time for a ticket to go from initial development to production. And it's unclear i) who is responsible for a ticket and ii) when a ticket is really done. We have many situations where when there's a failure found by a testing team we can't pinpoint the ticket/branch that caused the failure, so all are blamed. Similarly we have situations where developers are held back from merging PRs because "dev is broken" or "we're still doing testing".

My instinct is to declare a ticket done when it is merged and auto-deployed to the dev environment. This decouples responsibility/ownership. Any findings from testers are caught and raised as new tickets. Is this a sensible approach? How do you work with testing teams?

(Not sure if this is the right sub, but seems like the closest I could find)

https://redd.it/13mxhqh
@r_devops
Fairly new to DevOps, I'm looking for feedback and ways to improve our CI/CD pipeline

Hello everyone, I hope you're having a good day.


As the title states, I'm seeking feedback for our (startup) CI/CD pipeline and deployment process.


Currently, we have four different services running as dockerized containers:

- Next.js frontend (React frontend + API)
- Node.js backend (connects to external services, writes to the database)
- InfluxDB
- Nginx


We utilize Docker Compose to run these services in development. For deployment, we push the code to GitLab, where a GitLab CI action is triggered. This action builds all the images based on a build-docker-compose file and pushes them to Docker Hub. Finally, we connect to a remote VM where we:
- Copy a run.sh file to the VM using SCP
- Copy a production-docker-compose file to the VM (which currently contains the environment variables) using SCP
- SSH into the VM and execute the run.sh file, which stops the services, pulls the latest images, and starts them up again

Currently, this process works well for us since we don't rely on external services or frameworks. This keeps the complexity low, which is beneficial for our developers, who don't have a software engineering background. We have discussed the possibility of migrating to Kubernetes, but at the moment I don't see the need for it. While Kubernetes offers advantages, its complexity would introduce additional costs for us without significant benefits.

Please feel free to provide feedback or comments on how we can enhance our DevOps processes.

Thank you!
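For reference, the flow described above maps to a `.gitlab-ci.yml` roughly like the sketch below (job names and variables are hypothetical). One cheap improvement: move the environment variables out of the SCP-ed compose file and into masked GitLab CI/CD variables, so secrets never land on disk in plain text.

```yaml
stages: [build, deploy]

build-images:
  stage: build
  script:
    - docker compose -f build-docker-compose.yml build
    - docker compose -f build-docker-compose.yml push

deploy-vm:
  stage: deploy
  script:
    - scp run.sh production-docker-compose.yml "$DEPLOY_USER@$DEPLOY_HOST:~/"
    - ssh "$DEPLOY_USER@$DEPLOY_HOST" './run.sh'
```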

https://redd.it/13mvttz
@r_devops
Does there exist a tool like docker compose that runs containers serially?

Looking for a platform-agnostic CI tool that uses containers for each step; similar to Argo Workflows but not as complex or fully featured. If Docker Compose could run containers serially, it would be perfect for what I'm looking for. The use case: a developer would be able to run the same workflows on their local machine as in the pipeline, regardless of which CI platform is officially used (we use multiple).

I started working on building a tool but wanted to check first in case it already exists.
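For what it's worth, a few lines of shell already get you "Docker Compose, but serial": run one container per step and stop on the first failure. A sketch with hypothetical images and commands; the docker invocations are echoed rather than executed:

```shell
# Each step is an image plus a command, run against the checked-out source;
# set -e aborts the chain as soon as any step fails.
set -e
n=0
for step in "node:20 npm test" "golang:1.22 go build ./..."; do
  echo docker run --rm -v "$PWD:/src" -w /src $step
  # docker run --rm -v "$PWD:/src" -w /src $step
  n=$((n+1))
done
echo "ran $n steps"
```

You might also look at Dagger or Earthly before building your own; both target exactly this "same container-based pipeline locally and in CI" use case.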

https://redd.it/13n1nbd
@r_devops