Reddit DevOps
DevOps Conferences - Europe 2023

Hi, guys,

I hope you are all doing well.

I was hoping you could give me some ideas of interesting conferences for DevOps people in Europe happening this year still.

Thank you :)

https://redd.it/13qi5lz
@r_devops
Can Terraform replace PowerShell scripts?

Hello, and sorry for asking this, as I'm not really experienced enough to know the answer.

Context: my company has a default setup for the Azure tenants of our clients and adjusts them afterwards for special "needs". I've created around 6-8 PowerShell scripts that handle user and group management, import the basic policies (endpoint compliance etc., thanks GitHub for that one), and so on.


Now my question is: could the same be achieved with a Terraform file?
Would you recommend doing it that way, or sticking with the PS scripts?


Thanks :)
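For reference, the user and group management described above maps onto Terraform's azuread provider; a minimal, hypothetical sketch (all names and values are placeholders, not the poster's actual setup) might look like:

```hcl
# Hypothetical sketch using the azuread provider; names, UPNs, and the
# initial_password variable are placeholders.
terraform {
  required_providers {
    azuread = {
      source = "hashicorp/azuread"
    }
  }
}

resource "azuread_user" "example" {
  user_principal_name = "jdoe@example.onmicrosoft.com"
  display_name        = "Jane Doe"
  password            = var.initial_password
}

resource "azuread_group" "example" {
  display_name     = "baseline-users"
  security_enabled = true
  members          = [azuread_user.example.object_id]
}
```

Endpoint-compliance and other Intune-style policies are a different story; provider coverage there varies, so a hybrid of Terraform for the declarative parts plus the existing PowerShell scripts for the rest is also a plausible outcome.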

https://redd.it/13qk8ou
@r_devops
Recommended approach for setting up performance testing with Locust to test an EKS cluster?

I initially attempted to install and test Locust on Minikube and then deploy it to our EKS cluster.

However, all the documentation I encountered was limited and outdated, which caused things not to work. The official documentation offers a partially functional Terraform snippet for an unspecified service.

Considering our usage of GitHub actions, would it be advisable to create a job that can be manually or automatically triggered to run Locust via CLI? Should I still try to have Locust run on EKS? Is there a recommended approach?

Thanks ahead
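The GitHub Actions option floated above can be sketched roughly as follows; this is a hypothetical workflow, and the locustfile path, target URL secret, and load parameters are all placeholders:

```yaml
# Hypothetical workflow: run Locust headless from CI against the cluster's
# public endpoint. TARGET_URL is an assumed repository secret.
name: load-test
on:
  workflow_dispatch:    # manual trigger; add a `schedule:` block for automatic runs

jobs:
  locust:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - run: pip install locust
      # --headless skips the web UI; tune users (-u), spawn rate (-r), duration
      - run: |
          locust --headless -f locustfile.py \
            --host "${{ secrets.TARGET_URL }}" \
            -u 50 -r 5 --run-time 5m
```

This avoids operating Locust inside EKS at all; running it in-cluster mainly pays off once a single runner can no longer generate enough load and you need distributed workers.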

https://redd.it/13ql7t6
@r_devops
Self heal timeout per app in Argo?

I'm trying to see if it's possible to set a delay on self-heal syncing in Argo by using an annotation or label on a resource directly. There seems to be a config option to do this across the whole Argo installation, but I can't find any way to use that option in a more fine-grained way. To be honest, I can't find an authoritative list of available configuration labels at all; most seem to be scattered across the docs.

Anyone know anything?
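For reference, the installation-wide option alluded to above is (if memory serves) the application controller's self-heal timeout, exposed through the argocd-cmd-params-cm ConfigMap; I'm not aware of a per-application equivalent:

```yaml
# Installation-wide self-heal backoff for the Argo CD application controller.
# Assumes the standard argocd-cmd-params-cm mechanism; value is illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  controller.self.heal.timeout.seconds: "30"   # default is 5
```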

https://redd.it/13qkp3r
@r_devops
Advice needed for CI/CD

My friend and I are working on a project that involves a web app, a browser-extension app, a native mobile app, and a desktop app. All these applications share a large amount of code, primarily based on React, and are structured as a monorepo.

We're looking to set up a comprehensive CI/CD pipeline. Being relatively new to the field, our objective is to have separate staging environments for each of the apps and a pipeline that can test, build, and deploy individual applications as and when necessary. User Acceptance Testing (UAT) is another important component that we wish to include in our workflow.

We've been considering using GitHub Actions for our testing and building phases, Fastlane for mobile app deployments, and potentially integrating Sentry for error tracking.

I would love to hear your thoughts on:

Which CI/CD tools would you recommend for such a setup?
Any best practices for managing CI/CD in a monorepo environment?
Strategies for managing deployments of multiple applications (web, extension, mobile, desktop) from a monorepo?
Recommendations for incorporating UAT testing into our CI/CD pipeline?
Insights on error tracking and monitoring within such a pipeline?
Any advice or insights you could share would be greatly appreciated. Thank you in advance!
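On the monorepo question specifically, the usual GitHub Actions pattern is one workflow per app gated on path filters, so only the app whose code changed gets tested and built. A hypothetical sketch (the apps/web and packages/shared layout and npm-workspaces commands are assumptions, not the poster's actual repo):

```yaml
# Hypothetical per-app workflow for a monorepo; paths and workspace names
# are placeholders to adapt to the real layout.
name: web-ci
on:
  push:
    paths:
      - "apps/web/**"
      - "packages/shared/**"   # changes to shared code also rebuild the web app

jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm test --workspace apps/web
      - run: npm run build --workspace apps/web
```

A sibling workflow per app (extension, mobile, desktop) with its own path filter gives independent pipelines; the mobile one would hand off to Fastlane in its deploy step.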

https://redd.it/13qj3h6
@r_devops
Can anybody help with a Gitlab / Docker-Compose issue?

Hi, I've created a GitLab pipeline for a Laravel project. I'm working on a boilerplate Docker/Laravel setup I can reuse between projects. I know Sail exists, but it's too magical; I want something where I understand how it works and can configure it easily from there.


That being said, I built my docker-compose file and all works swimmingly: I can run my tests locally with my database connection working through an artisan entrypoint. However, when I run this in my GitLab CI/CD I get:
SQLSTATE[HY000] [2002] Connection refused (Connection: mysql, SQL: SHOW FULL TABLES WHERE table_type = 'BASE TABLE')


I've got it to post the logs, which show it creating the test database, and got it to show which databases are accessible with the user credentials I've passed in my env, which also returns what I expect.


Pipeline looks like this:
image: docker/compose
services:
  - docker:dind
stages:
  - test
variables:
  DB_CONNECTION: mysql
  DB_HOST: database
  DB_PORT: 3306
  DB_DATABASE: laravel
  DB_USERNAME: matt
  DB_PASSWORD: password
before_script:
  - cp .env.example .env
  - docker-compose build --no-cache
  - apk add mysql-client
  - docker-compose up -d
  - sleep 40s
  - docker-compose logs database
  - docker-compose run --rm database mysql -h database -u matt -ppassword -e "SHOW DATABASES;"
test:
  stage: test
  script:
    - docker-compose run --rm composer install
    - docker-compose run --rm artisan key:generate
    - docker-compose run --rm artisan cache:clear
    - docker-compose run --rm artisan config:clear
    - docker-compose ps
    - docker-compose run --rm artisan test


Docker-compose:
version: '3.8'
services:
  app:
    container_name: app
    build:
      context: .
      target: php
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
    ports:
      - "8000:8000"
    depends_on:
      - database
    networks:
      - laravel
  database:
    container_name: database
    image: mysql:8.0
    ports:
      - "3306:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
    volumes:
      - ./docker/mysql:/docker-entrypoint-initdb.d
      - db-data:/var/lib/mysql
    networks:
      - laravel
  pma:
    container_name: pma
    image: phpmyadmin
    ports:
      - "9000:80"
    environment:
      - PMA_HOST=${DB_HOST}
      - PMA_ARBITRARY=1
    networks:
      - laravel
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins
    ports:
      - "8080:8080"
    networks:
      - laravel
  artisan:
    build:
      context: .
      target: php
    container_name: artisan
    volumes:
      - ./:/var/www/html:delegated
    depends_on:
      - database
    working_dir: /var/www/html
    entrypoint: ['php', '/var/www/html/artisan']
    networks:
      - laravel
  composer:
    image: composer:2.3.5
    container_name: composer
    volumes:
      - ./:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - app
    entrypoint: ['composer', '--ignore-platform-reqs']
    networks:
      - laravel
volumes:
  db-data:
networks:
  laravel:
    driver: bridge
Any help or a point in the right direction would be great. Many thanks!
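One likely culprit with this kind of "works locally, Connection refused in CI" symptom is MySQL not actually being ready yet, even after the fixed sleep 40s. A sketch of replacing the sleep with a healthcheck plus a healthiness-gated depends_on (support for the condition form depends on your Compose version, so treat this as an assumption to verify):

```yaml
# Sketch: gate dependent services on MySQL actually answering, instead of
# sleeping a fixed 40 seconds and hoping.
services:
  database:
    image: mysql:8.0
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppassword"]
      interval: 5s
      timeout: 3s
      retries: 10
  artisan:
    depends_on:
      database:
        condition: service_healthy
```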

https://redd.it/13qs3fg
@r_devops
On AWS: Why use EKS instead of ECS?

I'm in a position where I've got to stand up some dockerized services (Airbyte, Kowl, etc.) which need to stay up (so no Lambda).

As I see it, my choices are ECS, EKS, or good old-fashioned self-managed Kubernetes. When would you lean towards EKS or self-managed Kubernetes instead of ECS? What do those services provide that makes up for the added complexity?

https://redd.it/13qtujx
@r_devops
How do you onboard new engineers?

My team is going through a growth phase in the coming months, and I want to prepare some training material for new engineers. I already have a bunch of architecture diagrams, descriptions of each repo, how we build and host, and our cloud environments and accounts. What else would you include?

https://redd.it/13r28vf
@r_devops
High Availability and Shared Storage for Docker Containers

I have a problem where I've come to believe the magical solution I want just doesn't exist.

I currently run standalone Docker hosts on Ubuntu virtual machines running on a 3-node Hyper-V S2D failover-cluster. These hosts run containers that handle mostly non-critical workloads, including small web applications and long-running tasks. However, they lack high availability and easy container migration capabilities. The main challenges stem from the use of volume bind mounts and the requirement for docker-compose.yml and various other files to be present on each host.

We're not a large operation, so I'm really looking for a solution that gets me 90% of the way there while prioritising simplicity. I.e. the ideal solution in my mind is shared storage on 4 Swarm nodes. Volumes and compose files all live in this one location, and each node has its default volume storage in this shared storage. One node goes offline, I jump onto any other and `docker-compose up -d` and am on my way, or I let Swarm take care of it. Since everything lives in the same location on the shared volume, no problems.

Ideally, I would like to expose our S2D (Storage Spaces Direct) filesystem to each host through say, a hyper-v shared disk. I tested this and it failed but I've misplaced my notes on why this didn't work. If it worked for you please let me know...

Using the sshfs or nfs storage drivers with Swarm requires additional parameters added to each compose file, while not a deal breaker I would prefer something that doesn't require my team to remember to add those parameters for their container deployments to be HA.

**Summarising my scattered thoughts:**

* Find a straightforward solution that provides a majority of the desired outcome while prioritising simplicity.
* Implement shared storage on 4 Swarm nodes, housing volumes and compose files in a centralised location accessible by each node for easy failover.
* Explore options to expose the S2D filesystem to hosts, such as using a Hyper-V shared disk, if proven to be successful.
* Avoid the need for additional parameters in compose files by seeking alternatives to SSHFS or NFS storage drivers for Swarm deployments.
* Evaluate whether investing in Kubernetes is necessary or if there are other viable solutions to achieve high availability without accruing significant technical debt.

If anybody has a magic bullet, I would love to hear it!
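For anyone weighing the NFS option mentioned in the bullets above, this is roughly what the per-compose-file parameters look like; the server address and export path are placeholders:

```yaml
# Sketch of an NFS-backed named volume in a compose/stack file.
# 10.0.0.10 and /exports/app-data are placeholder values.
volumes:
  app-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"
      device: ":/exports/app-data"
```

One way to reduce the "team must remember this" burden is to keep the volume definitions in a shared base compose file that every stack extends, so the NFS options live in exactly one place.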

https://redd.it/13r2n6w
@r_devops
Picking an architecture


I have been working on a solo project for about a year now in my spare time and probably have another year or two to go before completion.

As I've gotten more and more done, I have found it's getting difficult to manage all my code in my monorepo. I know using microservices in a one-man operation feels like overkill, but I'm looking for a way to split up and modularize my components.

On top of trying to make things more manageable, I have other needs such as abstracting away long running processes, taking in requests from third party webhooks, running code that’s triggered by database changes, etc… that would benefit from a more micro service type architecture.

My current plan is to keep things monolithic where possible, create a database service layer that will house all interactions with my database, and then separate services where needed. Everything would call the database service layer.

I'm interested in people's thoughts on this, especially if anyone has faced a similar problem.

My stack consists of:
- nextjs
- postgres/prisma
- (almost) everything runs on AWS

https://redd.it/13r4n8m
@r_devops
Beginner dev ops project with nginx and docker - facing 502 error

Hey everyone, hoping someone could help me debug this issue for a project I'm working on - this my first time trying nginx and docker. I have been stumped for days.

I'm running two containers locally. Container 1: port 80, Nginx + React. Container 2: port 5000, Node.js.

I'm using nginx to reverse proxy api calls in the react app to the nodejs server. I'm getting this error in the docker logs when I try to login (which makes an api call to the nodejs server):

1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: localhost, request: "POST /webapp/login HTTP/1.1", upstream: "https://127.0.0.1:5000/login", host: "localhost", referrer: "https://localhost/login"

My Nginx.conf file looks like this:

server {
    listen 80;
    server_name localhost;

    location /webapp/ {
        proxy_pass https://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    include /etc/nginx/extra-conf.d/*.conf;
}


I want to eventually deploy these containers to the Lightsail container service, and I need a way to communicate between the containers. The Lightsail docs mention using localhost, but I thought localhost referred only to the local device, which is confusing.
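The "Connection refused" here is consistent with the proxy target: inside the nginx container, localhost is the nginx container itself, not the Node.js container. Locally, the usual fix is to put both containers on a user-defined Docker network and proxy by container name. A sketch, assuming the backend container is named api (a placeholder name):

```nginx
# Sketch: with both containers on one user-defined network
# (docker network create appnet; docker run --network appnet --name api ...),
# nginx can reach the backend by container name instead of localhost.
location /webapp/ {
    proxy_pass http://api:5000/;   # "api" is the assumed backend container name
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}
```

Lightsail container services are the exception: there, containers in the same deployment share a network namespace, which is why their docs legitimately say to use localhost once deployed.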

https://redd.it/13r7s9u
@r_devops
Help in choosing a course on udemy or any other platform

Hi all, I'm new to DevOps and want to learn everything from scratch. Could you please suggest a course, on Udemy or any other platform, that covers beginner to intermediate or advanced level?

Thanks in advance

https://redd.it/13rbgas
@r_devops
Devcontainers in k8s

Hey there,

I am a developer who sees great potential in devcontainers. I am sort of reorienting myself towards DevOps because I hate all the obstacles present in regular development, and I would like to make my coworkers' lives easier.

I already did that by Dockerizing one of our projects, but devcontainers are the next step.

My plan is to start experimenting with devcontainers in our k8s.

Is there any open-source solution available that provides functionality similar to GitHub Codespaces?

I need to learn more. Give me some resources to look through. Thanks for the headache in advance!

https://redd.it/13rczr7
@r_devops
Issue in Packer

I am using Packer to install OpenJDK in an image.

The image builds successfully, but when I create a VM from that image it says the java command is not found.

In the image I ran `which java` and it showed /bin/java, but in the VM it's not there.

I also ran `find .`; still no trace of Java.
Where am I going wrong?
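Without the template it's hard to say, but a common cause of this symptom is installing into a location (tmpfs, an unmounted volume, a different build stage) that isn't captured in the final image. A hypothetical provisioner sketch that installs via the package manager and fails the build if java is missing at the end, so at least the build catches it (the source name and JDK version are placeholders):

```hcl
# Hypothetical Packer build block; "source.azure-arm.example" is a placeholder.
build {
  sources = ["source.azure-arm.example"]

  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y openjdk-17-jdk",
      # fail the image build immediately if java didn't land on the PATH
      "java -version || exit 1",
    ]
  }
}
```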

https://redd.it/13rb3jl
@r_devops
Enterprise developers focus on prioritizing security from the early stages of development

Cisco’s most recent report, based on the findings from two SlashData global surveys that targeted enterprise developers, uncovers developers’ exposure to API security exploits, their outlook on security, and how they use automation tools to detect and remediate threats.


There is a significant rise in security threats; in fact, 58% of enterprise developers have had to tackle at least one API exploit in the past year alone. And to make matters worse, nearly half of them have experienced multiple API exploits during that time.


You can help us too and make an impact on the developer ecosystem.


Start here - https://survey.developernation.net/name/cbts3/branch/main?utm_medium=some&utm_source=reddit&utm_campaign=r/devops

https://redd.it/13rf4iq
@r_devops
Looking for advice - need to pick a course that's heavy on networking vs programming. Longer term goal - devops.

Hi everyone, my longer-term goal is to get into DevOps, and I figured a solid, supporting course of study will go a long way towards helping me achieve that.

I need to decide between 2 courses that have very different emphases.

Option 1: big on networking and operating systems (90 credits out of 180) but weak and scattered on programming (a bit of C, PHP, JS, SQL, Python; about 30 credits in total). Student reviews express high levels of satisfaction.

Option 2: the other course is 50 credits on networking and 50 credits on programming (mostly Python, some Java, and a bit of JS and SQL) out of 180, and that's if I select the infrastructure track rather than the dev track, which would skew it very heavily in favour of dev. Student reviews have been critical because it's a new curriculum and some classes were poorly conceived and delivered.

I get the impression that DevOps leans more towards network/systems knowledge, although some programming is also required, so option 1 is the obvious choice. But it's a more specialised education and boxes me into a network/systems role (what if I can't find work in that area and need to branch out as a junior?).

I'm based in Europe, in case it matters.

Any advice/insights from industry professionals are welcome! Better yet, if any of you are willing to take a look at the 2 courses, I'd be happy to send you their links in a private message.

https://redd.it/13rgo2i
@r_devops
CICD with Bitbucket Pipeline and AWS CodeDeploy on EC2

How you can set up Bitbucket Pipelines with EC2 on AWS using CodeDeploy. This removes the work of manually deploying code to the server and reduces your application's time to market. This practice is known as CI/CD: continuous integration and continuous deployment (or delivery).


https://medium.com/codelogicx/cicd-deploy-to-aws-codedeploy-with-bitbucket-pipeline-b5da79b55477

Time to Market is critical because it gives you a competitive advantage, enhances customer satisfaction, facilitates a feedback loop, enables iterative development, mitigates risks, saves costs, and supports scalability. CI/CD pipelines help achieve faster delivery, improve product quality, and enable continuous improvement, making them essential for reducing time to market. I hope this will help you to successfully setup CI/CD Pipeline on AWS Servers using Bitbucket.
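The shape of the setup the article describes can be sketched with Atlassian's aws-code-deploy pipe; the version number, application/group/bucket names, and region below are illustrative placeholders, and credentials are assumed to live in repository variables:

```yaml
# Hypothetical bitbucket-pipelines.yml sketch; all names are placeholders.
pipelines:
  branches:
    main:
      - step:
          name: Deploy to EC2 via CodeDeploy
          script:
            - pipe: atlassian/aws-code-deploy:1.2.0
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "us-east-1"
                APPLICATION_NAME: "my-app"
                DEPLOYMENT_GROUP: "my-app-group"
                S3_BUCKET: "my-deploy-bucket"
                COMMAND: "deploy"
```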

Any kind of feedback is appreciated!!
Thank you!!

https://redd.it/13rj04d
@r_devops
Understanding Crossplane. Steep learning curve

Hey guys,

Coming from the TF world, we are attempting to create a POC using Crossplane, and I am having a hard time wrapping my mind around it. It seems like the documentation and the community are not mature enough, and I am a little bit stuck if I cannot find good references.

So we are trying to create some XRs, for example an MSSQL database (deployed as XR + Composition). The idea is that we only take the database name, collation, and SKU name, to be provided by developers using a claim.

Now the caveat here is that the resource group, the vnet and the server are already existing resources created previously by terraform.

Crossplane release 1.12.1 finally gave us the observe-only feature, but after reading the release notes it is still difficult to understand how I include those resources as part of my XR. Apparently the annotation needs to go on the Managed Resource.

So here are my doubts. How do I go about this?

1. For any existing resources should I create first a managed resource with the annotation to observe only or should it be part of the Composition?
2. If I have multiple compositions will there be a conflict?
3. What If I want these resources (resource group, server) to be their own Composite Resources pointing to these MR with the observe only annotation, can I create child XR (example mssql database) and consume these upstream XR?

As you can see, I am completely lost and having a hard time understanding this whole XRD + XR + Composition + MR + Claim picture.

Any help would be much appreciated!

TLDR: Need to create an XR that includes pre-existing infra created through Terraform.
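For reference, an observe-only managed resource at around the 1.12 release looks roughly like the sketch below: the management policy goes in spec, and the crossplane.io/external-name annotation points at the existing resource. The field has been alpha and has changed across releases (later versions use a managementPolicies list), so verify against your exact version; the kind, apiVersion, and names here assume the Upbound Azure provider and are placeholders:

```yaml
# Sketch of an observe-only MR (Crossplane ~1.12, alpha management policies
# must be enabled); names and location are placeholders.
apiVersion: azure.upbound.io/v1beta1
kind: ResourceGroup
metadata:
  name: existing-rg
  annotations:
    crossplane.io/external-name: existing-rg   # name of the TF-created resource
spec:
  managementPolicy: ObserveOnly
  forProvider:
    location: "westeurope"
```

A Composition can then reference the observed resource's status fields (e.g. via patches) when composing the child database XR, which is one answer to question 3 above.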

https://redd.it/13rjztj
@r_devops
Anyone here use AWS Cloud9? What's it good for?

Discovered Cloud9 recently.

Is it worth using? Why not just use VS Code with some AWS extensions instead?

Thoughts?

https://redd.it/13rjm6v
@r_devops