Instana synthetic monitoring
I'm using the new Instana synthetic monitoring feature to monitor my website. I have a smart alert that is created when the test fails three times consecutively. My website shuts down overnight, so the tests fail then. Is there a way for the tests to run only at a certain time, i.e. during the day?
https://redd.it/13pu3og
@r_devops
Posted by u/Due-Body-850 - No votes and no comments
SONARQUBE LTS 9.9
Hi team, I have one query. We have SonarQube 8.9.8 LTS and our developers have written code in Java 8. SonarQube LTS 9.9 supports Java 17, and SonarScanner requires Java 11 or 17. Will this affect us, since code written in Java 8 will reach a scanner where Java 8 is not supported? How can we handle this?
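For context, and offered with the caveat that the SonarQube release notes are the authority here: the Java 11/17 requirement applies to the runtime that executes the scanner, not to the language level of the code being analyzed, so Java 8 sources can generally still be scanned. A hedged sketch of the usual split (paths illustrative):

```properties
# sonar-project.properties — the analyzed code stays Java 8
sonar.projectKey=my-project
sonar.sources=src/main/java
sonar.java.source=8
sonar.java.binaries=target/classes
```

The scanner process itself just needs to be launched with a Java 11/17 JRE, while the project continues to compile with JDK 8.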
https://redd.it/13qc6h6
@r_devops
Posted by u/Maleficent-Pain2765 - No votes and no comments
Unable To Publish Port On Host Machine With Docker
I was trying to run an image built from a Dockerfile, but it didn't work when I used:
docker run -p 8081:8080 yt-test
But it works when I use the host network interface, on host port 8080 (though I want 8081).
docker run --network host yt-test
This is my Dockerfile
FROM python:3.9
RUN apt-get update && apt-get install -y git
WORKDIR /app
RUN git clone https://github.com/user234683/youtube-local
WORKDIR /app/youtube-local
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8080
CMD ["python", "server.py"]
Any Idea Why This is Happening?
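Two guesses worth checking, based only on the snippet above. First, the CMD needs exec-form brackets; as posted it is not valid Dockerfile syntax. Second, `-p 8081:8080` only reaches processes that listen on 0.0.0.0 inside the container; an app that binds to 127.0.0.1 works under `--network host` but refuses published-port traffic, which matches these symptoms. A hedged sketch of the corrected Dockerfile (whether server.py offers a bind-address setting is an assumption to verify in youtube-local's docs):

```dockerfile
FROM python:3.9
RUN apt-get update && apt-get install -y git
WORKDIR /app
RUN git clone https://github.com/user234683/youtube-local
WORKDIR /app/youtube-local
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8080
# Exec form (a JSON array), not bare comma-separated strings:
CMD ["python", "server.py"]
# If published ports are still refused, check the project's settings for an
# option to listen on 0.0.0.0 instead of 127.0.0.1.
```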
https://redd.it/13qdeie
@r_devops
GitHub - user234683/youtube-local: browser-based client for watching Youtube anonymously and with greater page performance
Running Post-Mortems
Ever wanted to introduce post-mortems to your team or department? Here is the detailed process of how to run them!
https://certomodo.substack.com/p/running-post-mortems
(cross-posted from r/SRE)
https://redd.it/13pvkwt
@r_devops
This article continues the discussion on how your team can learn from failure after a production incident. While write-ups are very important in capturing and documenting what took place, the real value is created from an open and deliberate conversation…
If an 18yo person applied for a job and had a load of Cloud Provider certs + CKA - what would be your gut reaction?
My 17yo daughter (non-binary) is F-ing up at school because of the usual stuff of bullying, depression and teenage angst, and is unlikely to finish her A levels. She didn't get a high enough maths GCSE grade, one point off being able to take A-level Computer Science, and selected Criminology instead, along with Accounting and Law.
So I've bought the GCP Architect, Azure Architect, AWS Architect and Mumshads CKA courses and I'm going to sit with her while she goes through them, does the practice and then buy her the exams.
She'll be 18 by the time she sits them and/or has to do retakes.
So if a CV for an 18yo with no work experience and a load of certs got past the recruiter screening and landed in your inbox, what would be your thoughts/reactions?
Edit
What I should have pointed out is that yes, she does have an interest in doing it, and that this is a step to getting a bottom-of-the-ladder entry-level job. Plus, the architecture of the providers, their services and how they integrate gives the high-level knowledge of what they are about.
https://redd.it/13ooqr6
@r_devops
Posted by u/PowerfulExchange6220 - No votes and 67 comments
What would be the optimal working environment for junior cloud/devops engineers?
Where I come from good devops engineers are rare as (natural) diamonds - and every company is searching for them. I don't really have that much competition but I think my rate of growth could be much better at a larger company.
I'm just a few weeks short of finally reaching the intermediate level where I work.
But their lack of an automation/security mindset is probably hurting my future chances, so:
If you had some experience and want to grow or if you were younger again and had a 2nd chance, what would you actually look for at companies?
What structure, benefits, responsibilities... how large?
I want to have impact and for that I need the right environment.
https://redd.it/13qg6m4
@r_devops
Posted by u/AemonQE - No votes and no comments
DevOps Conferences - Europe 2023
Hi, guys,
I hope you are all doing well.
I was hoping you could give me some ideas of interesting conferences for DevOps people in Europe happening this year still.
Thank you :)
https://redd.it/13qi5lz
@r_devops
Posted by u/WorriedJaguar206 - No votes and 1 comment
Can Terraform Replace Powershell scripts ?
Hello, and sorry for asking this, as I'm not really experienced enough to know the answer.
Context: my company has a default setup for our clients' Azure tenants and adjusts them afterwards for special "needs". I've created around 6-8 PowerShell scripts that create the user and group management, import the basic policies (endpoint compliance etc., thanks GitHub for that one), and so on.
Now my question is: could the same be achieved with a Terraform file?
Would you recommend doing it that way, or sticking to the PS scripts?
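For reference: most Azure AD user/group setup of this kind can be declared with Terraform's azuread provider. A minimal hedged sketch (names and values are illustrative, not your tenant's):

```hcl
terraform {
  required_providers {
    azuread = {
      source = "hashicorp/azuread"
    }
  }
}

# Illustrative group and user; a real setup would loop over a per-client map.
resource "azuread_group" "admins" {
  display_name     = "client-admins"
  security_enabled = true
}

resource "azuread_user" "example" {
  user_principal_name = "jane@example.onmicrosoft.com"
  display_name        = "Jane Example"
  password            = "ChangeMe123!" # in practice, inject via a variable/secret
}
```

The trade-off versus PowerShell is state: Terraform tracks what it created and can diff and destroy it, which suits a repeatable per-client baseline, while one-off adjustments may stay easier in scripts.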
Thanks :)
https://redd.it/13qk8ou
@r_devops
Posted by u/ThePathOfKami - No votes and no comments
Recommended approach for setting up performance testing with Locust to test an EKS cluster?
I initially attempted to install and test Locust on Minikube and then deploy it to our EKS cluster.
However, all the documentation I encountered was limited and outdated, so things didn't work. The official documentation offers a partially functional Terraform snippet for an unspecified service.
Considering our usage of GitHub Actions, would it be advisable to create a job that can be manually or automatically triggered to run Locust via the CLI? Should I still try to have Locust run on EKS? Is there a recommended approach?
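For the GitHub Actions route, Locust's headless CLI mode makes this straightforward; a hedged workflow sketch with a manual trigger (file paths, user counts, and the target host are placeholders):

```yaml
name: load-test
on:
  workflow_dispatch:   # run manually from the Actions tab
jobs:
  locust:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install locust
      - run: |
          locust -f locustfile.py --headless \
            --users 50 --spawn-rate 5 --run-time 5m \
            --host https://staging.example.com
```

One caveat: a single Actions runner caps how much load you can generate; distributed Locust on EKS only becomes worth the complexity past what one runner can push.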
Thanks ahead
https://redd.it/13ql7t6
@r_devops
Posted by u/HeadTea - No votes and no comments
Support Auth0 in Azure Static Web Apps
Learn how to support Auth0 in Azure SWA for your Blazor WebAssembly application.
Read more…
https://redd.it/13qn1rq
@r_devops
Self heal timeout per app in Argo?
I'm trying to see if it's possible to set a delay on self-heal syncing in Argo CD using an annotation or label (or something similar) on a resource directly. There seems to be a config option to do this across the whole Argo CD installation, but I can't find any way to use that option in a more fine-grained way. To be honest, I can't find an authoritative list of available configuration labels at all; most seem to be scattered across the docs.
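As far as I can tell there are two knobs, and neither is a per-resource annotation: `selfHeal` itself is toggled per Application in its sync policy, while the self-heal timeout is an application-controller flag (`--self-heal-timeout-seconds`), so it is installation-wide. A hedged sketch of the per-app side (app name illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app            # illustrative
  namespace: argocd
spec:
  syncPolicy:
    automated:
      prune: true
      selfHeal: true      # per-app on/off; the heal delay itself is controller-wide
```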
Anyone know anything?
https://redd.it/13qkp3r
@r_devops
Advice needed for CI/CD
My friend and I are working on a project that involves a web app, a browser-extension app, a native mobile app, and a desktop app. All these applications share a large amount of code, primarily based on React, and are structured as a monorepo.
We're looking to set up a comprehensive CI/CD pipeline. Being relatively new to the field, our objective is to have separate staging environments for each of the apps and a pipeline that can test, build, and deploy individual applications as and when necessary. User Acceptance Testing (UAT) is another important component that we wish to include in our workflow.
We've been considering using GitHub Actions for our testing and building phases, Fastlane for mobile app deployments, and potentially integrating Sentry for error tracking.
I would love to hear your thoughts on:
Which CI/CD tools would you recommend for such a setup?
Any best practices for managing CI/CD in a monorepo environment?
Strategies for managing deployments of multiple applications (web, extension, mobile, desktop) from a monorepo?
Recommendations for incorporating UAT testing into our CI/CD pipeline?
Insights on error tracking and monitoring within such a pipeline?
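On the monorepo question, one common pattern (sketched here with a hypothetical repo layout) is one workflow per app, gated by path filters so only the changed app is tested and built:

```yaml
name: web-app-ci
on:
  push:
    paths:
      - "apps/web/**"         # hypothetical monorepo layout
      - "packages/shared/**"  # rebuild when shared code changes too
jobs:
  test-and-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test --workspace=apps/web
      - run: npm run build --workspace=apps/web
```

The same gating works per platform: the mobile workflow can hand off to Fastlane, and UAT/staging deploys stay scoped to whichever app actually changed.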
Any advice or insights you could share would be greatly appreciated. Thank you in advance!
https://redd.it/13qj3h6
@r_devops
Posted by u/No-Psychology3901 - 1 vote and 1 comment
Can anybody help with a Gitlab / Docker-Compose issue?
Hi, I've created a GitLab pipeline for a Laravel project. I'm trying to create a boilerplate Docker/Laravel setup that I can reuse between projects. I know Sail exists, but it's too magical, and I want something where I understand how it works and can configure it easily from there.
That said, I've built my docker-compose file and all works swimmingly locally: I can run my tests with the database connection working through an artisan entrypoint. However, when I run this in my GitLab CI/CD I get:
SQLSTATE[HY000] [2002] Connection refused (Connection: mysql, SQL: SHOW FULL TABLES WHERE table_type = 'BASE TABLE')
I've got it to post the logs, which show it creating the test database, and got it to show what databases are accessible with the user credentials I've passed in my env, which also returns what I expect.
Pipeline looks like this:
image: docker/compose
services:
  - docker:dind
stages:
  - test
variables:
  DB_CONNECTION: mysql
  DB_HOST: database
  DB_PORT: 3306
  DB_DATABASE: laravel
  DB_USERNAME: matt
  DB_PASSWORD: password
before_script:
  - cp .env.example .env
  - docker-compose build --no-cache
  - apk add mysql-client
  - docker-compose up -d
  - sleep 40s
  - docker-compose logs database
  - docker-compose run --rm database mysql -h database -u matt -ppassword -e "SHOW DATABASES;"
test:
  stage: test
  script:
    - docker-compose run --rm composer install
    - docker-compose run --rm artisan key:generate
    - docker-compose run --rm artisan cache:clear
    - docker-compose run --rm artisan config:clear
    - docker-compose ps
    - docker-compose run --rm artisan test
Docker-compose:
version: '3.8'
services:
  app:
    container_name: app
    build:
      context: .
      target: php
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
    ports:
      - "8000:8000"
    depends_on:
      - database
    networks:
      - laravel
  database:
    container_name: database
    image: mysql:8.0
    ports:
      - "3306:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
    volumes:
      - ./docker/mysql:/docker-entrypoint-initdb.d
      - db-data:/var/lib/mysql
    networks:
      - laravel
  pma:
    container_name: pma
    image: phpmyadmin
    ports:
      - 9000:80
    environment:
      - PMA_HOST=${DB_HOST}
      - PMA_ARBITRARY=1
    networks:
      - laravel
  jenkins:
    image: jenkins/jenkins
    container_name: jenkins
    ports:
      - "8080:8080"
    networks:
      - laravel
  artisan:
    build:
      context: .
      target: php
    container_name: artisan
    volumes:
      - ./:/var/www/html:delegated
    depends_on:
      - database
    working_dir: /var/www/html
    entrypoint: [ 'php', '/var/www/html/artisan' ]
    networks:
      - laravel
  composer:
    image: composer:2.3.5
    container_name: composer
    volumes:
      - ./:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - app
    entrypoint: [ 'composer', '--ignore-platform-reqs' ]
    networks:
      - laravel
volumes:
  db-data:
networks:
  laravel:
    driver: bridge
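Separate from the refused connection itself, the fixed `sleep 40s` is fragile: MySQL 8 can take longer than that to initialize on a cold CI runner, and then artisan's first query is refused exactly as shown. A generic wait-for-port helper (a sketch, not specific to this pipeline) polls until the database actually accepts TCP connections:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 120.0) -> bool:
    """Poll until host:port accepts a TCP connection, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Success means the listener is up; close immediately and report it.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            # Refused / unreachable / not yet resolvable: wait and retry.
            time.sleep(1)
    return False
```

The more idiomatic compose-native fix is a MySQL healthcheck plus a `depends_on` condition of `service_healthy`, but a polling loop like this works from any before_script.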
Any help or point in the right direction would be great. Many thanks!
https://redd.it/13qs3fg
@r_devops
Posted by u/NowThenMates - No votes and 1 comment
On AWS: Why use EKS instead of ECS?
I'm in a position where I've got to stand up some dockerized services (Airbyte, Kowl, etc.) which need to stay up (so no Lambda).
As I see it, my choices are to use ECS, EKS or good old fashioned Kubernetes. When would you lean towards EKS or Kubernetes instead of ECS? What do those services provide that make up for the added complexity?
https://redd.it/13qtujx
@r_devops
Posted by u/RandomWalk55 - No votes and 5 comments
How do you onboard new engineers?
My team is going through a growth phase in the coming months and I want to prepare some training material for new engineers. I already have a bunch of architecture diagrams, some descriptions of each repo, how we build and host, and our cloud environments and accounts. What else would you have?
https://redd.it/13r28vf
@r_devops
Posted by u/openwidecomeinside - No votes and 1 comment
High Availability and Shared Storage for Docker Containers
I have a problem where I've come to believe the magical solution I want just doesn't exist.
I currently run standalone Docker hosts on Ubuntu virtual machines running on a 3-node Hyper-V S2D failover-cluster. These hosts run containers that handle mostly non-critical workloads, including small web applications and long-running tasks. However, they lack high availability and easy container migration capabilities. The main challenges stem from the use of volume bind mounts and the requirement for docker-compose.yml and various other files to be present on each host.
We're not a large operation, so I'm really looking for a solution that gets me 90% of the way there while prioritising simplicity. I.e. the ideal solution in my mind is shared storage on 4 Swarm nodes: volumes and compose files all live in this one location, and each node has its default volume storage in this shared storage. One node goes offline, I jump onto any other, `docker-compose up -d`, and be on my way, or let Swarm take care of it. Since everything lives in the same location on the shared volume, no problems.
Ideally, I would like to expose our S2D (Storage Spaces Direct) filesystem to each host through, say, a Hyper-V shared disk. I tested this and it failed, but I've misplaced my notes on why. If it worked for you, please let me know...
Using the sshfs or nfs storage drivers with Swarm requires additional parameters added to each compose file; while not a deal-breaker, I would prefer something that doesn't require my team to remember to add those parameters for their container deployments to be HA.
**Summarising my scattered thoughts:**
* Find a straightforward solution that provides a majority of the desired outcome while prioritising simplicity.
* Implement shared storage on 4 Swarm nodes, housing volumes and compose files in a centralised location accessible by each node for easy failover.
* Explore options to expose the S2D filesystem to hosts, such as using a Hyper-V shared disk, if proven to be successful.
* Avoid the need for additional parameters in compose files by seeking alternatives to SSHFS or NFS storage drivers for Swarm deployments.
* Evaluate whether investing in Kubernetes is necessary or if there are other viable solutions to achieve high availability without accruing significant technical debt.
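For reference, the "additional parameters" in question look roughly like this with the built-in local driver's NFS support (server address and export path are hypothetical); every stack file wanting HA volumes has to carry a block like it, which is exactly the per-file overhead described above:

```yaml
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"  # hypothetical NFS server
      device: ":/export/appdata"        # hypothetical export path
```

Extracting the block into a shared base file merged with multiple `-f` flags reduces, but doesn't remove, the duplication.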
If anybody has a magic bullet, I would love to hear it!
https://redd.it/13r2n6w
@r_devops
Posted by u/lolSaam - No votes and 6 comments
Picking an architecture
I have been working on a solo project for about a year now in my spare time and probably have another year or two to go before completion.
As I’ve gotten more and more done I have found that it’s getting difficult to manage all my code in my mono repo. I know using micro services in a one man operation feels overkill but I’m looking for a way to space out and modularize my components.
On top of trying to make things more manageable, I have other needs such as abstracting away long running processes, taking in requests from third party webhooks, running code that’s triggered by database changes, etc… that would benefit from a more micro service type architecture.
My current plan is to keep things monolithic where possible, create a database service layer that will house all interactions with my database, and then separate services where needed. Everything would call the database service layer.
I'm interested in people's thoughts on this, especially if anyone has faced a similar problem.
My stack consists of:
- nextjs
- postgres/prisma
- (almost) everything runs aws
https://redd.it/13r4n8m
@r_devops
Posted by u/thisismyusername0909 - No votes and 1 comment
Beginner dev ops project with nginx and docker - facing 502 error
Hey everyone, hoping someone could help me debug this issue for a project I'm working on - this my first time trying nginx and docker. I have been stumped for days.
I'm running two containers locally. Container 1: port 80, Nginx + React. Container 2: port 5000, NodeJS.
I'm using Nginx to reverse proxy API calls from the React app to the NodeJS server. I'm getting this error in the Docker logs when I try to log in (which makes an API call to the NodeJS server):
1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: localhost, request: "POST /webapp/login HTTP/1.1", upstream: "https://127.0.0.1:5000/login", host: "localhost", referrer: "https://localhost/login"
My Nginx.conf file looks like this:
server {
    listen 80;
    server_name localhost;

    location /webapp/ {
        proxy_pass https://localhost:5000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }

    include /etc/nginx/extra-conf.d/*.conf;
}
I eventually want to deploy these containers to the Lightsail container service, and I need a way for the containers to communicate. The Lightsail docs mention using localhost, but I thought localhost only referred to the local device, which is confusing.
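One likely culprit, for what it's worth: inside the nginx container, localhost (and 127.0.0.1) is the nginx container itself, so `proxy_pass https://localhost:5000/` finds nothing listening on port 5000 there and nginx returns a 502. A common fix is to put both containers on a shared network and proxy by service name. A sketch, where the service and image names are assumptions:

```yaml
# Hypothetical docker-compose sketch; image names are placeholders.
services:
  web:
    image: my-nginx-react    # container 1: nginx + react build
    ports:
      - "80:80"
  api:
    image: my-node-api       # container 2: nodejs server
    expose:
      - "5000"
# nginx.conf would then point at the service name instead of localhost:
#   proxy_pass http://api:5000/;
```

Compose puts both services on a shared default network where each is reachable by its service name. (The `https://` in the original proxy_pass is also suspect if the Node server speaks plain HTTP.) This also hints at the Lightsail confusion: containers within a single Lightsail deployment run together and can reach each other via localhost, unlike two separate local `docker run` containers on the default bridge.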
https://redd.it/13r7s9u
@r_devops
Posted by u/Illustrious_You_5159 - No votes and no comments
What is Jenkins, why do we use it and what are its disadvantages?
Check out this beginner-friendly guide on Jenkins and why we use it: https://medium.com/cloud-native-daily/jenkins-tutorial-basics-to-advanced-for-devops-engineer-27265e5ae67d
https://redd.it/13r7q7z
@r_devops
Help in choosing a course on udemy or any other platform
Hi all, I'm new to devops and want to learn everything from scratch. Could you please suggest a course on udemy or any other platform that covers beginner to intermediate or advanced level?
Thanks in advance
https://redd.it/13rbgas
@r_devops
Posted by u/LeadershipTasty3507 - No votes and no comments
Devcontainers in k8s
Hey there,
I am a developer who sees great potential in devcontainers. I am reorienting myself toward DevOps because I hate all the obstacles present in regular development, and I would like to make my coworkers' lives easier.
I already did that by Dockerizing one of our projects, but devcontainers are the next step.
My plan is to start experimenting with devcontainers in our k8s.
Is there any open-source solution available that provides functionality similar to GitHub Codespaces?
I need to learn more. Give me some resources to look through. Thanks for the headache in advance!
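For anyone unfamiliar with the format: a devcontainer is described by a small JSONC file checked into the repo, which tools like VS Code (and Codespaces-style services) read to build the environment. A minimal hypothetical example, where the name, port, and command are illustrative assumptions:

```jsonc
// .devcontainer/devcontainer.json (hypothetical example)
{
  "name": "backend-dev",
  // any OCI image works; this is one of the stock devcontainer base images
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "forwardPorts": [8080],
  // runs once after the container is created
  "postCreateCommand": "echo ready"
}
```

Because the definition is just an image plus metadata, the same file can back a local Docker workflow or a pod in k8s, which is what makes the Codespaces-like self-hosted route plausible.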
https://redd.it/13rczr7
@r_devops
Posted by u/gnivirht_invest - No votes and no comments