How do you handle SSL certs for dynamic sub-subdomains like feat321.dev.example.com?
I’m in the middle of creating a way for our team to have preview apps for open Pull Requests.
We have a commercial wildcard certificate for *.example.com. As you all know, this wildcard only works for first-level subdomains like dev.example.com.
We agreed to use domains like feat321.dev.example.com for the preview apps. Given that another commercial wildcard cert just for this use case is too expensive: how do you tackle this problem?
Do you use Let's Encrypt certs for the specific domains, even if you have to create multiple ones per hour and maybe even delete them again within a few minutes?
Or do you use a Let's Encrypt wildcard cert - which is cumbersome to renew due to the DNS TXT record challenge that has to be altered every three months?
Or do you maybe come up with some other domain structure like dev-feat321.example.com for the sake of simplicity?
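For what it's worth, the wildcard-renewal pain goes away if your DNS provider has an API with a certbot plugin; the TXT record is then created and cleaned up automatically every renewal. A sketch, assuming the zone is hosted in Route 53 (email and domain are placeholders):

```shell
# Issue/renew a wildcard for the preview-app level via DNS-01 with no
# manual TXT edits. certbot-dns-route53 reads AWS credentials from the
# usual environment/credentials-file locations.
certbot certonly \
  --dns-route53 \
  -d '*.dev.example.com' \
  --non-interactive --agree-tos -m ops@example.com

# A cron job or systemd timer then keeps it fresh indefinitely:
certbot renew --quiet
```

Any other DNS plugin (Cloudflare, etc.), or an acme.sh/lego hook for providers without a certbot plugin, works the same way.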
https://redd.it/rgkjtp
@r_devops
What I can do to do more and more devops things?
I work for a company where I'm the sole "server guy". Fully remote, all of our infrastructure is in Digitalocean (and a few clients in AWS). All servers are managed by me, deployed by me, backed up by me and so on.
We have a very strong dev team, so I don't need to help them much; I'm not a dev myself, I can help understand some problems from a more out of the box perspective but that's it. They pretty much handle themselves. When shit hits the fan and they don't know what to do they either go to their lead dev, the company owner, or me; when the lead dev doesn't know how to handle it he goes to company owner; I'm the last resort when it's not a development challenge.
What I do daily:
- orient devs on what to focus on (project management), test their work, give feedback, write new vectors for them to focus on the next day/push
- solve problems the devs don't know how to / don't have access to solve, like installing libraries, reconfiguring PHP, setting up Apache/Nginx/Elasticsearch/whatnot to handle the workload
- solve management requirements, like scripting backup and maintenance, or writing data-normalization scripts to filter what devs need to feed to their code to attain client objectives
- solve "lack of knowledge" issues, like when devs don't know how to handle a certain workload and I know a service/software that does just that
- solve "lack of creativity" issues, like when a dev doesn't know how to handle a problem and I can think of a straightforward way to solve it but can't code the solution myself
- research when even the company owner doesn't know if something is possible
There's no need for Terraform/Ansible at our company because 99.9% of our work is web development, so 99% of servers use the same structure (PHP, Apache, yada yada); I handle most of our staging environment on a single big server (instead of several smaller ones, to save on cost of operation), and deploy to a tailored size when it goes live.
There's also not much leeway to get involved in CI/CD because, like I said, we do mostly webdev, so no "new features all the time". I'd bet 50% of our workload is Laravel and around 30% Magento.
Fact is that I earn 20 USD/h and I have a lot of leeway to do more hours a day. My kids need a special-needs school next year, so I'm looking for tips on what I could do to put in more hours at my job and also bring more value to the company. Make things better.
I'm most reactive to events in the company and that gets me around 40h to 60h a month; I would love to see that reach 200h.
What would you guys suggest?
https://redd.it/rgnsgm
@r_devops
Can't SSH to own host with key auth
I created an `appuser` on Linux, then generated SSH keys with the right permissions.
(operations via appuser)
$ ls -la /appuser/
...
drwx------ 2 appuser appuser 20 1 2 01:01 .ssh
$ ls -la /appuser/.ssh
drwx------ 2 appuser appuser 80 1 5 01:02 .
drwxr-x--- 10 appuser appuser 4096 1 5 01:02 ..
-rw-r--r-- 1 appuser appuser 437 1 5 01:02 authorized_keys
-rw------- 1 appuser appuser 1675 1 5 01:02 id_rsa
-rw-r--r-- 1 appuser appuser 437 1 5 01:02 id_rsa.pub
-rw-r--r-- 1 appuser appuser 2670 1 5 01:02 known_hosts
I copied the id_rsa.pub key into authorized_keys, then ran:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
In the `/etc/ssh/sshd_config`:
#PasswordAuthentication yes
When SSHing to the host itself:
$ ssh (self IP)
it requests a password:
appuser@(self IP)'s password:
Why? Which permission is wrong?
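For reference, the listed modes (700 on .ssh, 600 on id_rsa) look fine, so the next step is usually to stop guessing at permissions and ask the server why it skipped the key. A diagnostic sketch (the journal unit may be `ssh` rather than `sshd` on Debian/Ubuntu):

```shell
# Client side: watch whether the key is offered and rejected.
ssh -v appuser@localhost 2>&1 | grep -i 'Offering\|publickey\|password'

# Server side: confirm key auth is enabled and check StrictModes, which
# silently ignores authorized_keys if the home dir or ~/.ssh is writable
# by group/others.
sudo sshd -T | grep -Ei 'pubkeyauthentication|authorizedkeysfile|strictmodes'
sudo journalctl -u sshd -n 20        # or: sudo tail /var/log/auth.log

# On SELinux systems, a copied authorized_keys often has the wrong label:
restorecon -Rv ~/.ssh
```

The auth log line (e.g. "Authentication refused: bad ownership or modes for directory ...") names the exact directory sshd objects to.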
https://redd.it/rgorur
@r_devops
I get to pick 1 online course for professional development, what should I pick to enhance my employability?
I'm a build/release engineer whose main tools each day are Jenkins, Docker, and various AWS services. I can also code in Python, Bash, and Groovy. As alluded to in the title, this year my company has offered to pay for one online course of my choosing to up-level my general devops skills. Since the tech I work with is not very cutting edge, I'm not in a position to know, so my question for the community is: what do you think would be the single highest-leverage investment I could make to improve my employability in the current market (I'm not actively job hunting atm, but maybe in the new year)? Probably something new or in high demand, but open to all ideas!
https://redd.it/rgmt8e
@r_devops
A Pipeline that creates pipelines?
Hello,
Perhaps I am at the brink of insanity, because at face value this does seem ridiculous, and I've been spinning my wheels for weeks. But hear me out: in a fast-growing organization with new projects/modules being spun up every day with the exact same deployment process, I was thinking to myself, how can I automate the creation of these pipelines? Specifically with AWS CodePipeline/AWS CodeBuild.
There is no way to scan github and create these pipelines automatically with AWS. So I was thinking to myself, how could I make this possible?
So at face value, AWS treats (almost) everything as a resource. Whether that be an API Gateway, an ECR repo, EC2, CodeBuild, or CodePipeline, they are all just "resources".
So I was thinking, what's to prevent me from creating a pipeline that, well, creates a resource - specifically another CodePipeline resource?
The basic principle is this -- and feel free to call this ridiculous because it most likely is.
Please note, this was quickly written and obviously there are some intricacies that need to be refined, but here's the quick and dirty rundown:
I set up a Lambda run on a cron schedule (or triggered manually) that scans our organization for repositories. For each repository it looks for a terraform file that references a terraform module handling the setup, stages, etc. of the pipeline we want created. The base terraform file in the repository contains just boilerplate such as the repository source URL, buildspec, type of deployment, and any additional non-secret env variables. If the repository contains the terraform file, the Lambda checks out the latest version and stores the source code in an S3 bucket under a key named after the repository, which triggers the CodePipeline to execute. The pipeline then takes that source and, in the build step, runs the terraform script, which sets up the pipeline for that repository. If the pipeline has already been created and the terraform file hasn't changed, terraform is a no-op and nothing changes; if the terraform file has been updated, the pre-existing pipeline gets updated.
From there bam, we have a pipeline that has been automatically created without any manual work.
This is obviously an entirely novel concept -- but does this seem absolutely ridiculous, or could it actually be a feasible solution?
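The scan step described above could be sketched as a script (runnable from cron or adapted into the Lambda) using the GitHub CLI; the org name, the terraform marker file, and the bucket name below are all hypothetical:

```shell
#!/usr/bin/env bash
# For each repo in the org, if it carries the pipeline terraform file,
# zip the default branch and drop it in S3 under the repo's name, which
# is what triggers the "pipeline-creating pipeline".
set -euo pipefail
ORG=my-org                          # placeholder
BUCKET=pipeline-factory-sources     # placeholder

for repo in $(gh repo list "$ORG" --limit 500 --json name -q '.[].name'); do
  # Does this repo opt in by containing the marker terraform file?
  if gh api "repos/$ORG/$repo/contents/pipeline.tf" >/dev/null 2>&1; then
    gh repo clone "$ORG/$repo" "/tmp/$repo" -- --depth 1
    (cd /tmp && zip -qr "$repo.zip" "$repo")
    aws s3 cp "/tmp/$repo.zip" "s3://$BUCKET/$repo.zip"  # pipeline source key
  fi
done
```

The S3 object key doubles as the CodePipeline source reference, so one factory pipeline can serve every repo.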
https://redd.it/rglrtv
@r_devops
Has anyone here used Ansible and Packer with Proxmox?
I am trying to get my template provisioned with Ansible. I finally have it so that Packer is able to create the VM, configure it, reboot, then attempt to connect over SSH. The output shows "waiting for SSH to become available", then "connected to SSH" - but then the handshake fails because the host is unreachable. This is the pastebin I have from my console. I ran Ansible with -vvvv, so I can see that it fails to connect via SSH and is trying to connect to 127.0.0.1.
Packer's documentation makes it seem that it's just: add the provisioner, add the yml, and you are good.
https://redd.it/rgjt58
@r_devops
Is kubernetes in demand?
Just took a look at Google Trends, and it seems interest has dropped quite a lot... I'm wondering if it's still worth the trouble of learning it, and also what could be the culprit for the drop in interest?
Thanks!
https://redd.it/rgu7hr
@r_devops
Deploying microservices in a consistent way using different gitlab repositories
Hi,
I'm looking for a good way to organize deployment of our solution, which consists of multiple apps, using GitLab and K8s.
Our SaaS app is made of:
A backend app, mostly our API (django)
A user app (React)
An admin app (React)
Both frontend apps are connected to the API.
Currently the backend and user apps are in the same GitLab repository and are deployed using a CI/CD pipeline that builds the apps, builds the Docker images, and deploys them to K8s using a Helm chart. The Helm chart is located in the same repo.
I recently added the admin app in another GitLab repository, and I'm concerned about keeping all apps consistent - that is to say, both frontend apps have to be compatible with the API.
I'm thinking about adding another repository especially for the deployment (let's call this Deploy Repo). This repo could contain:
3 git submodules, one for each sub app,
The Helm chart,
Other files related to deployment
I thought about using git submodules to be able to have separate projects. The devs would update the right versions in the Deploy Repo when a service is ready to be deployed.
The push would then trigger the CI/CD pipeline, build all apps, and deploy all together using the Helm Chart.
Is it a good idea to use submodules like this? What would be the best practice to link multiple projects together?
I'm also wondering how I could build only the sub-project that has changed instead of all projects.
I have seen that it could be possible to link the pipelines of all subprojects together, and use artifacts to pass the needed files, but I'm not sure if this is a good solution.
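The Deploy Repo flow described above would look roughly like this (repo URLs, paths, and tags are all hypothetical):

```shell
# One-time setup of the Deploy Repo: one submodule per app, plus the
# Helm chart living alongside them.
git init deploy-repo && cd deploy-repo
git submodule add https://gitlab.example.com/acme/backend.git   apps/backend
git submodule add https://gitlab.example.com/acme/user-app.git  apps/user-app
git submodule add https://gitlab.example.com/acme/admin-app.git apps/admin-app

# When a service is ready to deploy, a dev pins its submodule to the
# released commit/tag and commits the new pointer:
(cd apps/backend && git fetch && git checkout v1.4.2)
git add apps/backend
git commit -m "deploy: backend v1.4.2"
git push   # triggers the CI/CD pipeline, which builds and runs helm upgrade
```

One nice property of this layout: the Deploy Repo's history becomes an audit log of exactly which app versions were deployed together, which directly addresses the API-compatibility concern.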
https://redd.it/rgtuls
@r_devops
CoTurn server
Hi all. I have installed coturn. Can someone please advise on how I can perform load testing for my server?
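coturn ships with `turnutils_uclient`, which is the usual starting point for load testing a TURN server; a sketch, where the server address, credentials, and counts are placeholders:

```shell
# Simulate 100 concurrent clients sending 1000 messages each against
# the TURN server; -u/-w are the long-term credentials configured in
# turnserver.conf.
turnutils_uclient -u testuser -w testpass \
  -m 100 -n 1000 \
  turn.example.com
```

Ramping `-m` up while watching CPU, bandwidth, and coturn's own logs gives a rough capacity figure; for realistic WebRTC-style load there are also third-party TURN benchmarking tools.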
https://redd.it/rgvzsd
@r_devops
ECS + FarGate + AutoScaling
Hi all,
​
I am new to AWS, and I am quite confused about ECS and Fargate with autoscaling. How do I launch more of my services (all encapsulated in Docker containers) to balance the load? I am a newbie, I have no clue about AWS, and I am working on my app server solo.
Also, any good tutorial for the same?
Thanks!
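For the record, "launching more" in ECS/Fargate terms means raising the service's desired task count, and Application Auto Scaling can do that automatically based on load; a sketch with placeholder cluster/service names:

```shell
# Let Application Auto Scaling manage the service's DesiredCount
# between 2 and 10 tasks ...
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 2 --max-capacity 10

# ... and add/remove tasks to keep average CPU around 60%.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --policy-name cpu60 --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'
```

An Application Load Balancer in front of the service then spreads requests across however many tasks are running.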
https://redd.it/rgwmks
@r_devops
Running Collaborative ML Experiments With DVC
Sharing machine learning experiments to compare models is important when you're working with a team of engineers. You might need to get another opinion on an experiment's results, to share a modified dataset, or even to share the exact reproduction of a specific experiment.
The following tutorial goes through an example of sharing an experiment with DVC remotes: Running Collaborative Experiments - using DVC remotes to share experiments and their data across machines
Setting up DVC remotes in addition to your Git remotes lets you share all of the data, code, and hyperparameters associated with each experiment so anyone can pick up where you left off in the training process. When you use DVC, you can bundle your data and code changes for each experiment and push those to a remote for somebody else to check out.
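Concretely, the flow boils down to a DVC remote for the data plus the `dvc exp` commands for the experiments themselves (the storage URL and experiment name below are placeholders):

```shell
# One-time: point DVC at shared storage, next to your Git remote.
dvc remote add -d storage s3://my-bucket/dvc-store

# Share tracked data/models alongside your normal git push:
dvc push

# Share a specific experiment (its code, params, and data) ...
dvc exp push origin my-experiment

# ... which a teammate can then fetch and continue training from:
dvc exp pull origin my-experiment
```

Git carries the code and pipeline definitions; the DVC remote carries everything too large for Git, so together they reproduce the full experiment state on another machine.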
https://redd.it/rgxnsc
@r_devops
Onboarding juniors in DevOps
As the company I work for grows, it has become difficult to hire juniors, because the time to train them simply doesn't exist (like in most companies, I think).
So we created a program to hire juniors, train them for four months, then put them in a team according to their competence and liking. We are a cloud provider, so the spectrum of choice is pretty large.
Here's the article describing our program in case you're interested :)
https://blog.scaleway.com/devops-onboarding-juniors/
let me know what you thought of it!
https://redd.it/rgyoab
@r_devops
Question about service
I have created a deployment that simply runs 3 copies of a pod, each running nginx:
root@k8s-master:~# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-clusterip NodePort 10.108.90.12 <none> 80:30080/TCP 145m
root@k8s-master:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
deployment-svc-example 3/3 3 3 165m
root@k8s-master:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-svc-example-9f886b7b-5vwds 1/1 Running 0 165m 192.168.126.29 k8s-worker2 <none> <none>
deployment-svc-example-9f886b7b-f4wk4 1/1 Running 0 165m 192.168.126.28 k8s-worker2 <none> <none>
deployment-svc-example-9f886b7b-scplx 1/1 Running 0 165m 192.168.194.106 k8s-worker1 <none> <none>
As seen in the kubectl get pods output, two of the 3 pods run on k8s-worker2 and one runs on k8s-worker1.
Thing is, when I open the browser and visit the public IP address of the MASTER (not k8s-worker1/2), it still works, even though no pod runs on the master.
Why is that? Did it route me to one of the pods in the worker nodes? What happened here?
Thanks ahead!
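What's described above is standard NodePort behaviour: kube-proxy programs the forwarding rules on every node in the cluster, control-plane included, so any node IP answers on the port and forwards (DNAT) to one of the backing pods wherever they run. A couple of commands to see it, with a placeholder node IP:

```shell
# The service's actual backends - the pod IPs traffic gets forwarded to:
kubectl get endpoints svc-clusterip

# kube-proxy's DNAT rules exist on *every* node (iptables mode):
#   iptables-save | grep 30080

# So this works from any node's IP, regardless of where the pods landed:
curl http://<any-node-ip>:30080/
```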
https://redd.it/rgyy7w
@r_devops
A-List of Best Serverless and AWS Lambda Courses For Beginners in 2021
Learn the fundamentals of serverless architecture and build applications in the cloud with these best serverless and AWS Lambda courses.
https://redd.it/rgxdr2
@r_devops
Merge nexus data from 2 different instances then migrate to a new server
I have a server with 2 different instances of sonatype nexus running (don't ask me why, I just inherited the whole thing like that). I will call that machine "server B"; one instance is on Nexus 3.0 and the other on Nexus 3.23.
I was given a new machine, "server A", where I have deployed a fresh installation of Nexus 3.37, and now I'm researching which data folder I'm supposed to move. According to the official docs (https://help.sonatype.com/repomanager3/installation-and-upgrades/directories#Directories-DataDirectory), all I need to move is the data directory (commonly referred to as $data-dir or ${karaf.data}). To this point, crystal clear.
Now I don't know whether it is possible to merge the data of the 2 different Nexus instances that I have on server B. If I can somehow correctly merge the data of these 2 instances, then I can move that merged data to the new installation on server A. Does that make any sense? Hopefully it does :-D
What can I do in this situation? Any ideas, brainstorming, and advice are very welcome.
Cheers.
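One caveat worth stating before any commands: to my knowledge, Nexus data directories can't simply be merged at the file level - the blob stores and component database are internally consistent per instance. A common route is to migrate one instance's data dir wholesale and then re-publish (or proxy) the other instance's repositories into it. Moving a single data dir might look like this, with assumed paths and service names:

```shell
# Stop the instance so the data dir is quiescent, then copy it over.
sudo systemctl stop nexus
rsync -a /opt/sonatype-work/nexus3/ serverA:/opt/sonatype-work/nexus3/

# On server A, make sure the service user owns it before starting 3.37,
# which will migrate the schema on first boot.
ssh serverA 'chown -R nexus:nexus /opt/sonatype-work/nexus3 \
             && systemctl start nexus'
```

Also worth checking in the upgrade notes whether a jump from a version as old as 3.0 is supported directly, or whether it needs an intermediate upgrade step first.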
https://redd.it/rh15zh
@r_devops
SSH Tunneling
I want to host SonarQube on an EC2 instance in a secure way, following best practices. I came across an approach where you host your instance in a private subnet and still access it. My use case is to host a service (SonarQube) on port 9000, allow only my developers to access it, and configure the Security Group accordingly. I read that we can map the EC2 URL to localhost and access it through the .pem file.
Is this the correct way to proceed? If yes, how should I move forward?
Thanks!
https://redd.it/rgvr9b
@r_devops
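The "map the EC2 URL to localhost" part usually means SSH local port forwarding through a bastion host. A minimal sketch, assuming a bastion in a public subnet whose Security Group allows your developers' IPs on port 22, and a private SonarQube instance reachable from the bastion; the key path, private IP, and hostname below are placeholders:

```shell
# Sketch: build the SSH local-forward command for reaching a private
# SonarQube on port 9000 through a bastion. Args: key path, private IP,
# bastion host (all placeholders in this example).
tunnel_cmd() {
  echo "ssh -i $1 -N -L 9000:$2:9000 ec2-user@$3"
}

# Print the command to run; while it is running, SonarQube is available
# on the developer's machine at http://localhost:9000
tunnel_cmd /path/to/dev.pem 10.0.2.15 bastion.example.com
```

With this setup SonarQube's own Security Group only needs to allow port 9000 from the bastion, never from the internet; AWS SSM Session Manager port forwarding is a common alternative that removes the bastion entirely.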
How do you make OS upgrades without downtime?
We run our services in Docker containers. We have a few worker containers, a front-end app container, and a MySQL container (host-mounted volumes).

We need to do some OS upgrades, but we don't want to shut down our system completely. Right now we just take the hit and plan for 10-30 minutes of downtime once per year. This isn't ideal. I'm trying to do some type of failover, but with the database it's kind of challenging.

Anybody doing something similar who can share your strategy?

Thanks!
https://redd.it/rh5213
@r_devops
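The stateless containers can simply be started on a second host before draining the first; the database is the hard part, and one common pattern is a MySQL replica on the second host that gets promoted for the duration of the upgrade. A hedged sketch of the manual steps, assuming replication is already running (hostnames and the pre-8.0.22 "SLAVE" syntax are assumptions for this example):

```shell
# Sketch: manual MySQL failover to free the original host for an OS upgrade.
# Assumes a second host is already replicating; all names are placeholders.

# 1. Pause writes (app maintenance mode), then confirm the replica is caught
#    up -- Seconds_Behind_Master should read 0:
#      mysql -h replica-host -e "SHOW SLAVE STATUS\G"

# 2. Promote the replica so it stops applying the old primary's binlog:
#      mysql -h replica-host -e "STOP SLAVE; RESET SLAVE ALL;"

# 3. Repoint the app containers at replica-host, re-enable writes, upgrade
#    the old host at leisure, then rebuild it as a replica of the new primary.
```

The write-pause in step 1 shrinks the outage from 10-30 minutes to seconds; tools like Orchestrator or a managed database remove the manual steps entirely.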
Cross account schema
Is there any way we can allow consumers from other AWS accounts to access the schemas in the Glue Schema Registry?
Would appreciate any input.
https://redd.it/rh3nwk
@r_devops
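One route worth checking is a Glue Data Catalog resource policy in the producer account that grants the consumer account read access to the registry and its schemas, combined with matching IAM permissions in the consumer account. A hedged sketch of such a policy (account IDs, region, and registry name are placeholders; verify the exact action list against the Glue Schema Registry docs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": [
        "glue:GetSchema",
        "glue:GetSchemaVersion",
        "glue:GetSchemaByDefinition",
        "glue:ListSchemaVersions"
      ],
      "Resource": [
        "arn:aws:glue:us-east-1:111111111111:registry/my-registry",
        "arn:aws:glue:us-east-1:111111111111:schema/my-registry/*"
      ]
    }
  ]
}
```

It would be attached in the producer account with `aws glue put-resource-policy`; consumers then reference the registry by its full ARN in their serializer/deserializer config.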
Having 2 frontends (Vue 2, no TS, Webpack; Vue 3 with TS, Vite) running concurrently
Hi, we are doing a migration from an old app (Vue 2, no TS, Webpack) plugged into a Node.js API.
Both the front end and the back end are in bad shape, carry technical debt, etc.
They are used in production. We don't have time to recode them.
We are thinking about using a new app/service for all new features and porting pieces of the old codebase only when we need them for new work.

For the back end, no problem: we will add /v2 routes and develop our features / rebuild there with a new API.

But for the front end, we can't migrate it and clearly don't have time to recode it all.
Is there any way we could mix two front-end projects?

E.g.:
mycompany.com/user -> uses the front end of the old project
mycompany.com/operations -> uses the front end of the new project

I'm more back end than front end, so I don't have all the knowledge on how to mix two front-end projects.
All your suggestions are welcome.
Thanks for reading.
https://redd.it/rh7bh2
@r_devops
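The path-based split described above is usually done with a reverse proxy in front of both apps, so neither codebase has to know about the other. A minimal sketch using nginx (hostnames, ports, and paths are placeholders for this example; each app's router base path must match its location):

```nginx
# Sketch: one public entry point, two frontends behind it.
server {
    listen 80;
    server_name mycompany.com;

    # New Vue 3 app takes over specific sections as they are rebuilt.
    location /operations {
        proxy_pass http://new-frontend:5173;
    }

    # Everything else is still served by the old Vue 2 app.
    location / {
        proxy_pass http://old-frontend:8080;
    }
}
```

New paths get added to the first block one by one as features are migrated; when the old app is empty, the fallback block is deleted. Micro-frontend frameworks (e.g. single-spa) offer a tighter integration, but the proxy approach needs no changes to either codebase.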
Software architecture diagramming and design tools
https://softwarearchitecture.tools
https://redd.it/rhb2ze
@r_devops