OneDev 4 - All-in-One DevOps Platform
OneDev is an all-in-one devops platform with git management, issue tracking, and docker/kubernetes based CI engine. Project is open source at [https://github.com/theonedev/onedev](https://github.com/theonedev/onedev)
The 4.0 release features a completely redesigned UI that aims to be professional and beautiful. An online demo is available at [https://code.onedev.io](https://code.onedev.io/)
https://redd.it/jzd6iu
@r_devops
Getting traffic to EKS: Using ALB with Ingress controller
Once your application is running on #AWS EKS, you need to get traffic to it. In this video, Pablo Inigo Sanchez shows how to use the ALB Ingress Controller to do exactly that: https://youtu.be/cRODPz9GXb0
https://redd.it/jzdyki
@r_devops
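For reference, a minimal Ingress manifest for the ALB Ingress Controller might look like the sketch below. The annotations are the commonly documented ones; the names (`my-app`, `my-service`) are placeholders, and `target-type: ip` is what the docs call for when pods run on Fargate:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # required for Fargate pods
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```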
Linux arm vm on M1 macbook
https://github.com/JacopoMangiavacchi/M1-Linux-SSH
https://redd.it/jzhdzl
@r_devops
advice wanted: going from legacy manual releases to devops without scaring the managers
Let me start by stating that my workplace has zero intention of adopting Docker or k8s; management is too scared to move onto modern tech... maybe in five years.
At the moment everything is configured manually: account profiles are controlled centrally (one profile for all environments...), and upgrading the web server updates builds old and new (including development accounts).
So I'm looking to achieve some level of infrastructure as code, in the sense of building up a "runtime location": unzipping predefined versions of tools and preparing the environment using a code-defined profile rather than a system-wide one.
We have four components that require different types of deployment and "runtime environments". Originally I would have liked to use a tool built and used by the community for IaC, but I can't find anything that granular, since the application has to be deployed to a non-root Unix account. My first thought was to create a script (maybe in Ruby or Python) that reads a manifest for each component type, installs the required features (Java, app server, etc.), and then sets up all the environment variables required at runtime.
Does anyone have similar experience with this type of deployment, or recommendations for tools?
https://redd.it/jzk56r
@r_devops
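The per-component manifest the poster describes could be as simple as a small YAML file that the Ruby or Python script reads to unzip tool versions and export environment variables. Everything below (component name, paths, versions) is purely illustrative:

```yaml
# hypothetical manifest for one component's "runtime location"
component: billing-api
runtime_dir: /opt/apps/billing-api
tools:
  - name: java
    version: 8u271
    archive: tools/jdk-8u271.zip
  - name: app-server
    version: 9.0.41
    archive: tools/tomcat-9.0.41.zip
env:
  JAVA_HOME: /opt/apps/billing-api/jdk-8u271
  CATALINA_BASE: /opt/apps/billing-api/tomcat-9.0.41
```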
Looking back on 2020 - what's been the biggest thing to happen to Devops?
With 2020 winding down, I was reflecting on the year and wanted to know what everyone else thought. Good or bad, what in your opinion has been the most significant thing or things to happen in DevOps?
https://redd.it/jzjbdu
@r_devops
Continuous Deployment with Github Actions: An Example
Wrote a blog that takes a deeper dive into setting up CD with Github Actions [https://www.dolthub.com/blog/2020-11-23-continous-deployment-with-github-actions/](https://www.dolthub.com/blog/2020-11-23-continous-deployment-with-github-actions/)
https://redd.it/jzo5ev
@r_devops
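As a quick taste of what such a setup involves, a minimal CD workflow triggered on pushes to main might look like this; the deploy script and secret name are placeholders, not taken from the linked article:

```yaml
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy
        run: ./scripts/deploy.sh   # hypothetical deploy script
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```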
Build Your Kubernetes Operator with the Right Tool
You want to build a Kubernetes Operator for your software. Which tool to choose from? Operator SDK with Helm, Ansible, or Go? Or maybe start from scratch with Python, Java, or any other programming language? In this blog post, I discuss different approaches to writing Kubernetes Operators and list each solution’s pros and cons. All that to help you decide which tool is the right one for you!
# Introduction
[Kubernetes Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) is an **application** that **watches** a custom Kubernetes **resource** and performs **some operations** upon its changes.
This definition is very generic because the operators themselves can do a great variety of things. To make it more digestible, let’s focus on one example that we will use throughout this blog post.
[Full blog post](https://hazelcast.com/blog/build-your-kubernetes-operator-with-the-right-tool/)
https://redd.it/jzpcai
@r_devops
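To illustrate one of the lighter-weight options under discussion: an Operator SDK Ansible-based operator is driven by a `watches.yaml` that maps a custom resource to an Ansible role or playbook. The group and kind below are made-up examples:

```yaml
# watches.yaml for a hypothetical Ansible-based operator
- version: v1alpha1
  group: cache.example.com
  kind: Memcached
  role: memcached   # Ansible role that reconciles the resource
```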
Describe a non-trivial system
Someone asked me during an interview to describe a non-trivial system about which I could speak at great length. This isn't the first time a recruiter has asked that, but I still don't know how to answer. I'm still not sure what they want to know...
https://redd.it/jzkqji
@r_devops
Devs and local testing in a CI/CD pipeline
Based on previous posts:
[https://www.reddit.com/r/devops/comments/j2swua/full_cicd_pipeline_for_degrees_final_assignment/](https://www.reddit.com/r/devops/comments/j2swua/full_cicd_pipeline_for_degrees_final_assignment/)
[https://www.reddit.com/r/devops/comments/jo8jy4/developers_testing_things_in_a_real_cicd_pipeline/gcadtbq/?context=3](https://www.reddit.com/r/devops/comments/jo8jy4/developers_testing_things_in_a_real_cicd_pipeline/gcadtbq/?context=3)
I am still working on my final project, and although I have most of the environment in place, I am still hesitating over one of the first stages of an entire CI/CD pipeline: local testing.
I was aiming to trigger a deployment for each push that any developer makes to their "tests" branches, but from the answers I received on the previous posts, it seems that many people handle local testing on their own PCs, with Docker for example.
If so, how do you review a change you want to test by deploying it, keeping in mind all the minimum dependencies necessary to run it?
My concern is that developers will have to manage many resources on their PCs and maybe deal with configs. (Maybe that's the usual thing, but I don't have experience in that field...)
I would like to hear from you!
https://redd.it/jzrbdu
@r_devops
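One common pattern for the local-testing concern raised above is to encode the minimum dependencies in a Compose file so each developer can bring the stack up with `docker-compose up`. The services, image, and credentials here are placeholders:

```yaml
# docker-compose.yml - hypothetical local test stack
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
```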
I don't really understand how LE renewals work (Ansible related)
Hi there,
I'm currently using the acme\_certificate Ansible module to create a new certificate. The interesting task is the following:
```yaml
- name: create acme challenge
  become: false
  local_action:
    module: acme_certificate
    acme_version: 2
    terms_agreed: yes
    account_key_src: "{{ certs_path }}/account-key.pem"
    src: "{{ certs_path }}/{{ server_dns_name }}.csr"
    cert: "{{ certs_path }}/{{ server_dns_name }}.crt"
    challenge: dns-01
    acme_directory: https://acme-v02.api.letsencrypt.org/directory
    # NOTE: switch to the staging Let's Encrypt endpoint when testing
    # acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    remaining_days: 60
  register: challenge
```
Works fine (pretty cool actually, love Ansible).
Now I'm going to implement a scheduled playbook execution to check the local certs, verify whether they're close to expiration, and then renew them if needed.
Here's the thing: I don't understand how renewal works on LE. I've always used certbot, so I just ignored the underlying complexity.
* Is it actually the same as creating a new cert, with LE tracking the fact that the cert already exists?
* Or do I need a different module/approach?
The overall goal is to avoid hitting rate limits (50 certificates per registered domain per week).
Thanks in advance; I could not find any clarifying docs on this.
https://redd.it/jzeh5h
@r_devops
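For what it's worth, ACME has no separate "renew" operation: a renewal is simply a fresh issuance, and Let's Encrypt treats a certificate with the same set of names as a renewal for rate-limit purposes (renewals don't count against the per-domain limit, only against a separate duplicate-certificate limit). The `remaining_days` option already makes the task idempotent: the module is a no-op until the existing cert has fewer than that many days of validity left. So scheduling the same playbook should be enough; a hedged sketch using Ansible's cron module (the playbook path and log file are placeholders):

```yaml
- name: schedule weekly certificate renewal check
  ansible.builtin.cron:
    name: "renew certificates"
    minute: "0"
    hour: "3"
    weekday: "1"
    # hypothetical path to the playbook on the control host
    job: "ansible-playbook /opt/ansible/certs.yml >> /var/log/cert-renew.log 2>&1"
```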
Alert Aggregation Platform
I realize this is a strange question... but right now we have alerts coming in from Pingdom, AWS, Elasticsearch logs, Rollbar, and some monitoring developers wrote, all of which get sent to Slack. We have about a billion Slack channels, and things are getting lost in the shuffle. PagerDuty *almost* seems like a logical choice, but sometimes we want to just aggregate more "informational" items and not blast out an alert.
Is there some good off-the-shelf system for actively aggregating all these alerts from multiple sources?
https://redd.it/jzu46r
@r_devops
What would you prefer in a Control Panel?
I am working on a control panel and I am curious what administrators would prefer.
I am mainly needing to know about how you would prefer to set configuration options for a specific software like Nginx, Apache, PHP-FPM, MySQL, etc.
Would you prefer a form-style list of text-field options that you can easily change?
- Max Upload: 200MB
- Max Connections: 5
- etc.
Or would you prefer to edit the configuration file?
From my experience using other control panels, it was really nice to be able to set the max upload size in a text field, then hit save and have PHP or Nginx reload. This also prevented me from breaking the configuration altogether when I wasn't that experienced. However, I have also had times where the option I needed to change was not present, and having the ability to edit the configuration file would have been better. Or I might need to add some complex configuration options (like a special Nginx location block) that really aren't achievable with a simple option form.
So would you prefer a simple options form, which might be limited as long as it covers all the common settings? Or just the ability to edit the configuration file, knowing that you are responsible for errors and conflicts that might take your whole server down?
[View Poll](https://www.reddit.com/poll/jzun5t)
https://redd.it/jzun5t
@r_devops
recommendation for VPN/Zero trust solution for AWS workshop
Hi Reddit,
I'm getting lost in the current landscape of VPN/zero-trust solutions for an AWS & Kubernetes workshop.
I used OpenVPN in the past and liked the ease of installation & maintenance. I looked at the new Hashicorp Boundary project, but its setup guide just sucks. I would be happy to find something that moves the perimeter from the network to the user.
https://redd.it/jzh4ud
@r_devops
Looking for feedback about low-code MLOps platform
Hi there,
Together with my friends, I have created a concept that can take a deployed model (TensorFlow, Keras, PyTorch) and serve it as an API for inference or integration into a website/platform. No config needed, no infrastructure setup. The whole idea is to make it super low-code or no-code.
Would you see any value in this approach? Would you elaborate on your challenges with AI deployments, or be interested in talking with us and giving us some feedback?
Please have a look at our website: [https://syndicai.co](https://syndicai.co/)
https://redd.it/jzh23p
@r_devops
Easy, intermediate and advanced devops tasks you might have to do as a developer?
Could you list a bunch of tasks I can do to challenge myself that might actually be useful for me as a developer? A dozen for each level of difficulty would really go a long way toward helping me grow as a developer.
https://redd.it/jzxvs6
@r_devops
Port Domain from Digital Ocean to Google Cloud
I have a domain registered on Digital Ocean, and I also have a website hosted on a Digital Ocean server.
Now I want to deploy a React JS application on Google Cloud Platform, and I want to map its IP address to a subdomain of the domain already registered with Digital Ocean.
So how do I achieve this? Any solution would be appreciated :)
https://redd.it/jzcrco
@r_devops
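Since Digital Ocean stays the DNS host here, this usually just means adding an A record for the subdomain pointing at the GCP instance's external IP. In zone-file notation (with a placeholder name and a documentation-range IP), the record would look something like:

```
; hypothetical A record in the Digital Ocean DNS zone for example.com
app.example.com.   3600   IN   A   203.0.113.10
```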
What is your favorite learning platform?
I recently left Linux Academy/ACG due to the content going downhill... I am looking for a new learning platform and wondering what would be a good alternative.
https://redd.it/jz87ym
@r_devops
Does your company use 1 cloud provider only, or do they float between clouds, or use several?
The company I work for is pretty gung ho about Azure. Curious if it would make sense for me to recommend trying other providers that make hosting OCP4 easiest or cheapest.
https://redd.it/jz6oe3
@r_devops
What does patching mean to you?
Hi all, I hope this is a good place to ask this:
I know that patching is about keeping the software on hosts up to date, but does it have a common meaning beyond that, or is patching different from company to company and even from host to host?
I've read the general description on patching but I haven't seen much regarding a recommended approach so I have a few questions (these questions apply to long lived hosts):
- Do you get your configuration management tool (e.g. Chef) to install the latest version of all software on install/each run? In other words, you don't pin versions and always get the latest?
- Do you instead install a specific version using said configuration management tool and run the patching manually e.g. by running yum update?
- Do you prefer to install software using the default package manager so that patching is as easy as running a single yum command? If so, is there a good strategy around patching software that can't be installed using the default package manager?
Any tips and tricks welcome and thank you in advance!
https://redd.it/jz4b8e
@r_devops
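On the yum side of these questions, one middle ground is to pin application versions in the config management tool but drive periodic security-only updates through the same tool; Ansible's yum module supports this directly. A sketch, not a full patching strategy:

```yaml
- name: apply security updates only on yum-based hosts
  ansible.builtin.yum:
    name: "*"
    state: latest
    security: true   # restricts the update to packages with security errata
```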
DNS in docker container
I thought DNS was a server that resolves a domain name into an IP address. Now I've learned that there are DNS settings on my Docker container. Can anyone explain to me what DNS is, and how the DNS inside a container differs from the server that resolves my HTTP requests?
https://redd.it/k0a7vr
@r_devops
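Concretely, the container-level setting just tells the container's resolver which DNS server to query; it can be set per container (or per service in Compose), and on user-defined networks Docker otherwise injects its own embedded DNS at 127.0.0.11. A small Compose fragment pointing one service at public resolvers:

```yaml
# docker-compose.yml fragment - override a service's DNS servers
services:
  web:
    image: nginx:alpine
    dns:
      - 8.8.8.8
      - 1.1.1.1
```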
Kubernetes on Premise The Hard Way - need tips
Hey guys, I'm currently building a test cluster to teach myself K8s++. I'm a developer by trade, but expanding my skillset is always nice :) So please note that my knowledge is limited and lacking, and I have only a minimal amount of time available, so feel free to point out my mistakes, especially in my assumptions.
**Description, context.**
After some trials, I have successfully set up a three-node cluster with one control plane via Vagrant and Ansible. The whole thing runs on Hyper-V on Windows, with vagrant+ansible+kubectl+helm run through a WSL2 Ubuntu VM. At this point I have successfully created mDNS externalDNS, Rook+Ceph storage, and methods to provision the necessary deployments (from configured Helm charts to custom shell scripts for Nexus's configuration via REST). I'm trying to make my cluster work airgapped, so after initial provisioning (after all the scripts are run) I want my cluster to be 100% separate and autonomous; both as an exercise mimicking the sector I work in (banking requires airgapping or severe monitoring), and because of cases like the NPM left-pad fiasco or Docker Hub rate limiting.
[Current state of affairs](https://github.com/Venthe/Personal-Development-Pipeline/tree/develop). Please note, that shell scripts are now outdated - what is important is `provisioning/cluster_vagrant` and `kubernetes/helm-apps`. Most of the passwords (in LDAP for example) are `secret`, and occasional keys are some sample keys. Don't worry, I'm not posting my own ones :)
Not everything is automated at the moment; namely, not every Ansible playbook is executed via Vagrant - afaik playbooks 7* are not yet linked.
**My problems:**
* DNS: My goal is to access services via service.my-domain.internal. If possible, I'd wish to contain the solution to cluster only.
* At this point I have a working [Avahi/mDNS externalDNS fork](https://github.com/tsaarni/k8s-external-mdns). The problem is, I can only create hosts with domain.local; subdomains are NOT working.
* `LoadBalancer` services are resolvable by hostname
* Ingresses with subdomains do not work. They can be accessed with a manually set hostname (i.e. `curl ... --header 'Host: subdomain.domain.local'`) but there is no NS/CNAME/A record for them
* I wish to keep this contained. My current idea is to create `coreDNS` deployment, `externalDNS` resource operator and use this DNS to resolve hostnames from cluster by exposing `LoadBalancer` for `coreDNS`
* Problem is, while I can do `nslookup` from inside the cluster, I cannot do this from outside
* I've tried this by setting windows DNS address, to no avail. WSL2 `nslookup` did not work as well.
* To add insult to injury, *I cannot change DNS in my home router* - ISP is blocking this setting.
* To work around this problem, I can try to somehow expose CoreDNS via `LoadBalancer` and access it by setting my own machine **OR**
* Create VM with OpenWRT or something like that to act as a proxy router **OR**
* Create a proxy, although I have never done this **OR**
* My wildest idea yet, create VPN deployment inside cluster, and tunnel my host to VM through VPN and set DNS this way
* Blob mirroring.
* I wish to mirror all required blobs that are pulled by my system. This means Docker images, Helm packages, Vagrant boxes, NPM packages, APT packages and so on; the goal is to be completely independent of remote systems after the initial setup
* While I can configure and provision Nexus, I have yet to figure out how to automagically push all traffic from within the cluster on certain paths through Nexus; ideally, if I pull any Docker image, all software should *think* it is calling the original repository, while in reality it is calling my Nexus service
* This sounds like a proxy to me, but I don't even know where to start in the context of reconfiguring the whole system through a proxy contained (if possible) within the system
I am afraid, that keeping everything inside cluster may create chicken-and-egg
Hey guys, I'm currently building a test cluster to teach myself K8s++. I'm a developer by trade, but expanding my skillset is always nice :) So please note that my knowledge is limited and lacking, and I have only a minimal amount of time available, so feel free to point out my mistakes, especially in my assumptions.
**Description, context.**
After some trials, I have successfully set up a three-node cluster with one control plane via Vagrant and Ansible. The whole thing runs on Hyper-V on Windows, with vagrant+ansible+kubectl+helm run through a WSL2 Ubuntu VM. At this point I have successfully created the mDNS externalDNS, Rook+Ceph storage, and methods to provision the necessary deployments (from configured Helm charts to custom shell scripts for Nexus's configuration via REST). I want my cluster to work air-gapped; after initial provisioning (after all the scripts have run) I want my cluster to be 100% separate and autonomous - both as an exercise mimicking the sector I work in (banking requires air-gapping or severe monitoring), and because of cases like the NPM left-pad fiasco or Docker Hub rate limiting.
[Current state of affairs](https://github.com/Venthe/Personal-Development-Pipeline/tree/develop). Please note that the shell scripts are now outdated - what matters is `provisioning/cluster_vagrant` and `kubernetes/helm-apps`. Most of the passwords (in LDAP, for example) are `secret`, and the occasional keys are sample keys. Don't worry, I'm not posting my own ones :)
Not everything is automated at this moment; namely, not every Ansible playbook is executed via Vagrant - AFAIK playbooks 7* are not yet linked.
**My problems:**
* DNS: My goal is to access services via service.my-domain.internal. If possible, I'd like to contain the solution to the cluster only.
* At this point I have a working [Avahi/mDNS externalDNS fork](https://github.com/tsaarni/k8s-external-mdns). The problem is, I can only create hosts with domain.local; subdomains are NOT working.
* `LoadBalancer` services are resolvable by hostname
* Ingresses with subdomains do not work. They can be accessed with a manually set hostname (e.g. `curl ... --header 'Host: subdomain.domain.local'`), but there is no NS/CNAME/A record for them
* I wish to keep this contained. My current idea is to create a `CoreDNS` deployment plus an `externalDNS` resource operator, and use this DNS to resolve hostnames from the cluster by exposing `CoreDNS` via a `LoadBalancer`
* Problem is, while I can do `nslookup` from inside the cluster, I cannot do it from outside
* I've tried setting the Windows DNS address, to no avail; `nslookup` from WSL2 did not work either.
* To add insult to injury, *I cannot change DNS in my home router* - ISP is blocking this setting.
* To work around this problem, I can try to somehow expose CoreDNS via a `LoadBalancer` and use it by setting the DNS server on my own machine **OR**
* Create a VM with OpenWRT or something similar to act as a proxy router **OR**
* Create a proxy, although I have never done this **OR**
* My wildest idea yet: create a VPN deployment inside the cluster, tunnel my host to the cluster through the VPN, and set DNS this way
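The first workaround idea (exposing CoreDNS via a `LoadBalancer` so a machine outside the cluster can use it as a resolver) could look roughly like the manifest below. This is a minimal sketch: the `coredns-external` name, the `dns` namespace, and the `app: coredns` selector are my assumptions and must match whatever the actual CoreDNS deployment uses.

```yaml
# Hypothetical Service putting a LoadBalancer IP in front of an in-cluster
# CoreDNS deployment, so hosts outside the cluster can query it directly.
apiVersion: v1
kind: Service
metadata:
  name: coredns-external   # assumed name
  namespace: dns           # assumed namespace
spec:
  type: LoadBalancer
  selector:
    app: coredns           # must match the CoreDNS pods' labels
  ports:
    - name: dns-udp
      port: 53
      targetPort: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      targetPort: 53
      protocol: TCP
```

Once the service gets an external IP (`kubectl get svc -n dns coredns-external`), you can test from Windows or WSL2 with `nslookup service.my-domain.internal <external-ip>` - passing the server explicitly sidesteps the system resolver entirely, so it works even without touching the router's DNS setting.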
* Blob mirroring.
* I wish to mirror all required blobs pulled by my system. This means Docker images, Helm charts, Vagrant boxes, NPM packages, APT packages, and so on - the goal is to be completely independent of remote systems after the initial setup
* While I can configure and provision Nexus, I have yet to figure out how to automagically route all traffic from within the cluster on certain paths through Nexus - ideally, when I pull any Docker image, all software should *think* it is calling the original repository, while in reality it is calling my Nexus service
* This sounds like a proxy to me - but I don't even know where to start in the context of reconfiguring the whole system through a proxy contained (if possible) within the system.
I am afraid that keeping everything inside the cluster may create a chicken-and-egg problem.
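For the Docker-image slice of the mirroring problem, a generic transparent proxy may not be needed: container runtimes support registry mirrors natively, so the nodes can be pointed at a Nexus "docker (proxy)" repository while everything still *thinks* it is pulling from Docker Hub. A sketch for containerd's CRI config (the `nexus.my-domain.internal:8082` endpoint is my assumption for a typical Nexus docker-proxy port, not something from your repo):

```toml
# /etc/containerd/config.toml (fragment) - route docker.io pulls through a
# Nexus docker-proxy repository instead of contacting Docker Hub directly.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["http://nexus.my-domain.internal:8082"]  # assumed Nexus endpoint
```

Newer containerd versions prefer the `config_path`/`hosts.toml` mechanism instead of inline `mirrors`, but the idea is the same. Other ecosystems have analogous per-client knobs rather than needing a proxy: npm has `npm config set registry <url>`, and APT sources can simply point at a Nexus apt-proxy repository.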