Looking for feedback about low-code MLOps platform
Hi there,
Together with my friends, I have created a concept that takes a trained model (TensorFlow, Keras, PyTorch) and serves it as an API, ready for inference or for embedding in a website or platform. No config needed, no infrastructure setup. The whole idea is to make it super low-code or even no-code.
Would you see any value in this approach? Could you elaborate on your challenges with AI deployments, or would you be interested in talking with us and giving us some feedback?
Please have a look at our website: [https://syndicai.co](https://syndicai.co/)
https://redd.it/jzh23p
@r_devops
Easy, intermediate and advanced devops tasks you might have to do as a developer?
Could you list a bunch of tasks I can do to challenge myself that might actually be useful for me as a developer? A dozen for each level of difficulty would really go a long way toward helping me grow as a developer.
https://redd.it/jzxvs6
@r_devops
Port Domain from Digital Ocean to Google Cloud
I have a domain registered with Digital Ocean, and I also have a website hosted on a Digital Ocean server.
Now I want to deploy a React JS application on Google Cloud Platform and map its IP address to a subdomain of the domain already registered with Digital Ocean.
How do I achieve this? Any solution would be appreciated :)
https://redd.it/jzcrco
@r_devops
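For reference, mapping a subdomain to an externally hosted app is a single A record in the existing Digital Ocean zone; reserving a static external IP in GCP first keeps the record from going stale. In zone-file notation (the names and address below are placeholders):

```
; Hypothetical record added to the example.com zone managed at Digital Ocean
app.example.com.   3600   IN   A   203.0.113.10   ; static IP reserved in GCP
```

No nameserver migration is needed for this; the domain itself can stay registered with Digital Ocean.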
What is your favorite learning platform?
I recently left linux academy/ACG due to the content going downhill... I am looking for a new learning platform and wondering what is a good alternative?
https://redd.it/jz87ym
@r_devops
Does your company use 1 cloud provider only, or do they float between clouds, or use several?
The company I work for is pretty gung ho about Azure. Curious if it would make sense for me to recommend trying other providers that make hosting OCP4 easiest or cheapest.
https://redd.it/jz6oe3
@r_devops
What does patching mean to you?
Hi all, I hope this is a good place to ask this:
I know that patching is about keeping the software on hosts up to date, but does it have a common meaning beyond that, or is patching different from company to company and even from host to host?
I've read the general description of patching, but I haven't seen much regarding a recommended approach, so I have a few questions (these apply to long-lived hosts):
- Do you get your configuration management tool (e.g. Chef) to install the latest version of all software on install/each run? In other words, you don't pin versions and always get the latest?
- Do you instead install a specific version using said configuration management tool and run the patching manually e.g. by running yum update?
- Do you prefer to install software using the default package manager so that patching is as easy as running a single yum command? If so, is there a good strategy around patching software that can't be installed using the default package manager?
Any tips and tricks welcome and thank you in advance!
https://redd.it/jz4b8e
@r_devops
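On the yum questions above, one middle ground between "always latest" and fully manual patching is the versionlock plugin, which lets a routine `yum update` float everything except packages you explicitly hold. A sketch of its lock file (the package versions here are made up):

```
# /etc/yum/pluginconf.d/versionlock.list
# Requires the yum-plugin-versionlock package; entries are epoch:name-version-release.arch
0:nginx-1.20.1-1.el8.x86_64
0:postgresql12-12.5-1PGDG.rhel8.x86_64
```

A configuration management tool can then template this file, so the pins live in version control while unattended `yum update` runs stay safe.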
DNS in docker container
I thought DNS was a server that resolves a hostname into an IP address. Now I've learned that there are DNS settings on my Docker container. Can anyone explain what DNS is, and how the DNS inside a container differs from the server that resolves my HTTP requests?
https://redd.it/k0a7vr
@r_devops
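For context on where those container settings come from: Docker rewrites each container's `/etc/resolv.conf`. On user-defined networks it points at Docker's embedded resolver (127.0.0.11), which answers for container names and forwards everything else to the host's upstream servers. Those upstream servers can be overridden daemon-wide in `/etc/docker/daemon.json`; a sketch (the address is a placeholder):

```json
{
  "dns": ["1.1.1.1"]
}
```

The same override exists per container via `docker run --dns`, which is likely the setting being asked about.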
Kubernetes on Premise The Hard Way - need tips
Hey guys, I'm currently building a test cluster to teach myself K8s++. I'm a developer by trade, but expanding my skillset is always nice :) So please note that my knowledge is limited and I have only a minimal amount of time available, so feel free to point out my mistakes, especially in my assumptions.
**Description, context.**
After some trials, I have successfully set up a three-node cluster with one control plane via Vagrant and Ansible. The whole thing runs on Hyper-V on Windows, with vagrant+ansible+kubectl+helm run through a WSL2 Ubuntu VM. At this point I have successfully created mDNS externalDNS, Rook+Ceph storage, and methods to provision the necessary deployments (from configured Helm charts to custom shell scripts for Nexus's configuration via REST). I'm trying to make my cluster work air-gapped: after initial provisioning (after all the scripts are run) I want my cluster to be 100% separate and autonomous, both as an exercise mimicking the sector I work in (banking requires air-gapping or severe monitoring) and because of cases like the NPM left-pad fiasco or Docker Hub rate limiting.
[Current state of affairs](https://github.com/Venthe/Personal-Development-Pipeline/tree/develop). Please note that the shell scripts are now outdated; what matters is `provisioning/cluster_vagrant` and `kubernetes/helm-apps`. Most of the passwords (in LDAP, for example) are `secret`, and the occasional keys are sample keys. Don't worry, I'm not posting my own :)
Not everything is automated at this point; namely, not every Ansible playbook is executed via Vagrant - AFAIK playbooks 7* are not yet linked.
**My problems:**
* DNS: My goal is to access services via service.my-domain.internal. If possible, I'd wish to contain the solution to cluster only.
* At this point I have a working [Avahi/mDNS externalDNS fork](https://github.com/tsaarni/k8s-external-mdns). The problem is, I can only create hosts with domain.local; subdomains are NOT working.
* `LoadBalancer` services are resolvable by hostname
* Ingresses with subdomains do not work. They can be reached with a manually set hostname (i.e. `curl ... --header 'Host: subdomain.domain.local'`), but there is no NS/CNAME/A record for them
* I wish to keep this contained. My current idea is to create `coreDNS` deployment, `externalDNS` resource operator and use this DNS to resolve hostnames from cluster by exposing `LoadBalancer` for `coreDNS`
* The problem is, while I can do an `nslookup` from inside the cluster, I cannot do it from outside
* I've tried setting the Windows DNS address, to no avail. `nslookup` from WSL2 did not work either.
* To add insult to injury, *I cannot change DNS in my home router* - ISP is blocking this setting.
* To work around this problem, I can try to somehow expose CoreDNS via `LoadBalancer` and access it by setting my own machine **OR**
* Create VM with OpenWRT or something like that to act as a proxy router **OR**
* Create a proxy, although I have never done this **OR**
* My wildest idea yet, create VPN deployment inside cluster, and tunnel my host to VM through VPN and set DNS this way
* Blob mirroring.
* I wish to mirror all required blobs that are pulled via my system. This means Docker images, Helm packages, Vagrant boxes, NPM packages, APT packages and so on - goal is to be completely independent of remote systems after initial setup
* While I can configure and provision Nexus, I have yet to figure out how to automagically push all traffic from within cluster on certain paths through nexus - ideally, if I pull any docker image, all software should *think* that they are calling original repository, but in reality it should be calling my Nexus service
* This sounds like a proxy to me - but I don't even know where to start in context of reconfiguring the whole system through proxy contained (if possible) within the system.
I am afraid that keeping everything inside the cluster may create a chicken-and-egg problem.
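On the DNS goal specifically: mDNS is effectively limited to flat single-label `*.local` names, which would explain the missing subdomains, whereas a CoreDNS instance authoritative for the internal zone sidesteps that entirely. A minimal Corefile sketch, assuming a hand-maintained zone file (the zone name and file path are placeholders):

```
my-domain.internal:53 {
    file /etc/coredns/db.my-domain.internal
    log
    errors
}
```

Exposed via a `LoadBalancer` service on port 53, this is the kind of deployment the post describes; the remaining problem of pointing clients at it (router, host settings, or a small VM acting as resolver) is separate from the cluster side.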
We broke DevOps. And it’s preventing us from building.
In 2006, Werner Vogels, CTO at Amazon, described DevOps as: “You build it, you run it.”
But today DevOps could mean anything - it can be a practice, or a culture, or the name of a team, or a job title, or even a product you buy from Azure/IBM.
I'm a co-founder of a startup trying to help take DevOps back to its roots around end-to-end ownership of services and systems. Here's a blog post I wrote about it:
[https://www.opslevel.com/2020/11/18/taking-back-devops/](https://www.opslevel.com/2020/11/18/taking-back-devops/)
https://redd.it/k0b142
@r_devops
Gebug now has a web UI
Gebug is an open-source command-line tool that helps with debugging Dockerized Go applications.
I've just built a web UI to make the project configuration even more convenient and intuitive.
This is my first experience with Vue.js so I would really appreciate some feedback 😃
[https://github.com/moshebe/gebug#web-ui](https://github.com/moshebe/gebug#web-ui)
https://redd.it/k0glil
@r_devops
KubeCon North America 2020 Wrapup
Hi folks,
I wrote a wrapup post of the virtual KubeCon that happened last week. There were talks about GitOps, security, and other relevant topics. Read it here: [https://firehydrant.io/blog/kubecon-north-america-2020-wrapup/](https://firehydrant.io/blog/kubecon-north-america-2020-wrapup/)
Rich
https://redd.it/k0evhd
@r_devops
Free intro to Linux commandline/server course starts Monday 7 December
This course has been running successfully now every month since February 2020 - more detail at: https://LinuxUpskillChallenge.org - daily lessons appear in the sub-reddit /r/linuxupskillchallenge - which is also used for support/discussion.
It's suitable whatever your background, and aims to provide that "base layer" of traditional Linux skills in a fun, interactive way.
https://redd.it/k0hw1v
@r_devops
What is the NAT device for virtual machines?
>The NAT device acts as a DNS server for the virtual machines on the NAT network. Actually, the NAT device is a DNS proxy and merely forwards DNS requests from the virtual machines to a DNS server that is known by the host. Responses come back to the NAT device, which then forwards them to the virtual machines.
>
>If they get their configuration information from DHCP, the virtual machines on the NAT network automatically use the NAT device as the DNS server. However, the virtual machines can be statically configured to use another DNS server.
>
>The virtual machines in the private NAT network are not, themselves, accessible via DNS. If you want the virtual machines running on the NAT network to access each other by DNS names, you must set up a private DNS server connected to the NAT network.
The NAT device is the router in my home network, right? Is the NAT device for my virtual machines always the router? DHCP is a server from my internet provider, right? Does that mean the router is used as the DNS server? If I have a Docker swarm running and use my router as the DNS server, does my router use the containers' names to assign them IP addresses dynamically through DHCP, and does that mean my router uses a DNS server from Google or another provider to set my containers' IP addresses? If I don't use DHCP and use static IPs instead, does that mean my Docker containers don't use a DNS server (the router, in this case) to resolve their IP addresses? Are all Docker containers that use DNS therefore connected to the internet? Did I understand everything correctly?
https://redd.it/k0k4x7
@r_devops
What's the REAL reason to add country_name, organization_name, etc to a CSR?
Since I can create a CSR and get a Let's Encrypt certificate without adding location, email, or company info to the CSR, what is the real benefit of adding these values?
If it's just so the info is in the certificate for users to look at, who really goes through certificates and looks up that info?
https://redd.it/k0ga8p
@r_devops
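The difference is easy to see locally: a CSR generated with nothing but a CN is perfectly valid, and DV issuers like Let's Encrypt only validate domain control, so the extra DN fields matter mainly for OV/EV certificates, where the CA actually verifies the organization. A quick sketch (file names and domain are placeholders):

```shell
# Generate a key and a CSR whose subject is just a CN -- no country,
# organization, or email fields at all
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.key -subj "/CN=example.com" -out example.csr

# Inspect what actually ended up in the request
openssl req -in example.csr -noout -subject
```

For a DV certificate the CA discards most of the subject anyway, and modern clients look at the SAN extension rather than the subject for hostnames.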
Automating AWS and Google Cloud
We've been working on a product that automates cloud infrastructure from provisioning to deploying, scaling, and securing APIs and UIs in minutes.
As a software engineer, I've always been frustrated with the current solutions for deploying products in the cloud, whether that's AWS's web console, CloudFormation, Terraform, etc. It's manual, tedious, time-consuming, and requires expert knowledge.
We started with automating AWS and Google Cloud Platform and making it a lot simpler to deploy, scale and secure a cloud infrastructure.
I would be interested in some feedback and see what others have in mind that we could make the cloud even simpler.
Oatfin: [https://oatfin.com](https://oatfin.com/)
Demo: [https://vimeo.com/470214984](https://vimeo.com/470214984)
https://redd.it/k07fyp
@r_devops
Tired of the AWS Console? Check out Vantage
Are you tired of the AWS web console? So was I... which is how I stumbled upon this and thought I'd share it here. The website is https://vantage.sh/ and they seem to be keeping a low profile right now, but they are building an alternative to the AWS web console.
I've been using it for a few weeks and they're making good progress despite functionality being light. I hadn't seen any mentions of this on /r/devops and would be curious to see what others thought.
https://redd.it/k0meuu
@r_devops
DevOps engineer that has fully invested into Apple ecosystem: stick to MacOS or switch to Linux?
Hi,
as implied by the title, what would you recommend to a DevOps engineer who has heavily invested in the Apple ecosystem (iPhone, iPad, AirPods Pro) and is in search of a new "top in class" laptop (i.e. 32 GB RAM, powerful CPU, etc.)?
Principal activities are: managing containers and VMs, working in cloud environments, and trying new things in the CI/CD space.
Linux is clearly a better platform, and the new Dell XPS 13 9310 seems like a perfect choice... and I say this while typing on a 2019 Dell XPS 13 that has served me very well. A shiny new MacBook, on the other hand, is a very powerful machine, and one could keep the benefits of the Apple ecosystem.
https://redd.it/k08lti
@r_devops
Deploy Docker Compose from different repositories
I have 3 separate repositories containing API, web app, and admin projects. All three run on a Digital Ocean Docker machine, and I deploy them using docker-compose (I want to keep things simple).
Every time a push to master with a tag happens, GitHub Actions builds each project and publishes a Docker image (with the corresponding tag), and then I manually run `docker-compose up -d`. The docker-compose file points to the latest version of each image, so it gets automatically reloaded and launched.
Is there a better way to automatically reload the Docker Compose stack from GitHub Actions? Ideally, each time a repository builds an image and pushes it to the registry, the compose stack would reload automatically.
https://redd.it/k0nrpm
@r_devops
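One common pattern, assuming SSH access to the droplet: add a job after the image push that connects and re-pulls the stack. The secret names (`DEPLOY_HOST`, `DEPLOY_USER`, `DEPLOY_SSH_KEY`) and the path `/srv/app` below are hypothetical; a sketch of the extra workflow job:

```yaml
# Appended to each repo's existing workflow, after the image-build/push job
deploy:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - name: Reload compose stack on the droplet
      run: |
        echo "${{ secrets.DEPLOY_SSH_KEY }}" > key && chmod 600 key
        ssh -i key -o StrictHostKeyChecking=no \
          "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}" \
          "cd /srv/app && docker-compose pull && docker-compose up -d"
```

An alternative that avoids giving CI SSH access entirely is running something like Watchtower on the droplet, which polls the registry and restarts containers when their images change.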
What do you think of Flux CD v2?
Initially, I was confused with the changes (rewrite) of Flux v2. I even claimed that it does not support multi-environment in the same cluster setup (unlike multi-app support). Nevertheless, after digging more through not-so-good docs, I realized that it does everything I need it to do, and more. So, I created a video about the experience.
Video: [https://youtu.be/R6OeIgb7lUI](https://youtu.be/R6OeIgb7lUI)
What do you think? Do you prefer Flux v2 or Argo CD? Are you applying GitOps principles?
https://redd.it/k076ct
@r_devops
How I set up my Kubernetes CI/CD pipeline for deploying my Spring Boot application
Being fairly new to DevOps and going through the basics of Kubernetes, I wanted to set up a pipeline that automates the deployment of my Spring Boot application inside a Kubernetes cluster.
Here is how I achieved it:
1. **Setup Kubernetes cluster** \- When it comes to learning new stuff, I like to set everything up from scratch and prefer to have everything running on my laptop. Here is the list of things you need if you want to run a Kubernetes cluster on a local development machine:
1. **VirtualBox** \- This is the first tool you need to install if you are trying to set up your Kubernetes cluster.
2. **Vagrant** \- I love Vagrant and its simplicity; you just need to define a Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "jenkinsserver" do |jenkinsserver|
    jenkinsserver.vm.box_download_insecure = true
    jenkinsserver.vm.box = "hashicorp/bionic64"
    jenkinsserver.vm.network "forwarded_port", guest: 8080, host: 8080
    jenkinsserver.vm.network "forwarded_port", guest: 8081, host: 8081
    jenkinsserver.vm.network "forwarded_port", guest: 9090, host: 9090
    jenkinsserver.vm.network "private_network", ip: "100.0.0.1"
    jenkinsserver.vm.hostname = "jenkinsserver"
    jenkinsserver.vm.provider "virtualbox" do |v|
      v.name = "jenkinsserver"
      v.memory = 2048
      v.cpus = 2
    end
  end
  config.vm.define "k8smaster" do |k8smaster|
    k8smaster.vm.box_download_insecure = true
    k8smaster.vm.box = "hashicorp/bionic64"
    k8smaster.vm.network "private_network", ip: "100.0.0.2"
    k8smaster.vm.hostname = "k8smaster"
    k8smaster.vm.provider "virtualbox" do |v|
      v.name = "k8smaster"
      v.memory = 2048
      v.cpus = 2
    end
  end
  config.vm.define "k8sworker" do |k8sworker|
    k8sworker.vm.box_download_insecure = true
    k8sworker.vm.box = "hashicorp/bionic64"
    k8sworker.vm.network "private_network", ip: "100.0.0.3"
    k8sworker.vm.hostname = "k8sworker"
    k8sworker.vm.provider "virtualbox" do |v|
      v.name = "k8sworker"
      v.memory = 2048
      v.cpus = 2
    end
  end
end
3. **Kubespray** \- They have done a really good job of automating Kubernetes cluster setup with Ansible. I would recommend Kubespray if, like me, you are a newbie setting up a Kubernetes cluster. Here is the lab session where I set up my own cluster \- [Lab session Demo](https://youtu.be/7dG3vZFjQsE)
4. **Docker** \- Since you are working with Kubernetes, you also need Docker, because Kubernetes is a container orchestration tool. So go ahead and install Docker on your local development laptop, not inside a virtual machine.
5. **Spring Boot Application** \- After setting up the Kubernetes cluster, you need an application to deploy inside it. I am using a **Spring Boot Application**, which you can [Git Clone](https://github.com/rahulwagh/springboot-with-docker).
In the Git repository you will find the **Dockerfile** along with **docker-compose.yaml**, which you can use to build the Docker image of the Spring Boot application.
6. **Push Spring Boot to Docker Hub** \- Now that you have your Spring Boot application, it's time to push its image to Docker Hub. Follow this [Lab Session](https://youtu.be/DFuxCSI4ktY).
7. **Install Jenkins** \- Next, you need to install Jenkins on one of the VMs. I prefer to install it on **amaster** (the Ansible node). Refer to this article: [Install jenkins](https://jhooq.com/ci-cd-jenkins-kubernetes/#3-install-jenkins-on-your-jenkinsserve)
8. **Pipeline setup** \- This is the last step and it is going to be a long one, but I prepared a [lab session](https://youtu.be/TPMUxsRI1OA) so that it is easy to understand.
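To make the end goal of the steps above concrete, here is a minimal sketch of the kind of Deployment and Service the pipeline would apply to the cluster. The image name, labels, and NodePort choice are placeholder assumptions, not taken from the post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: springboot-app
  template:
    metadata:
      labels:
        app: springboot-app
    spec:
      containers:
        - name: springboot-app
          # placeholder image; push your own build to Docker Hub first
          image: <dockerhub-user>/springboot-with-docker:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: springboot-app
spec:
  type: NodePort  # simplest way to reach the app on a local Vagrant cluster
  selector:
    app: springboot-app
  ports:
    - port: 8080
      targetPort: 8080
```

The Jenkins pipeline's deploy stage would then boil down to something like `kubectl apply -f k8s/` after building and pushing the image.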