Career Advice: I want to move from Civil Engineering to DevOps engineering
I'm currently doing my bachelor's degree in Civil Engineering technology in South Africa, and when I graduate I may become a civil technologist/engineer. However, I want to branch into DevOps. What is the best route for me to become a DevOps engineer? Is there a bridging honours or master's I can do to become a DevOps engineer?
https://redd.it/lc3u1z
@r_devops
Declarative APIs
I am wondering whether there's an actual use case or whether it's an advanced-user feature that's just nice to have.
Will declarative APIs and infra-as-code capabilities affect your decision when choosing a tool/platform?
https://redd.it/lc1qnb
@r_devops
Looking for simple local build system
I'm looking for a simple, generic build system that will run entirely locally on my Windows machine (not Docker) and basically do 4 things:
Execute a sequence of commands
Capture the commands and output
Collect generated files from a build and put them somewhere
Maintain the history of builds, logs, and files
Even better if it could automatically do a lot of the things a CI/CD system would do, e.g.:
Check out a Git revision (from a locally hosted Git repo or a GitHub repo)
Set up environment variables
Run tests
Generate some reports
Generate a manifest
Identify and collect artifacts
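The four core requirements above fit in a surprisingly small script. A minimal sketch (all paths and names invented; shown as POSIX shell for brevity, though the same shape ports to PowerShell on Windows):

```shell
#!/bin/sh
# Minimal sketch of a local "build system": run a sequence of commands,
# capture each command and its output, collect generated files, and keep
# one timestamped history directory per build. All names are hypothetical.
set -eu

BUILD_ROOT="${BUILD_ROOT:-builds}"
BUILD_ID="$(date +%Y%m%d-%H%M%S)"
BUILD_DIR="$BUILD_ROOT/$BUILD_ID"
LOG="$BUILD_DIR/build.log"
mkdir -p "$BUILD_DIR/artifacts"

run_step() {
    # Record the command itself, then its combined stdout/stderr.
    echo "\$ $*" >>"$LOG"
    "$@" >>"$LOG" 2>&1
}

run_step echo "hello from step 1"
run_step sh -c 'echo 42 > result.out'   # a step that generates a file

# Artifact collection: sweep generated files into the build directory.
for f in *.out; do
    if [ -e "$f" ]; then
        mv "$f" "$BUILD_DIR/artifacts/"
    fi
done

echo "build $BUILD_ID done; log at $LOG"
```

Git checkout, tests, and manifest generation would just be more `run_step` calls; the history requirement falls out of the per-build directory naming.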
https://redd.it/lby4ta
@r_devops
Which job should I pick?
I am a mid-level DevOps engineer. I am familiar with all the general DevOps tools and have spent quite some effort on AWS (I have 3 certs already, 1 of them a Specialty), but not much real-life experience.
Currently I have two job offers (first of all, the salaries and company sizes are the same):
* Job 1:
* AWS
* Serverless
* No K8S (yet)
* Website and mobile app
* Possibility of working from home 60% of the time
* Quite a distance from home
* Job 2:
* Azure
* K8S
* IoT
* Possibility of partly working from home
* Half the distance from my home compared with Job 1
Which one should I pick, or is there anything I should consider?
https://redd.it/lbwa5w
@r_devops
Can I bulk upload epics and features to a backlog?
Basically the title. I've got about 20 epics with multiple Features cascading under them. I want to be able to bulk upload everything.
https://redd.it/lbvzwd
@r_devops
CI for Puppet code using Docker images
We're developing Puppet code to automate configuration for VMs shipped to our customers.
At the moment a simple pipeline is set up to check the code and synchronize modules in Foreman. Each time we want to check the result, we need to connect to the VMs, run the Puppet agent, and analyze the output.
I would like to set up a pipeline using customized CentOS/Debian Docker images (with systemd enabled) running the Puppet server and agent to test new development.
I assume the result should be the same as if I were deploying the manifests to VMs.
Am I right to think that it would have the same effect as on production VMs? Has anyone already tested this?
https://redd.it/lbtdiv
@r_devops
Need tips on package managers
So my environment has Linux nodes, Windows nodes, and Docker images running on both virtual and physical servers, all on the same network.
I would like to create a local repository to host Windows packages, Linux packages, Docker images, and Packer VM/ISO templates in one location.
I believe Linux, Docker, and Packer templates should not be a problem, but I am wondering about Windows.
I would like everything to be on one virtual node.
Does anyone have ideas/tips on what I can explore?
I am open to anything (open source, of course).
Thanks in advance
https://redd.it/lced8d
@r_devops
Looking for some good rules of thumb
Hi!
I'm a web app developer, and when I have to deploy stuff I always choose the smallest tier, because I have no idea approximately what traffic/request load a given spec can handle.
So if someone with experience can help me with any of the 3 following things, that would be amazing:
1. For a basic JSON API backend server that, let's say, executes 1 database operation when it gets a request (assume an average-speed framework for everything; it shouldn't make that big a difference): how should I think about choosing hardware? E.g. if I'm expecting at most 5,000 requests/second, what hardware can handle that, and what about 10,000 req/sec, 20,000 req/sec, and so on?
2. The same for a basic static file server that serves static HTML + CSS + JS. Again, if the sum of it all is, for example, 3 MB and I have X req/sec, how should I think about it?
3. A server-side-rendered HTML server (React SSR or any MVC framework). This one is the hardest, but if someone has a lot of experience there's a chance there are good rules of thumb for it: how much heavier is it than a simple JSON server that executes a DB operation?
If someone can help me with any of this or link me some good resources, I would be very thankful!
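For case 2 in particular there is a rule of thumb that holds up well: static file serving is almost always bandwidth-bound rather than CPU-bound, so a quick arithmetic check tells you more than a hardware table. A sketch with made-up numbers:

```shell
# Back-of-envelope for case 2 (static files): required egress bandwidth is
# simply requests/sec * payload size. Numbers are illustrative, not benchmarks.
REQ_PER_SEC=100   # hypothetical traffic level
PAGE_MB=3         # total HTML+CSS+JS per page load, as in the question

EGRESS_MB_S=$((REQ_PER_SEC * PAGE_MB))
EGRESS_MBIT_S=$((EGRESS_MB_S * 8))

echo "egress: ${EGRESS_MB_S} MB/s (${EGRESS_MBIT_S} Mbit/s)"
# Even 100 req/s of a 3 MB page needs 2400 Mbit/s of egress, beyond a
# 1 Gbit NIC, so a CDN or caching layer matters long before CPU does.
```

For cases 1 and 3 the same shape works with per-request CPU and DB cost instead of bytes, though there the honest answer is to load-test a candidate instance with a tool such as wrk or k6.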
https://redd.it/lcfayp
@r_devops
Dell's ALM Tools
Does anybody know which ALM tools Dell is using? Are they using Jira? Azure DevOps? Something else? An in-house tool? Looking at moving there and wanted a heads-up on which tools I should be looking at.
https://redd.it/lbt87i
@r_devops
Which tool are you using to run workflows/pipelines in Kubernetes
There are two main contenders to be the de facto standard for CI/CD, machine learning, and other types of workflows/pipelines in Kubernetes: Tekton and Argo Workflows.
Which one do you prefer?
A video about Argo Workflows (Tekton is coming soon as well)...
>>> https://youtu.be/UMaivwrAyTA
https://redd.it/lchp9y
@r_devops
Switching from devops to sysadmin
Hello, I've been working as a DevOps engineer for almost 4 years, and now I've got an offer for a sysadmin position. The money is 40% more than my current salary. Learning VMware and datacenter operations could be useful, but staying away from the cloud may not be good for future opportunities. I'm a little confused about this shift. What could be the possible pros and cons?
https://redd.it/lbszo3
@r_devops
Getting sick of AWS, anyone have anything else they like?
Hey, I run the tech side of a mid-size company. I have personally used AWS for over ten years, and we've been using AWS at the company for three years.
Before AWS I was on bare metal and used things like cPanel and Parallels and was pretty blown away by AWS. AWS was pretty critical in us being able to scale to the level we needed and was super easy to use and programmable.
These days we are just having so many problems with it, and I hate how they are trying to be a one-stop shop for everything. We are trying to deploy a Kubernetes cluster that has a legal requirement to be multi-cloud and able to run on bare metal, and I just feel like AWS is doing everything in its power to force me to use EKS. We try to hire certified AWS engineers, and they have no idea how to do anything outside of AWS products.
We also use the Elastic Stack quite a bit, and the feud between AWS and Elastic is not sitting right with me. We also tried the AWS Elastic offering, but it's poorly maintained and inflexible for our very advanced use case.
We also had an AWS rep try to help us migrate a service to serverless with Lambdas, and it almost shut down our entire company for a day because of a bug in Lambda (this was a couple of years ago); ultimately the Lambdas performed so poorly we had to revert.
I know this is silly, but I have been having so many problems, and then today the new UI just kept confusing me and not working, and I just lost it. Also, their documentation is trash, and we keep hitting weird inconsistencies between their APIs and the CLI... rant over...
I have not used any other cloud provider in years, so I was just curious whether there is a consensus on a very developer-friendly cloud provider these days?
https://redd.it/lck072
@r_devops
What are the biggest sources of conflict with developers?
Title says it all.
https://redd.it/lckc7j
@r_devops
terratest - providers
Terratest noob here.
I'm trying to set up a basic module which deploys the loki-stack via helm_release.
I'm literally just trying to deploy the test to my local docker-desktop installation.
There is no provider specified in my project, so this happens when I run the tests:
TestTerraformBasicExample 2021-02-04T18:29:40Z logger.go:66: Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I don't particularly wish to maintain a configured provider spec in my module repo, it's not exactly relevant there.
Realistically the tests will be run in a pipeline and the provider config should be constructed from whichever sources are appropriate.
Can anyone advise on the standard method of configuring providers when using terratest?
tl;dr:
deploying helm_release with Terratest: how do I pass provider configuration into the tests without specifying it alongside the Terraform code?
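One common pattern, though not the only one: keep the module itself provider-free and point Terratest at a small test fixture that wraps it and supplies the provider, so each pipeline can ship its own fixture. A hypothetical sketch (paths and context name invented):

```hcl
# test/fixtures/basic/main.tf - test-only wrapper; the module under test
# stays provider-free, the fixture supplies the provider configuration.
provider "helm" {
  kubernetes {
    # Local runs target docker-desktop; CI can override or template this.
    config_path    = "~/.kube/config"
    config_context = "docker-desktop"
  }
}

module "under_test" {
  source = "../../.."  # the provider-free loki-stack module
}
```

The test's terraform.Options.TerraformDir then points at the fixture directory instead of the module root.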
https://redd.it/lcmk0o
@r_devops
How to get experience working on high scale systems?
I'm on the job market right now trying to get a job working on a high-scale system, but everything I've worked with has been relatively low-scale. The engineering and reliability issues on low-scale systems stop being interesting pretty quick.
The problem is that companies that have high-scale systems want to hire people who already have experience working on high-scale systems. I don't blame them for this. These companies are successful and can afford to be picky with hires.
So I'm in a catch-22 here. How do I get the experience without being able to work at a place where I can get the experience? I can side-project up pretty much any technology, but I can't side-project millions of users.
Is this just a case where because my first DevOps/SRE experience didn't have any high-scale systems means I just don't have any hope of landing a job working on high-scale systems?
https://redd.it/lco5wi
@r_devops
Why doesn't my Kubernetes Ingress expose my service?
Hello, I have a bare-metal Kubernetes cluster, and I use the NGINX ingress controller.
Hitting the service IP works: curl serviceip:5678 returns "apple".
When I create an ingress, I expect to see "apple" in a browser at the public IP of the master, but it doesn't happen.
There is no firewall between the nodes, or between me and the master.
Below are the kubectl commands and the YAML for the pod, service, and ingress.
Thank you!
- Pod:

kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
apple-app                                   1/1     Running   0          83m
ingress-nginx-controller-85df779996-4szh5   1/1     Running   6          27h

kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  namespace: ingress-nginx
  labels:
    app: apple
spec:
  containers:
  - name: apple-app
    image: hashicorp/http-echo
    args:
    - "-text=apple"

- Service:

kubectl describe svc apple-service -n ingress-nginx
Name:              apple-service
Namespace:         ingress-nginx
Labels:            <none>
Annotations:       <none>
Selector:          app=apple
Type:              ClusterIP
IP Families:       <none>
IP:                10.102.31.58
IPs:               10.102.31.58
Port:              <unset>  5678/TCP
TargetPort:        5678/TCP
Endpoints:         10.244.1.15:5678
Session Affinity:  None
Events:            <none>

kind: Service
apiVersion: v1
metadata:
  name: apple-service
  namespace: ingress-nginx
spec:
  selector:
    app: apple
  ports:
  - port: 5678  # Default port for image

- Ingress:

kubectl get ingress -n ingress-nginx
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
apple-ingress   <none>   *                 80      39s

kubectl describe ing apple-ingress -n ingress-nginx
Name:             apple-ingress
Namespace:        ingress-nginx
Address:          10.0.0.2
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /     apple-service:5678 (10.244.1.15:5678)
Annotations:  kubernetes.io/ingress.class: nginx
Events:
  Type    Reason  Age                  From                      Message
  ----    ------  ----                 ----                      -------
  Normal  Sync    88s (x2 over 2m13s)  nginx-ingress-controller  Scheduled for sync

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apple-ingress
  namespace: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678

- NGINX ingress controller:

kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    v0.43.0
  Build:      f3f6da12ac7c59b85ae7132f321bc3bcf144af04
  Repository: https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.6
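One thing worth checking (a guess, since the manifests above look consistent): on bare metal there is no cloud load balancer, so the ingress-nginx controller Service is often exposed as a NodePort rather than on port 80 of the node's public IP. Commands along these lines (Service name assumed to match the standard deploy, node port invented) show where the controller actually listens:

```shell
# How is the controller actually exposed? On bare metal the controller
# Service is typically NodePort, e.g. 80:31234/TCP, or a LoadBalancer
# with a pending external IP.
kubectl get svc -n ingress-nginx ingress-nginx-controller

# If it is a NodePort, the ingress answers on the node port, not port 80:
curl http://<master-public-ip>:31234/

# Sanity check from inside the cluster against the controller's Service:
kubectl run tmp --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://ingress-nginx-controller.ingress-nginx.svc/
```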
https://redd.it/lck69g
@r_devops
Migrating to devops from old-school style of working
tl;dr: we migrated to git 2-3 years ago, and basically the only thing I could achieve in the last half year was getting rid of the physical machines we had and replacing them with VMs (manually managed via RDP/SSH). How do I plan a migration to an infrastructure/configuration-as-code way of working?
So I want to devops the shit out our development cycle, but don't know where to start. We're currently running everything in hand-managed systems using a combination of Jenkins, Bitbucket and Artifactory. Other than that it's basically all old-school. Developers have barely heard of Docker or containers. Where the hell do I start? How do I convince management that we need to overhaul our entire infrastructure for our build systems? I need an action plan with demos and hard numbers here, but am feeling a bit lost.
It's currently not possible to do IaC, since everything is internal and VMs are manually requested through a service portal (no automation possible; this is the next-next step I want to fix, but I would have to go up against a global IT department that has way more "power" than I do, so ignore it for now).
However, I want to move everything to automated configuration. I've already tried out Ansible to install some basic packages on these VMs (which have custom rules/firewalls/SCCM on top, which makes them hard to configure properly), but I get the feeling the added value of doing it this way is lost on most of the people I've shown it to. "I could've done that in a single command" or "I could've scripted that in bash" are common remarks. Keep in mind these are senior developers and managers, and if I can't convince them I might as well stop.
Where do I start? Should I just demo it on AWS which we'll never use(has to be internal)? Should I set up DevStack and run it on that? I need an action plan, but have no idea how to approach this. Suggestions/tips/links/resources would be appreciated.
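On the Ansible demo specifically: the counter to "I could've scripted that in bash" is usually idempotence and fleet-wide repeatability, and that's easiest to show live. Run the same play twice; the second run reports no changes. A minimal hypothetical playbook (host group and package names invented):

```yaml
# demo.yml - hypothetical minimal play; the selling point over a bash
# one-liner: a second run reports "changed=0", and the same file works
# unchanged on the next 50 VMs.
- hosts: build_vms
  become: true
  tasks:
    - name: Ensure base build packages are present
      ansible.builtin.package:
        name:
          - git
          - rsync
        state: present

    - name: Ensure a dedicated build user exists
      ansible.builtin.user:
        name: builder
        shell: /bin/bash
```

The dry-run flag (`ansible-playbook --check demo.yml`) is another demo-friendly feature a bash script can't match: it reports what would change without touching anything.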
https://redd.it/lcgebf
@r_devops
Packer+QEMU+GitLab CI = Can't SSH
Hey all, I've got a local offline GitLab CI instance and I'm trying to deploy a new QEMU disk using Packer, and I'm running into issues. Everything works locally on my machine; however, on the gitlab-runner it can't seem to connect through SSH, and running with PACKER_LOG=1 doesn't provide any insight - it just keeps attempting SSH and failing, which is "normal" and happens locally as well until the reboot, after which it succeeds. I'll provide the files in play here with some of the "fluff" snipped out; if anyone spots something I may be missing or knows the issue, please let me know!
deploy.sh, which is called from the GitLab CI pipeline:
...
CHECKPOINT_DISABLE=1 PACKER_LOG=1 packer build \
  -var "http_path=${CI_PROJECT_PATH}" \
  -var "vm_name=IFS_${CI_COMMIT_BRANCH}_${CI_COMMIT_TIMESTAMP}" \
  -var "iso_url=file:/builds/${CI_PROJECT_PATH}/utilities/packer/CentOS-${CENTOS_VERSION}.iso" \
  -var "kickstart=centos7-ks.cfg" \
  -var "ssh_pass=${PACKER_SSH_PASS}" \
  IFS_minimal.pkr.hcl
...
IFS_minimal.pkr.hcl
# Variables snipped, but they're all just strings
locals {
  boot_command = concat(["<tab> text ks=https://{{ .HTTPIP }}:{{ .HTTPPort }}/", var.kickstart, "<enter><wait>"])
}

source "qemu" "centos7-minimal" {
  accelerator      = "kvm"
  boot_command     = local.boot_command
  boot_wait        = "3s"
  disk_interface   = "virtio"
  disk_size        = "5000M"
  format           = "qcow2"
  headless         = "true"
  http_directory   = var.http_path
  iso_checksum     = "md5:a4711c4fa6a1fb32bd555fae8d885b12"
  iso_url          = var.iso_url
  net_device       = "virtio-net"
  output_directory = "packer_images"
  shutdown_command = "echo 'packer' | sudo -S shutdown -P now"
  ssh_username     = "root"
  ssh_password     = var.ssh_pass
  ssh_timeout      = "25m"
  vm_name          = var.vm_name
}

build {
  name    = "Build 1"
  sources = ["source.qemu.centos7-minimal"]
}
centos7-ks.cfg
...
network --bootproto=dhcp --device=eth0 --activate --noipv6
firewall --enabled --http --ssh
services --enabled=network,ssh
...
rootpw --plaintext XXXXXX #matching what's passed above in ${PACKER_SSH_PASS}
sshpw --username=root XXXXXX #matching what's passed above in ${PACKER_SSH_PASS}
reboot
%packages
@core
net-tools
libssh2.x86_64
openssh-clients.x86_64
openssh-server.x86_64
openssh.x86_64
%end
Dockerfile for the runner
...
RUN apk update && apk add --no-cache \
    qemu-img \
    qemu-system-x86_64 \
    libvirt-daemon \
    virt-manager \
    openssh \
    openssh-keygen
RUN adduser -D -S -h /home/gitlab-runner gitlab-runner && \
    addgroup gitlab-runner qemu && \
    addgroup gitlab-runner libvirt && \
    addgroup root libvirt && \
    addgroup root qemu
...
RUN sed -i 's/#PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
    && echo "root:XXXXXX" | chpasswd #Same password as ${PACKER_SSH_PASS}
...
EXPOSE 22
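One thing that may be worth double-checking (a hypothetical diagnostic, not from the post): Packer's built-in `http_directory` server speaks plain HTTP, while the boot command above requests the kickstart over `https://`. A small sketch to flag that mismatch; the URL below is purely illustrative, standing in for the rendered `{{ .HTTPIP }}:{{ .HTTPPort }}`:

```shell
#!/usr/bin/env bash
# Hypothetical sanity check: Packer's http_directory server has no TLS,
# so a kickstart URL in boot_command would normally use http://.
check_ks_scheme() {
  case "$1" in
    http://*)  echo "ok" ;;
    https://*) echo "warn: Packer's built-in HTTP server does not serve TLS" ;;
    *)         echo "warn: unexpected scheme" ;;
  esac
}

# Example with a placeholder address:
check_ks_scheme "https://10.0.2.2:8500/centos7-ks.cfg"
```

If the guest never fetches the kickstart, anaconda never sets the root password or enables sshd, which would match the endless SSH retries.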
https://redd.it/lco69c
@r_devops
retaining exit code
I have an rsync that is piped to a `sed`. It looks something like this:
rsync some-files remote-location | sed 's/\r/\n/g'
Problem is, the thing that runs it and reads the output is Python code. Because it is piped into sed, even when the rsync fails, the pipeline still returns successful.
Is it possible to have it return the exit code based on the rsync? Thanks ahead!
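In bash this is usually handled with `set -o pipefail` or `PIPESTATUS`; a minimal sketch, using `false` as a stand-in for the failing rsync since the actual transfer isn't reproducible here:

```shell
#!/usr/bin/env bash
# `false` stands in for a failing rsync. Without pipefail, the pipeline's
# exit status is sed's (the last command), so the failure is masked:
false | sed 's/\r/\n/g'
status_plain=$?                 # 0 - sed succeeded, rsync's failure is lost

# With pipefail, the pipeline reports the rightmost non-zero status:
set -o pipefail
false | sed 's/\r/\n/g'
status_pipefail=$?              # 1 - the failure propagates
set +o pipefail

# PIPESTATUS (bash-only) exposes each command's status individually:
false | sed 's/\r/\n/g'
status_rsync=${PIPESTATUS[0]}   # 1 - the first command's own exit code

echo "$status_plain $status_pipefail $status_rsync"
```

From Python, invoking the command as `bash -c 'set -o pipefail; rsync ... | sed ...'` should make the subprocess return code reflect rsync's failure.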
https://redd.it/lcl6zg
@r_devops
Change in devops workload after move to Kubernetes
If your company/shop made a move to Kubernetes: as a devops team, after you set up the Kubernetes cluster, did you see a reduced devops workload, with maybe more focus shifting to observability/monitoring?
https://redd.it/lcndpa
@r_devops
What is the better approach to Helm charts with the same specification?
Is it worth duplicating Helm charts if the applications are different but share the same template, except for the Docker image used? A simplified example: say you have 5 sites, all of them static (or some simple services). All of them have the same requirements - deployment, service, HPA and so on. The only differences are the domain and the Docker image.
It sounds logical to just use the same chart and create a different values YAML file for each, but is that "best practice"? Would it be wiser to duplicate the chart? Or to create a chart with sub-charts?
Right now I have a chart for each of them, but when something changes, it's a bit of a pain to update everywhere. It gives better control over what happens in there, but it's painful.
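For what it's worth, the shared-chart route can be sketched like this (a hypothetical layout, not from the post - chart and file names are made up): one chart holding the deployment/service/HPA templates, plus one small values file per site that sets only the image and domain.

```shell
#!/usr/bin/env bash
# Hypothetical layout:
#   charts/static-site/    <- single chart: deployment, service, HPA templates
#   values/site-a.yaml     <- sets only image.repository and ingress.host
#   values/site-b.yaml
# Build the helm command for one site; every release reuses the same chart.
deploy_cmd() {
  local site="$1"
  echo "helm upgrade --install $site charts/static-site -f values/$site.yaml"
}

# One release per site - a template change lands everywhere on the next deploy:
for site in site-a site-b; do
  deploy_cmd "$site"   # echoed here; execute directly in a real pipeline
done
```

The trade-off versus per-app charts is exactly the one described above: one template to maintain, at the cost of every site moving together when the template changes.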
https://redd.it/lchg6l
@r_devops