I get to pick 1 online course for professional development, what should I pick to enhance my employability?
I'm a build/release engineer whose main tools each day are Jenkins, Docker, and various AWS services. I can also code in Python, Bash, and Groovy. As alluded to in the title, this year my company has offered to pay for one online course of my choice to up-level my general DevOps skills. Since the tech I work with is not very cutting-edge, I'm not in a position to judge, so my question for the community is: what do you think would be the single highest-leverage investment I could make to improve my employability in the current market (I'm not actively job hunting at the moment, but maybe in the new year)? Probably something new or in high demand, but open to all ideas!
https://redd.it/rgmt8e
@r_devops
A Pipeline that creates pipelines?
Hello,
Perhaps I am on the brink of insanity, because at face value this does seem ridiculous, and I've been spinning my wheels for weeks. But hear me out: in a fast-growing organization with new projects/modules being spun up every day with the exact same deployment process, I was thinking to myself, how can I automate the creation of these pipelines? Specifically with AWS CodePipeline/AWS CodeBuild.
There is no way to scan GitHub and create these pipelines automatically with AWS. So I was thinking to myself: how could I make this possible?
At face value, AWS treats (almost) everything as a resource. Whether that be an API Gateway, an ECR repository, an EC2 instance, a CodeBuild project, or a CodePipeline, they are all just "resources".
So I was thinking: what's to prevent me from creating a pipeline that, well, creates a resource, specifically another CodePipeline resource?
The basic principle is this -- and feel free to call this ridiculous because it most likely is.
Please note, this was quickly written and obviously there are some intricacies that need to be refined, but here's the quick and dirty rundown:
I set up a Lambda run on a cron schedule (or alternatively triggered manually) that scans our organization for repositories. As it scans, it searches each repository for a Terraform file that references a Terraform module handling the setup, stages, etc. of the pipeline we want created. The base Terraform file in the repository contains just boilerplate such as the repository source URL, buildspec, type of deployment, and any additional non-secret env variables. If the repository contains the Terraform file, the Lambda checks out the latest version, sets the pipeline source to the S3 key derived from the repository's name, and stores the source code in the S3 bucket under the project's name, which then triggers the CodePipeline to execute. The pipeline then takes the source and, in the build step, executes the Terraform script that sets up the pipeline for that repository. If the pipeline has already been created and the Terraform script has no changes, nothing will be updated. If the Terraform file has changed, the pre-existing pipeline will be updated.
From there bam, we have a pipeline that has been automatically created without any manual work.
This is obviously entirely a novel concept -- but does this seem absolutely ridiculous or could this actually be a feasible solution?
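The scan-and-decide step described above can be sketched as pure logic. All names here (the marker file, the S3 key layout) are hypothetical; the real version would call the GitHub API and boto3:

```python
# Hypothetical sketch of the Lambda's scan/decision step: given each
# repository's file listing, pick the ones that opted in by committing a
# marker Terraform file, and derive the S3 source key the pipeline would use.
MARKER_FILE = "pipeline.tf"  # hypothetical name for the boilerplate Terraform file

def repos_needing_pipeline(repo_files):
    """Map of repo name -> file list; return {repo: s3_key} for opted-in repos."""
    plan = {}
    for repo, files in repo_files.items():
        if MARKER_FILE in files:
            # One S3 key per project; uploading here triggers the CodePipeline.
            plan[repo] = f"sources/{repo}/source.zip"
    return plan

if __name__ == "__main__":
    listing = {
        "billing-api": ["README.md", "pipeline.tf"],
        "legacy-tool": ["README.md"],
    }
    print(repos_needing_pipeline(listing))
```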
https://redd.it/rglrtv
@r_devops
Has anyone here used Ansible and Packer with Proxmox?
I am now trying to get my template provisioned with Ansible. I finally have it so that Packer is able to create the VM, configure it, reboot, then attempt to connect over SSH. On the output I get "waiting for SSH to become available", then "connected to SSH". But then the handshake fails because the host is unreachable. This is the pastebin I have from my console. I have -vvvv for Ansible output, so I can see that it fails to connect via SSH and is trying to connect to 127.0.0.1.
Packer's documentation makes it seem that you just add the provisioner, add the YAML, and you're good.
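For what it's worth, Ansible dialing 127.0.0.1 is normal when Packer's SSH proxy adapter is in use; a commonly suggested workaround (not guaranteed to be the issue here) is disabling the proxy so Ansible connects to the guest directly:

```hcl
# Hypothetical fragment of the Packer template's provisioner block.
provisioner "ansible" {
  playbook_file = "./playbook.yml"
  # By default Packer proxies Ansible's SSH through 127.0.0.1; when that
  # handshake fails, try connecting straight to the VM instead.
  use_proxy = false
}
```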
https://redd.it/rgjt58
@r_devops
Is Kubernetes in demand?
Just took a look at Google Trends and it seems interest has dropped quite a lot... I'm wondering if it is still worth the trouble of learning it. Also, what could be the culprit for the drop in interest?
Thanks!
https://redd.it/rgu7hr
@r_devops
Deploying microservices in a consistent way using different gitlab repositories
Hi,
I'm looking for a good way to organize the deployment of our solution, which consists of multiple apps, using GitLab and K8S.
Our SaaS app is made of:
A backend app, mostly our API (django)
A user app (React)
An admin app (React)
Both frontend apps are connected to the API.
Currently the backend and user apps are in the same GitLab repository, and are deployed using a CI/CD pipeline that builds the apps, builds the Docker images, and deploys them to K8S using a Helm chart. The Helm chart is located in the same repo.
I recently added the admin app in another GitLab repository, and I'm concerned about keeping all apps consistent, that is to say, both frontend apps have to be compatible with the API.
I'm thinking about adding another repository especially for the deployment (let's call this Deploy Repo). This repo could contain:
3 git submodules, one for each sub app,
The Helm chart,
Other files related to deployment
I thought about using git submodules to be able to have separate projects. The devs would update the right versions in the Deploy Repo when a service is ready to be deployed.
The push would then trigger the CI/CD pipeline, build all apps, and deploy all together using the Helm Chart.
Is it a good idea to use submodules like this? What would be the best practice to link multiple projects together?
I'm also wondering how I could build only the sub-project that has changed instead of all projects.
I have seen that it could be possible to link the pipelines of all subprojects together, and use artifacts to pass the needed files, but I'm not sure if this is a good solution.
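The cross-pipeline idea in the last paragraph is what GitLab calls multi-project pipelines; a hedged sketch of how the pieces could fit in .gitlab-ci.yml (project path, job names, and submodule directory are hypothetical):

```yaml
# In an app repo: kick off the Deploy Repo's pipeline after a successful build.
trigger_deploy:
  stage: deploy
  trigger:
    project: mygroup/deploy-repo   # hypothetical path to the Deploy Repo
    branch: main

# In the Deploy Repo: only rebuild the sub-project whose submodule pointer
# (or files) changed, using rules:changes.
build_admin_app:
  stage: build
  rules:
    - changes:
        - admin-app/**/*
  script:
    - docker build -t admin-app admin-app/
```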
https://redd.it/rgtuls
@r_devops
CoTurn server
Hi all. I have installed coturn. Can someone please help with how I can perform load testing on my server?
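coturn ships with a basic load/stress client, turnutils_uclient; a hedged example (host, credentials, and counts are placeholders; check `turnutils_uclient --help` for the flags in your build):

```shell
# Simulate 100 clients each sending 1000 messages through the TURN server.
turnutils_uclient -u testuser -w testpass \
  -m 100 -n 1000 \
  turn.example.com
```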
https://redd.it/rgvzsd
@r_devops
ECS + Fargate + Auto Scaling
Hi all,
I am new to AWS, and I am quite confused by ECS and Fargate with auto scaling. How do I launch more of my services (all encapsulated in Docker containers) to balance the load? I am a newbie, I have no clue about AWS, and I am working on my app server solo.
Also, any good tutorial for the same?
Thanks!
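For what it's worth, ECS service auto scaling is configured through Application Auto Scaling; a hedged AWS CLI sketch (cluster name, service name, and limits are placeholders):

```shell
# Let the service scale between 2 and 10 Fargate tasks.
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 2 --max-capacity 10

# Target-tracking policy: add/remove tasks to hold ~60% average CPU.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --policy-name cpu60 --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
    }
  }'
```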
https://redd.it/rgwmks
@r_devops
Running Collaborative ML Experiments With DVC
Sharing machine learning experiments to compare models is important when you're working with a team of engineers. You might need to get another opinion on an experiment's results, share a modified dataset, or even share the exact reproduction of a specific experiment.
The following tutorial goes through an example of sharing an experiment with DVC remotes: Running Collaborative Experiments - using DVC remotes to share experiments and their data across machines
Setting up DVC remotes in addition to your Git remotes lets you share all of the data, code, and hyperparameters associated with each experiment so anyone can pick up where you left off in the training process. When you use DVC, you can bundle your data and code changes for each experiment and push those to a remote for somebody else to check out.
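The workflow described boils down to a couple of DVC commands; a hedged sketch (remote name, bucket, and experiment name are placeholders):

```shell
# One-time: point DVC at shared storage alongside your Git remote.
dvc remote add -d storage s3://my-bucket/dvc-store

# Share an experiment: its code/params go via Git, its data via DVC.
dvc exp push origin my-experiment

# A teammate picks it up on their machine.
dvc exp pull origin my-experiment
```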
https://redd.it/rgxnsc
@r_devops
Onboarding juniors in DevOps
As the company I work for grows, it has become difficult to hire juniors because the time to train them simply doesn't exist (like in most companies, I think).
So we created a program to hire juniors, train them for four months, then put them in a team according to their competence and liking. We are a cloud provider, so the spectrum of choice is pretty large.
Here's the article describing our program in case you're interested :)
https://blog.scaleway.com/devops-onboarding-juniors/
let me know what you thought of it!
https://redd.it/rgyoab
@r_devops
Question about service
I have created a deployment that simply runs 3 copies of a pod, each running nginx:
root@k8s-master:~# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-clusterip NodePort 10.108.90.12 <none> 80:30080/TCP 145m
root@k8s-master:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
deployment-svc-example 3/3 3 3 165m
root@k8s-master:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-svc-example-9f886b7b-5vwds 1/1 Running 0 165m 192.168.126.29 k8s-worker2 <none> <none>
deployment-svc-example-9f886b7b-f4wk4 1/1 Running 0 165m 192.168.126.28 k8s-worker2 <none> <none>
deployment-svc-example-9f886b7b-scplx 1/1 Running 0 165m 192.168.194.106 k8s-worker1 <none> <none>
As seen in the kubectl get pods command, out of the 3 pods, two run on k8s-worker2, and one runs on k8s-worker1.
The thing is, when I open the browser and visit the public IP address of the MASTER (not k8s-worker1/2), it still works, even though no pod runs on the master.
Why is that? Did it route me to one of the pods in the worker nodes? What happened here?
Thanks ahead!
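For context, a NodePort service opens its port on every node in the cluster, master included, and kube-proxy forwards each connection to one of the backing pods wherever it happens to run. A minimal manifest like the one behind the output above would look roughly like this (the selector label is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
spec:
  type: NodePort          # opens the port on ALL nodes, not just those with pods
  selector:
    app: nginx            # hypothetical label matching the deployment's pods
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080     # kube-proxy on every node forwards this to a pod
```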
https://redd.it/rgyy7w
@r_devops
A-List of Best Serverless and AWS Lambda Courses For Beginners in 2021
Learn the fundamentals of serverless architecture and build applications in the cloud with these best serverless and AWS Lambda courses.
https://redd.it/rgxdr2
@r_devops
Merge nexus data from 2 different instances then migrate to a new server
I have a server with 2 different instances of sonatype nexus running (don't ask me why, I just inherited the whole thing like that). I will call that machine "server B"; one instance is running Nexus 3.0 and the other Nexus 3.23.
I was given a new machine, "server A", where I have deployed a fresh installation of Nexus 3.37, and I'm now researching which data folder I am supposed to move. According to the official docs (https://help.sonatype.com/repomanager3/installation-and-upgrades/directories#Directories-DataDirectory), all I need is to move the data directory (commonly referred to as $data-dir or ${karaf.data}) to the new installation. To this point, crystal clear.
Now I don't know whether it is possible to merge the data of the 2 different Nexus instances that I have on server B. If I can somehow correctly merge the data of these 2 instances, then I can move that merged data to the new installation on "server A". Does that make any sense? Hopefully it does :-D
What can I do in this situation? Any ideas, brainstorming, and advice are very welcome.
Cheers.
https://redd.it/rh15zh
@r_devops
SSH Tunneling
I want to host SonarQube on an EC2 instance in a secure way, following best practices. I came across an approach where you host your instance in a private subnet and still access it. My use case is to host a service (SonarQube) on port 9000, allow only my developers to access it, and configure the Security Group accordingly. I read that we can map our EC2 URL to localhost and access it through the .pem file.
Is this the correct way to proceed? If yes, how should I move forward?
Thanks!
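The setup described is typically a local port forward through a bastion host in the public subnet into the private instance; a hedged sketch (hostnames, key path, user, and private IP are placeholders):

```shell
# Forward local port 9000 to SonarQube on the private instance,
# hopping through a bastion reachable from the internet.
ssh -i ~/.ssh/devkey.pem -N \
    -L 9000:10.0.2.15:9000 \
    ec2-user@bastion.example.com
# Then browse to http://localhost:9000
```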
https://redd.it/rgvr9b
@r_devops
How do you make OS upgrades without downtime?
We run our services in Docker containers. We have a few worker containers, a front-end app container, and then a MySQL container (host-mounted volumes).
We need to do some OS upgrades, but we don't want to shut down our system completely. Right now we just take the hit and plan for 10-30 minutes of downtime once per year. This isn't ideal. I'm trying to do some type of "failover", but with the database it's kind of challenging.
Is anybody doing something similar who can share their strategy?
Thanks!
https://redd.it/rh5213
@r_devops
Cross account schema
Is there any way we can allow consumers from other AWS accounts to access the schemas in the Glue Schema Registry?
Would appreciate any input.
https://redd.it/rh3nwk
@r_devops
Having 2 front ends (Vue 2 no TS Webpack, Vue 3 with TS Vite) running concurrently
Hi, we are migrating from an old app (Vue 2, no TS, Webpack) plugged into a Node.js API.
Both the front end and back end are in bad shape, have technical debt, etc.
They are used in production. We don't have time to recode them.
We are thinking about using a new app/service for all the new features, and recoding pieces of the old codebase when we need them in new stuff.
For the back end, no problem: we will add /v2 routes and develop our features / rebuild there with a new API.
But for the front end, we can't migrate it and clearly don't have time to recode it all.
Is there any way we could mix two front-end projects?
E.g.:
mycompany.com/user -> uses the front end of the old project
mycompany.com/operations -> uses the front end of the new project
I'm more back end than front end, so I don't have all the knowledge on how to mix 2 front-end projects.
All your suggestions are welcome.
Thanks for reading
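One common way to get exactly the mycompany.com/user vs /operations split described above is a reverse proxy routing by path prefix; a hedged nginx sketch (upstream names and ports are hypothetical):

```nginx
server {
    listen 80;
    server_name mycompany.com;

    # Old Vue 2 app keeps serving its existing paths.
    location /user {
        proxy_pass http://old-frontend:8080;
    }

    # New Vue 3 app serves the new features.
    location /operations {
        proxy_pass http://new-frontend:3000;
    }

    # Shared API; /api/v2 routes land on the same backend.
    location /api {
        proxy_pass http://backend:8000;
    }
}
```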
https://redd.it/rh7bh2
@r_devops
Software architecture diagramming and design tools
https://softwarearchitecture.tools
https://redd.it/rhb2ze
@r_devops
Gitlab !reference tags in jsonnet
I’m doing something like this in jsonnet:
script: [
  '!reference [".git:config", script]',
  // install the collection and dependencies
  'ansible-galaxy install -r ' + ansible_requirements,
],
which is generating this code
"script": [
  "!reference [\".git:config\", script]",
  "ansible-galaxy install -r ansible/requirements.yml"
],
and so the !reference tag is being treated as a string and not actually doing the reference behavior. Can anybody suggest the “correct” strategy or a workaround? Thanks!
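Since jsonnet emits plain JSON, which has no YAML tags, one workaround is to drop !reference entirely and do the reuse on the jsonnet side: keep the shared script lines in a jsonnet value and concatenate arrays (the names below are hypothetical):

```jsonnet
// Reuse on the jsonnet side instead of via the YAML !reference tag.
// (The shared block would normally live in its own .libsonnet file
// and be pulled in with `import`.)
local git_config_script = [
  'git config user.email ci@example.com',  // whatever ".git:config" did
];
local ansible_requirements = 'ansible/requirements.yml';

{
  script: git_config_script + [
    // install the collection and dependencies
    'ansible-galaxy install -r ' + ansible_requirements,
  ],
}
```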
https://redd.it/rh867k
@r_devops
I’m doing something like this in jsonnet:
script: [
'!reference [".git:config", script]',
// install the collection and dependencies
'ansible-galaxy install -r ' + ansible_requirements
],
which is generating this code
"script": [
"!reference [\".git:config\", script]",
"ansible-galaxy install -r ansible/requirements.yml"
],
and so the !reference tag is being treated as a string and not actually doing the reference behavior. Can anybody suggest the “correct” strategy or a workaround? Thanks!
https://redd.it/rh867k
@r_devops
reddit
Gitlab !reference tags in jsonnet
I’m doing something like this in jsonnet: script: [ '!reference [".git:config", script]', // install the collection and...
DevOps boot camp Linux foundation
Hello,
Has anyone tried the DevOps boot camp course from the Linux Foundation who can give some feedback? I'm thinking of buying it.
Thanks
https://redd.it/rhcwls
@r_devops
New to DevOps. When having trouble getting something to work, what is your thought process for debugging?
I'm new to the DevOps world and, as everyone does, I'm having trouble with everything I try to do. Typically I start by looking up my error online, but because I'm in defense, nearly all of the "oh, you need this package, go grab it here" type answers aren't options. Ultimately I end up spitballing until I either ask one of our senior developers or note down my issues and what I've tried and move on.
Obviously this isn't sustainable or good practice. So my question to everyone here is this: how do you debug and go about resolving issues? How do you know if it's a permission issue, an ownership issue, or a missing package? What is your thought process?
TIA
https://redd.it/rheqrt
@r_devops