Where to start
Hello. I just graduated with a B.A. in computer science and am considering the DevOps route. Where should I focus first, and what are some key pointers for beginners? Thanks in advance!
https://redd.it/n50150
@r_devops
Transition to DevOps without getting burned
Guys, how's it going?
As the title says, I'm looking to make a transition to a DevOps role, but I'm burned out haha
I have a solid background as a sysadmin using Linux, Docker, AWS, bash... and I also have my CCNA
The thing is, I'm trying to learn a lot of technologies at the same time and it's frying my brain
Last night I stayed up until 1am (after 9 hours of work) with Python/K8s/Ansible/Terraform... I got completely fried and got nothing out of it
So I guess my question would be: what should I learn next? Python? K8s? More cloud? Terraform? CI/CD?
There are so many things that I honestly don't know where to begin or what to focus on.
I'm 35 now, and I want to keep learning, but I feel completely stuck.
Thanks!
https://redd.it/n4zroe
@r_devops
Web development - Smooth transition or clean cut?
Hi everyone,
I am in a kind of "luxury situation" and would appreciate different opinions on my situation in order to make a sound decision.
I worked as a senior IT project manager at a big e-commerce company, and right now I am doing a full-time web developer coding bootcamp, which ends in around 5 weeks. I have an offer from my old company to start again as a senior IT project manager. My ultimate long-term goal is to work remotely as a Ruby/JS backend developer.
Should I take the job and transition slowly (doing Codewars and my own projects on the side), advancing my skills only in my free time while searching for a new job, or make a clean cut and look for a junior dev position (probably earning 40% less for the next few years) with more time to focus on advancing my skills as part of my day-to-day job?
Thanks for your support girls and guys :)
https://redd.it/n4z4eo
@r_devops
What and How each stream works?
Can you ELI5 what each bolded service/framework does?
For data ingestion, you write and build a piece of code in an IDE (**Gradle** is used, and **Artifactory** is one of the configs set up for Gradle, with **Metorikku** and **DTSv3** for version config), you commit this code to **Stash**, use **Bamboo** for CI/CD, create a keytab and conf.json and generate a DTS credential in a terminal, copy this json file to an S3 bucket, and finally you trigger it in **Airflow**?
https://redd.it/n5cln3
@r_devops
Running Jenkins and Gitea itself as container managed by Kubernetes or locally on a server?
Dear Community,
**Fixed assumption:** I have a physical RHEL 7 or 8 server to set up some CI/CD tools on. (I know there are better operating systems for my use case, at least according to opinions in certain blogs.)
**Goal:** Set up an experimental DevOps environment to gather experience, with the aim of setting up a real DevOps environment for a small team in the (distant) future. I want to use the following tools:
* Gitea
* Jenkins
* tests, deployments, etc. run in pods, using a container runtime and Kubernetes to orchestrate the pods containing the containers
**Question:** A lot of guides tell you to run Gitea and Jenkins themselves as containerized applications inside a Kubernetes cluster. I would like to understand why, and the pros and cons. So which of them should run as a container inside a pod (Gitea, Jenkins, both)? Why, and why not?
**Thoughts:** Probably the major factor in favor of running those applications in pods is that the system becomes more resilient. A disadvantage could be that it is more difficult to deal with persistence and consistency of databases and storage. I also started an earlier thread on this topic, where I added this question as a comment on some answers: [https://www.reddit.com/r/devops/comments/mw6jp7/setting\_up\_cicd\_git/](https://www.reddit.com/r/devops/comments/mw6jp7/setting_up_cicd_git/)
I appreciate all your help and thank you very much for your help, time, and consideration.
https://redd.it/n5dnqa
@r_devops
Carbon cost of infra-as-code
I've been toying with the idea of showing carbon emission estimates as part of the free/open source Infracost CLI tool for Terraform projects.
I've seen estimates that data centers consume around 1% of the global electricity supply [1] and that this could increase to between 3% and 13% by 2030 [2]. The wider ICT ecosystem accounts for 2% of the world's carbon emissions, putting it on par with the entire aviation industry [3].
It seems possible to show "carbon costs" for basic compute (EC2), storage (S3), and data transfer, but not easy for services built on top of these raw primitives, e.g. DynamoDB. However, I'm wondering if people would find that helpful, or if it would change anything about "cloud waste". That waste is estimated at around $17bn out of the $50bn spent on IaaS in 2020 [4]. The main causes of the waste are idle resources and over-provisioned resources, so maybe if DevOps/SREs/devs have the carbon costs, they can incentivize people to use those resources more efficiently? Has anyone seen infra carbon costs in their organization's carbon accounting reports?
1. https://www.iea.org/reports/data-centres-and-data-transmission-networks
2. https://www.mdpi.com/2078-1547/6/1/117
3. https://www.nature.com/articles/d41586-018-06610-y
4. https://www.gartner.com/en/newsroom/press-releases/2019-11-13-gartner-forecasts-worldwide-public-cloud-revenue-to-grow-17-percent-in-2020
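As a sense check on whether per-resource carbon figures are even meaningful, here is a back-of-envelope sketch of the arithmetic such a tool would do for one instance (all four input figures below are illustrative assumptions, not numbers from the sources above):

```shell
# kWh = hours * average watts / 1000
# kgCO2 = kWh * PUE (data-center overhead) * grid intensity (kg CO2 per kWh)
kg=$(awk 'BEGIN {
  hours = 730; watts = 30; pue = 1.2; intensity = 0.4   # assumed figures
  printf "%.1f", hours * watts / 1000 * pue * intensity
}')
echo "$kg kgCO2 per instance-month"   # prints "10.5 kgCO2 per instance-month"
```

The hard part is not this arithmetic but sourcing credible per-instance-type wattage and per-region grid intensity, which is presumably where most of the estimation error would live.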
https://redd.it/n5huzs
@r_devops
HIRING Kubernetes Administrator - London
I am currently recruiting for a new Kubernetes Administrator position with a Gartner Magic Quadrant group building massive scale data storage tech.
The team has grown its UK tech arm to over one hundred people since opening last year (as part of a large global tech group) and is building out a large and varied Operations and Reliability Engineering group, focused on container growth as part of a large Kubernetes/OpenShift project. They are looking for people who enjoy working with Kubernetes and have a proven record of building clusters and supporting the wider team in using those technologies.
The team is based near St Paul's in more normal circumstances (currently fully remote, with flexibility post-Covid) and we can look at salaries from mid-level all the way up to £110,000, plus bonus, pension, and private health package.
For more information:
💻 drop me a message on LinkedIn
📩 [email protected]
📞 01727225558
https://redd.it/n5j5x6
@r_devops
NPM+NODEJS
Hello guys, hope all of you are doing well. Yesterday my team lead asked me to do a small session about npm, explaining its purpose and how we can use it as DevOps engineers. I have good knowledge of JavaScript (basics such as functions, OOP, etc.), but I have never used npm. So my question is: as a DevOps engineer, why do we need to learn Node.js and npm, and where can we use them?
Thanks.
https://redd.it/n5mm07
@r_devops
Choosing proper tool for infrastructure/servers state validation
Hi! We are a small DevOps team deploying OpenShift/k8s clusters. We need a tool to validate cluster state, e.g. whether the k8s API is accessible, the image registry is routed, and so on. More DevOps folks will potentially be joining, so we want everything as code, so everyone can run the tests and see if a system has diverged.
I'm choosing between:
1. Chef InSpec - Pros: I like the syntax and the many out-of-the-box features, and I don't mind writing the Ruby DSL. Cons: is Chef still a thing? It seems they've dropped open-source support. The installation footprint also seems a bit overkill for us (it will require Ruby and possibly other dependencies).
2. goss - Pros: single-binary install, since it's written in Go, so it's easy to deploy. Cons: I am not a fan of YAML DSL/coding; also, goss does not seem to show a command's stdout in its reports, which I consider a significant flaw. Judging by the latest commit dates, the project seems a bit abandoned, or at least not actively maintained.
3. Write our own solution using Python/bash/whatever. Pros: maximum flexibility. Cons: it would take some time and effort, and I don't want to reinvent the wheel if a tool I'd really like already exists.
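For a sense of how small option 3 can stay, here is a sketch of the shape such a hand-rolled checker might take. The two commented probe commands (hostnames and endpoints) are placeholders, not real cluster addresses:

```shell
# Tiny check runner: prints PASS/FAIL per probe and keeps the probe
# command's output in the report (the stdout that goss reportedly hides).
check() {
  desc=$1; shift
  if out=$("$@" 2>&1); then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc -- $out"
  fi
}

# Real probes would look something like (hypothetical hostnames):
#   check "k8s API reachable"  curl -ksf https://api.cluster.local:6443/healthz
#   check "registry route up"  curl -ksf https://registry.apps.cluster.local/v2/

# Demonstration with trivial commands:
check "true succeeds" true
check "false fails"   false
```

Whether this beats InSpec or goss depends mostly on how many checks you accumulate; past a few dozen, a framework's reporting and filtering start to pay for themselves.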
https://redd.it/n5ozee
@r_devops
What is the difference between devops and SRE?
Dear colleagues.
What is the difference between devops and SRE?
Could you please provide an example?
Thanks in advance!
https://redd.it/n5xfix
@r_devops
Question about moving puppet infrastructure to docker
We use Jenkins to set up our Puppet infrastructure and install the product. There are three components involved: Puppet Server 6.x, Jenkins, and Nginx acting as a package manager. If this setup is to be converted to Docker, what is the best approach: combining Jenkins and the Puppet server in one image, or keeping them separate? Nginx will be a separate container.
https://redd.it/n5zzw2
@r_devops
Help required to setup vault with RAFT HA and database storage backend.
I am trying to set up HashiCorp Vault with Raft for high availability and Postgres as the storage backend, with TLS enabled. The only problem I'm facing at the moment is that I am unable to join the various Vault nodes into the Raft HA cluster.
I'm running Vault on Docker [the three nodes are part of the same Docker network] and used openssl to generate a self-signed certificate to test the TLS setup.
This is my vault.hcl:
ha_storage "raft" {
  path    = "/vault/file/"
  node_id = "vault3"
}
storage "postgresql" {
  connection_url = "postgres://<username>:<password>@postgres:5432/<dbname>?sslmode=disable"
}
listener "tcp" {
  address       = "0.0.0.0:8220"
  tls_cert_file = "/etc/certs/kms.crt"
  tls_key_file  = "/etc/certs/kms.key"
}
default_lease_ttl = "2208h"
max_lease_ttl     = "4320h"
disable_mlock     = true
ui                = true
cluster_addr      = "https://vault3:8221"
api_addr          = "https://vault3:8220"
The first node, upon unseal and initialization, joins itself to a new Raft cluster.
The second node, which is unsealed using the keys generated when the first node was initialized, goes into standby mode. When I try to join the second node to the first node's Raft cluster with
vault operator raft join -leader-client-cert=/etc/certs/kms.crt -leader-client-key=/etc/certs/kms.key
(I also tried the -client-cert and -client-key options; same error), I get the following:
core: attempting to join possible raft leader node: leader_addr=https://vault1:8200
vault1 [INFO] http: TLS handshake error from 172.25.0.6:39286: remote error: tls: bad certificate
vault2 [WARN] core: join attempt failed: error="error during raft bootstrap init call: Put "https://vault1:8200/v1/sys/storage/raft/bootstrap/challenge": x509: certificate is not valid for any names, but wanted to match vault1"
vault2 [ERROR] core: failed to join raft cluster: error="failed to join any raft leader node"
I recreated the certificate with vault1 as the FQDN, which gives the following error:
core: attempting to join possible raft leader node: leader_addr=https://vault1:8200
vault2 [WARN] core: join attempt failed: error="error during raft bootstrap init call: Put "https://vault1:8200/v1/sys/storage/raft/bootstrap/challenge": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0"
vault2 [ERROR] core: failed to join raft cluster: error="failed to join any raft leader node"
vault1 [INFO] http: TLS handshake error from 172.18.0.5:47794: remote error: tls: bad certificate
I set the environment variable GODEBUG=x509ignoreCN=0, it didn't fix anything.
Any help would be much appreciated!
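Both errors ("not valid for any names" and "relies on legacy Common Name field") point at a certificate without Subject Alternative Names: modern Go, and therefore Vault, matches SANs only and ignores the CN. A sketch of regenerating the self-signed cert so the SANs cover every node hostname (file names and hostnames taken from the config above; -addext needs OpenSSL 1.1.1+):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# Self-signed cert whose SANs cover all three Vault node hostnames.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout kms.key -out kms.crt \
  -subj "/CN=vault1" \
  -addext "subjectAltName=DNS:vault1,DNS:vault2,DNS:vault3"

# Verify the SANs actually made it into the certificate:
openssl x509 -in kms.crt -noout -ext subjectAltName
```

With the SANs in place, the same `vault operator raft join` invocation should get past the TLS handshake without needing the GODEBUG workaround.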
https://redd.it/n5znsz
@r_devops
Test API of docker container in Azure DevOps CI/CD pipeline
Hi!
I'm working on setting up some ci/cd pipelines for a couple of small containers. The pipeline should be as follow:
1. Build docker image
2. Start container
3. Query the REST api of said container
4. Make sure the response is "reasonable"
5. Push to ACR
6. Deploy to AKS
It's numbers 3 and 4 that I'm struggling with. It seems kinda basic, but I haven't found any good resources online. I'm new to DevOps and I'm guessing I'm just googling the wrong terms, as this sounds like a basic, standard thing one would do in a pipeline. One way, I guess, would be to just docker run the container, curl it with a bash command, regex the response, and exit if the response contains "error". But I'm thinking there's probably a prettier solution out there.
Any suggestions or references to online resources would be highly appreciated!
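For steps 3 and 4, a minimal bash sketch of the "docker run + curl + check" approach is below. The image name, port, and /health endpoint in the comments are placeholders, not values from the post:

```shell
# Gate a pipeline step on a container's API response.
# check_response succeeds unless the body mentions "error".
check_response() {
  case "$1" in
    *error*) return 1 ;;
    *)       return 0 ;;
  esac
}

# In the real pipeline step it would be wired up roughly like this
# (image and endpoint are assumptions):
#   docker run -d --name api-under-test -p 8080:80 myregistry/myapi:latest
#   body=$(curl -sf --retry 5 --retry-connrefused http://localhost:8080/health)
#   check_response "$body" || { echo "API check failed"; exit 1; }
#   docker rm -f api-under-test

# Demonstration with canned responses:
check_response '{"status":"ok"}'     && echo "healthy response passes"
check_response '{"error":"db down"}' || echo "error response fails"
```

A prettier variant of the same idea is to run a small test suite (e.g. Postman/Newman or pytest) against the started container in its own pipeline job, so the assertions live in a test file rather than in shell regexes.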
https://redd.it/n63w2v
@r_devops
I developed a tool to train neural networks on AWS with a single command
Hey everyone,
My friend and I developed Nimbo, a dead-simple CLI that wraps the AWS CLI, allowing you to run code on AWS as if you were running it locally. GitHub: https://github.com/nimbo-sh/nimbo. Docs: https://docs.nimbo.sh.
We decided to build this because we were frustrated with how cumbersome using AWS was, and we just wanted to be able to run jobs on AWS as easily as we run them locally. All in all, we didn't like the current AWS DevOps user experience, and we thought we could drastically simplify it for the machine learning/scientific computing niche.
For this reason, we also provide many useful commands to make it faster and easier to work with AWS, such as one-command Jupyter notebooks on EC2, easily checking prices, logging onto an instance, or syncing data to/from S3 (you can see some useful commands here).
Unlike other similar services, we are solely client-side, meaning that the code runs on your EC2 instances and data is stored in your S3 buckets (we don't have a server; all the infrastructure orchestration happens in the Nimbo package).
We have tons of ideas for Nimbo, such as docker support and one-command neural network deployments.
We are happy to receive any feedback and suggestions you have.
https://redd.it/n6486v
@r_devops
build hello world java file in jenkins pipeline
hey folks,
how do we get a hello-world class file and jar file, and build, test, deploy, and release them in a Jenkins pipeline? I am really stuck on creating the pom.xml file for the Java class.
I also tried adding a git repo (in a scripted pipeline), but it says 'the recommended git is none' and that no credentials were provided.
Could anyone tell me the exact process to get the jar file and build it in Jenkins?
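For the pom.xml part: a minimal Maven project is enough for `mvn package` to produce a jar. A sketch, with the group/artifact names as placeholders and Hello.java assumed to live under src/main/java:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>hello</artifactId>
  <version>1.0</version>
  <packaging>jar</packaging>
  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>
</project>
```

`mvn -B package` then builds target/hello-1.0.jar, and that one command is what a Jenkins pipeline build stage typically runs (via `sh 'mvn -B package'` in a scripted or declarative pipeline).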
https://redd.it/n65i19
@r_devops
Mac In cloud alternatives
Amazon's new Mac EC2 instances are expensive, and we're unlikely to get approval to use them.
We currently use MacinCloud; it's just that the builds are slow.
https://redd.it/n69rft
@r_devops
Oauth flow and its impact on infrastructure
Hello, first post here :)
I'm helping with an OAuth / OpenID Connect implementation by designing the infrastructure, and the following issue took me by surprise: what's the deal once the final token has been acquired by the app?
Let's assume the following scenario:
1. Company Inc has a service that verifies personal assets. Now Enterprise Inc wants to use Company's services in order to offload that verification.
2. Company Inc decides to implement Oauth as a way to allow more entities to use Company's services, and decides to eat their own dog food i.e. use Oauth internally.
3. So far so good, suddenly Little Business Ltd decides to send Company's 1000s of assets to be verified.
4. Once the final token was aqcuired by the backend app, how does the backend app know whether the token is still valid? Regardless of expiry time I mean. Should the backend app ask the authentication provider if the token is still valid? Does it need ask it inexorably via API endpoint for each transaction?
5. If the answer to the above question is more or less positive, does it mean I need to build a separate (and big!) infrastructure?
Thanks in advance!
https://redd.it/n68gya
@r_devops
Leave on-prem devops job to pursue cloud?
Hi, I could use some advice. I'm an on-prem DevOps engineer with about two years of experience. So far at my current job I've mostly learned Ansible, Jenkins, Docker, Linux sysadmin, and related things (I started as a junior and am now intermediate).
I like my job (and the people I work with), and I see a path for me to grow into a senior DevOps role. My concern is whether I should try to switch to a cloud company, as I have no AWS/Azure/GCP experience, and due to the nature of my company we never will.
Is it stupid to leave a job I like just to get on the cloud track sooner rather than later? Or is it something I could just learn in my free time? The pay at my current job is fine, though if I switched I expect I could get an extra 10-20%.
Not really sure what to do... thanks!
https://redd.it/n62baq
@r_devops
Can I use Github Secrets locally?
So, when I don't do production builds but rather basic local development with my app (a `Dockerfile` and `docker-compose up`), can I use GitHub Secrets without knowing the secrets?
More context: I could find them out, of course, but assuming you have a team of devs, how can they use the secrets for their day-to-day development without actually knowing them?
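For context: GitHub Actions secrets are write-only by design; the API and `gh` CLI can set them but never read values back, so they can't be pulled down for local runs. The usual workaround is a separate mechanism for local development: a git-ignored `.env` file that Compose reads, with the values distributed through a proper secret manager so developers fetch rather than paste them. A minimal sketch, with placeholder variable names:

```yaml
# docker-compose.yml -- Compose automatically loads a .env file from
# the project directory and substitutes ${...} references from it.
services:
  app:
    build: .
    environment:
      # placeholder names; values come from the git-ignored .env file
      API_KEY: ${API_KEY}
      DB_PASSWORD: ${DB_PASSWORD}
```

Keep `.env` in `.gitignore`, and have teammates populate it from something like Vault, AWS Secrets Manager, or 1Password. Developers still technically *can* see the values this way; if they must never see them, the secrets have to stay out of local dev entirely (e.g. local services use dev-only credentials).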
https://redd.it/n65195
@r_devops
Gaming dev industry insight
I have been working as a project manager (PM) in ERP for about nine years as a consultant, and have managed various digitalization projects, which are often cross-functional with a large group of stakeholders involved and a mix of agile and waterfall development.
Lately I have become interested in continuing as a PM in the gaming industry, and would like some insight into what those projects look like at a high level: what roles are included, the different project phases, and the systems used for knowledge sharing and tracking (ServiceNow, Jira, etc.).
Thanks in advance! 🙏
https://redd.it/n62pze
@r_devops
Oracle integration with Git
Hello r/DevOps
We have an Oracle database that we use, and we store our schema in SVN. We are now planning to migrate to Git. What is the best way to set up Git so that it tracks schema and table updates, and the build only builds what has changed on the database? We can build with Jenkins to our server.
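One common way to get "build only what changed" is to stop diffing the schema altogether and instead keep versioned migration scripts in Git, applied from the Jenkins job by a tool such as Flyway or Liquibase. The tool records which migrations have already run in a history table in the database, so each build applies only the new ones. A hypothetical Flyway-style layout (file names, table, and columns are made-up examples):

```sql
-- sql/V1__create_customers.sql
-- Flyway runs files in version order and records each one in its
-- schema history table, so re-running the Jenkins job applies only
-- migrations it has not seen before.
CREATE TABLE customers (
    id   NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    name VARCHAR2(200) NOT NULL
);

-- sql/V2__add_customer_email.sql
ALTER TABLE customers ADD email VARCHAR2(320);
```

The Jenkins step then boils down to something like `flyway -url=jdbc:oracle:thin:@//$DB_HOST:1521/$SERVICE -user=$DB_USER -password=$DB_PASS migrate`, with the connection details coming from Jenkins credentials rather than the repo.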
https://redd.it/n5thor
@r_devops