macOS 11.1, VMware Fusion and Vagrant Plug-in for kitchen-CI [HELP]
Hi all,
After VirtualBox stopped working with Big Sur, our company told us to use VMware Fusion as the hypervisor of choice for corporate Macs, so I need to replicate my VirtualBox-based Test Kitchen setup on VMware/Vagrant.
System configuration:
macOS 11.1
VMWare Fusion 12.1 (Latest)
Vagrant VMWare Plug-in 1.0.17 (purchased today)
Chef Workstation 20.7.96 (Infra Client 16.2.73)
VAGRANT_DEFAULT_PROVIDER:
```
$ echo $VAGRANT_DEFAULT_PROVIDER
vmware_desktop
```
**kitchen.yml:**
```yaml
driver:
  name: vagrant
  provider: vmware_fusion
  network:
    - ["forwarded_port", {guest: 5985, host: 55985}]

provisioner:
  name: chef_zero
  log_level: warn

platforms:
  - name: W2012-3.2.12-14DEC20
    driver:
      host: 127.0.0.1
      port: 55985
      guest: windows
    transport:
      name: winrm
      elevated: true
      elevated_username: SYSTEM
      elevated_password: null
    driver_config:
      gui: true
      guest: windows
      username: Administrator
      password: *********
      communicator: winrm

suites:
  ....
```
When I run `kitchen converge` I get this:
```
-----> Starting Test Kitchen (v2.5.3)
-----> Creating <VM-W2012-3212-14DEC20>...
       Bringing machine 'default' up with 'vmware_fusion' provider...
       ==> default: Box 'W2012-3.2.12-14DEC20' could not be found. Attempting to find and install...
           default: Box Provider: vmware_desktop, vmware_fusion, vmware_workstation
           default: Box Version: >= 0
       ==> default: Box file was not detected as metadata. Adding it directly...
       ==> default: Adding box 'W2012-3.2.12-14DEC20' (v0) for provider: vmware_desktop, vmware_fusion, vmware_workstation
           default: Downloading: W2012-3.2.12-14DEC20
       An error occurred while downloading the remote file. The error
       message, if any, is reproduced below. Please fix this error and try
       again.

       Couldn't open file /Users/<pathto>/.kitchen/kitchen-vagrant/VM-W2012-3212-14DEC20/W2012-3.2.12-14DEC20

>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>> Failed to complete #create action: [Expected process to exit with [0], but received '1'
---- Begin output of vagrant up --no-provision --provider vmware_fusion ----
STDOUT: Bringing machine 'default' up with 'vmware_fusion' provider...
==> default: Box 'W2012-3.2.12-14DEC20' could not be found. Attempting to find and install...
    default: Box Provider: vmware_desktop, vmware_fusion, vmware_workstation
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'W2012-3.2.12-14DEC20' (v0) for provider: vmware_desktop, vmware_fusion, vmware_workstation
    default: Downloading: W2012-3.2.12-14DEC20
STDERR: An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Couldn't open file /Users/<pathto>/.kitchen/kitchen-vagrant/VM-W2012-3212-14DEC20/W2012-3.2.12-14DEC20
---- End output of vagrant up --no-provision --provider vmware_fusion ----
Ran vagrant up --no-provision --provider vmware_fusion returned 1] on VM-W2012-3212-14DEC20
```
This gives me the impression the Vagrant VMware driver is not working. I had the setup working before, but with macOS 10.15, VMware Fusion 11 and Vagrant VMware plug-in 1.0.7; the latter doesn't work with macOS 11.1.
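For what it's worth, the output shows Vagrant treating the platform name as a box name and trying to download it. One way to rule out box resolution as the culprit is to point the platform at a local .box file explicitly in kitchen.yml. A sketch, where the box file path is an assumption:

```yaml
platforms:
  - name: W2012-3.2.12-14DEC20
    driver:
      box: W2012-3.2.12-14DEC20
      box_url: file:///Users/<pathto>/boxes/W2012-3.2.12-14DEC20.box
```

Equivalently, pre-registering the box with `vagrant box add W2012-3.2.12-14DEC20 /Users/<pathto>/boxes/W2012-3.2.12-14DEC20.box` should let `vagrant up` skip the download step entirely.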
Can anyone please share wisdom on how to get this working?
Regards and happy holidays.
https://redd.it/kisgfu
@r_devops
Has anyone figured out a trunk based strategy using GitHub Actions?
I come from Azure DevOps where we use separate build and release pipelines that are linked and therefore implementing this is trivial, the release pipeline has access to various build pipeline variables and published artifacts.
I found this post https://www.reddit.com/r/devops/comments/gnnr5a/functionality_to_trigger_github_actions_builds_on/?utm_source=amp&utm_medium=&utm_content=comments_view_all and it's not really what I'm looking for; I'm hoping there is a native solution. The comments on that thread are pretty terrible, not what I've come to know from this sub, but I'm a more recent subscriber.
The idea I'm shooting for, if you're unfamiliar, is building once and promoting artifacts (or container images) to environments by approvals, button clicks, or other checks, as opposed to using separate branches that create new builds for each environment (which seems to be the new norm? but I'm not interested in doing things that way).
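For what it's worth, GitHub's deployment environments (with required reviewers) plus artifact upload/download can approximate build-once-promote natively. A sketch, where the job names, scripts, and environment names are all assumptions:

```yaml
name: build-once-promote
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./build.sh                  # produce the artifact exactly once
      - uses: actions/upload-artifact@v2
        with:
          name: app
          path: dist/

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging                 # approvals are configured on the environment
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: app
      - run: ./deploy.sh staging

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production              # required reviewers gate this job
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: app
      - run: ./deploy.sh production
```

The same artifact flows to every environment; each `environment:` gate can require a manual approval in the repo settings before its job starts.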
https://redd.it/kiidbl
@r_devops
Hi team, is there any way to periodically rotate HashiCorp Vault tokens automatically, without manual intervention?
Looking for a feasible way to generate dynamic tokens for the KV secrets engine in HashiCorp Vault, for automation purposes.
With the CLI we can do it, but is there a suggested automated way to do this without human intervention?
I need to generate a new token after some amount of time with HashiCorp Vault, in an automated way.
I am considering writing a custom external service, but is there any way that HashiCorp's own tooling provides?
Need some suggestions.
Thanks in advance.
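For what it's worth, Vault Agent's auto-auth can handle this without a custom service: it logs in, writes a fresh token to a sink file, and keeps renewing or re-authenticating on its own. A sketch of an agent config, assuming AppRole auth is enabled; the address and file paths are made up:

```hcl
# vault-agent.hcl (sketch; paths and auth method are assumptions)
vault {
  address = "https://vault.example.com:8200"
}

auto_auth {
  method "approle" {
    config = {
      role_id_file_path   = "/etc/vault/role_id"
      secret_id_file_path = "/etc/vault/secret_id"
    }
  }

  # The agent keeps this file populated with a valid token,
  # renewing or re-authenticating as needed.
  sink "file" {
    config = {
      path = "/run/vault/token"
    }
  }
}
```

Periodic tokens (`vault token create -period=24h`) are the other common approach if all you need is a renewable token with a fixed renewal window.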
https://redd.it/kiuurx
@r_devops
Exporting DynamoDB to S3 — cross-account and SSE-KMS encryption
I have written a new article on exporting DynamoDB to S3 - a special case where the bucket is in another account and objects are to be encrypted using SSE-KMS
https://sunilkumarmohanty.medium.com/exporting-dynamodb-to-s3-cross-account-and-sse-kms-encryption-c74193e12438
https://redd.it/kiloc7
@r_devops
Template library to meet common OPA-with-Terraform requirements
Hey guys, we know that getting started with OPA can be hard, so we’ve built a reusable kit of templates for use with Terraform to help you get your first policies up and running (resource type whitelisting, regex matching, ...)
https://github.com/scalr-eap/policy-templates
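For context, a minimal resource-type whitelisting policy of the kind such template kits cover might look like this. A sketch against `terraform plan` JSON output; the package name and allowed set are assumptions, not taken from the linked repo:

```rego
package terraform.analysis

# Only these Terraform resource types may appear in the plan.
allowed_resource_types := {"aws_s3_bucket", "aws_iam_role"}

deny[msg] {
    rc := input.resource_changes[_]
    not allowed_resource_types[rc.type]
    msg := sprintf("resource type %v is not whitelisted", [rc.type])
}
```

It can be evaluated locally with `opa eval --input plan.json --data policy.rego "data.terraform.analysis.deny"`.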
https://redd.it/ki6nul
@r_devops
A good de/centralized credentials repository?
I am wondering whether such a thing exists: something that software like Jenkins or Ansible could connect to, to automate fetching some of the keys/certs.
I think Vault is the one, but I feel it is a bit complex.
Would appreciate some rotation mechanism / webhooks.
https://redd.it/ki5llh
@r_devops
Screwing up remote access to dozens of servers within seconds
Hey folks,
sharing a story of me screwing up big time back in the day.
https://brennerm.github.io/posts/screwing-up-remote-access-to-servers.html
Feel free to share yours to make me feel better. ;)
Enjoy your holidays!
https://redd.it/ki2vsn
@r_devops
Using H2 as a temp in memory DB for test purposes instead of Oracle in docker
Right now I am spinning up a whole Oracle database in a pipeline (with Docker) to run jobs that test SQL migration scripts. I heard that H2 is able to use Oracle syntax as well. Does anyone have experience with this?
If so, is an application able to connect to this database with the Oracle JDBC client as well?
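For what it's worth, H2's Oracle compatibility mode is selected in the JDBC URL, so a datasource sketch could look like this. The property names assume a Spring-style config; adjust for your stack:

```properties
# H2 in-memory DB emulating Oracle syntax (compatibility is partial, not complete)
spring.datasource.url=jdbc:h2:mem:testdb;MODE=Oracle;DB_CLOSE_DELAY=-1
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
```

Note that the application connects through H2's own JDBC driver, not the Oracle JDBC client; only the SQL dialect is emulated, and vendor-specific features (PL/SQL packages, some built-in functions) still differ, so migration scripts that rely on them may not be testable this way.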
https://redd.it/kj06xy
@r_devops
Increasing Base Salary
Happy holidays!
My current title is “Senior DevOps Engineer”. I am based in Seattle area and my current base salary is 170k (total package is about 200k).
I’ve just had an initial interview with an HR lady at some company, and she was really surprised when I told her I was looking for a base of 200k or something close to it (this company does not offer any equity).
I want to eventually go into management. Is the management position the only way to earn above 200k base?
Wanted to see if there are any other options.
Thanks.
https://redd.it/kizjve
@r_devops
Resources to start learning
Hi everyone
I am moving to DevOps soon. My manager recommended that I start learning PowerShell, YAML, and Jenkins.
Do you have any resources I could look into to help me start?
https://redd.it/kj1orw
@r_devops
EC2 Public key authorisation failure issue.
Hi Everyone, I hope I’m in the right sub to post this - it’s a new area of learning for me. I recently set up an instance of Linux on EC2 - all good. I remote in via SSH from my Mac using a key pair I generated from the EC2 console (also good). Now I decided to automate the ssh login and I think I did something (like generate a key using a command line on my local machine) and now I can’t ssh in at all. The verbose output indicates it fails right at the end with the public key authorisation. I deleted the EC2 instance and made a fresh one and still the same thing happens. If I use the EC2 console via the browser and login into the instance that way, I can get to the command line but I can’t access the instance from my local machine. I have tried to make sense of the documentation (still working through it) - but it’s proving confusing. Is there another resource someone could point me to or explain where the public key is located and why it’s causing me an issue? Thanks.
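For context, a common cause of this symptom is that a newly generated default key (e.g. `~/.ssh/id_rsa`) gets offered before, or instead of, the EC2 key pair. A hedged `~/.ssh/config` sketch that pins the right key; the host alias, hostname, user, and key path are assumptions (Amazon Linux uses `ec2-user`, Ubuntu uses `ubuntu`):

```
Host my-ec2
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/my-ec2-keypair.pem
    IdentitiesOnly yes
```

The private key file must be `chmod 400`, and the matching public key lives on the instance in `~/.ssh/authorized_keys` for that user; running `ssh -v` shows which identity files are actually being tried.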
https://redd.it/kiyp2q
@r_devops
Willing to pay someone to do my exam, it consists of kubernetes, ansible and gitops. DM me
https://redd.it/kj4s4a
@r_devops
Synchronize time via NTP before starting any services in Linux
Regular NTP clients change the clock gradually, so if a host starts with a big clock error (AWS instances sometimes start several minutes in the past), you get timestamps and log events in the past. Not always a good idea.
An article on how to force NTP time synchronization before starting any services, using chrony:
https://selivan.github.io/2020/12/23/ntp-sync-time-before-starting-any-services.html
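For context, the usual chrony knobs involved are `makestep` (step the clock instead of slewing when the startup offset is large) and ordering services after time sync. A sketch; the thresholds here are assumptions:

```
# /etc/chrony.conf
# Step the clock if the offset exceeds 1 second during any of the
# first 3 updates after startup, instead of slewing slowly.
makestep 1.0 3
```

On distributions that ship it, enabling `chrony-wait.service` and ordering dependent units with `After=time-sync.target` is what actually keeps services from starting with a wrong clock; the linked article walks through the details.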
https://redd.it/kixcry
@r_devops
Crash Course in Linux from a DevOps Perspective?
tldr: what are some good resources for learning Linux from a DevOps perspective, or if that seems less important, some resources for picking up on the DevOps toolchain as a whole?
Hey gang! I've been lucky enough to land a DevOps internship at a mid-sized telecommunications company recently. I just finished up a two-year program in mobile development (JS, Java, Swift, Kotlin, agile practices, etc.) and was offered an internship in a non-technical department of the company at the start of quarantine. I didn't feel like being choosy about a paying gig in such an uncertain time. I requested a transfer to one of the software departments, and this opening in DevOps was the first position available to me.
I know imposter syndrome is pretty common in the field, but I really don't feel like I would've been given this position if it wasn't a transfer. I think I would feel this to a smaller degree in a department specifically in development. I have some small experience developing in Linux (with java) and have heard of most of the tools in our DevOps toolchain, but for the most part have a hard time figuring out my tasks.
I don't mind a challenge, and I'm finding that I really enjoy this field. I'm not really getting any training, it's more of throw-them-in-the-deep-end form of training, although I've been given a mentor after asking for one. My concern is that I'm so under-qualified that I'm not pulling my weight, and that I might be fired or transferred. I thought learning more about the DevOps toolchain might be the best kind of course to pick up first, but after my first one-on-one with my boss, it sounds like learning Linux deeper might be a better use of my free time.
My question is what kind of Linux material should I be learning? Should I focus on shell scripting, network engineering, network security, or something else I don't know? All of the above? Maybe they're all more interrelated than I currently realize.
My current resource is Lynda.com, which I have access to from my school account. One nice thing about it is that certificate completions also show up on LinkedIn, but I'm obviously open to any free resources if you know of any specific videos/tutorials/courses. Or if you have any advice for someone that feels overwhelmed to a concerning degree.
I didn't scour the subreddit thoroughly, so my apologies if I missed a simple beginner's resources pin somewhere. Longtime Reddit lurker, I think this might be my first post though, so you know it's important to me!
Thanks ahead of time for any helpful advice or resources. (:
https://redd.it/kivgjb
@r_devops
Business side of DevOps
Hi! I've been looking into DevOps positions to try to change fields, and I'm wondering if a more business-oriented approach is a plus. In this article the business side is highlighted, not the tech one (of course, technology is key, but so are the approach and the points themselves). What are your thoughts: is being only tech-oriented better for getting hired, or combining tech + business understanding?
https://redd.it/kjgsxo
@r_devops
Kubernetes API Explained
The Kubernetes API is made of several smaller components. In this video you will see how a request has to go through authentication, authorisation, mutation & validation before it is persisted by Kubernetes.
https://youtu.be/aTFmtac2wCg
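For context, the mutation and validation steps mentioned above are implemented as admission webhooks. A minimal validating-webhook registration sketch; the names, URL, and rules are made up:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validator
webhooks:
  - name: validate.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      url: "https://validator.example.com/validate"
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

Every matching request that passes authentication and authorisation is sent to the webhook as an AdmissionReview; the API server only persists the object if the webhook allows it.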
https://redd.it/kjdcu5
@r_devops
Deployer - an easy trigger for deploy script from remote
Hi everybody! Let us present a small useful tool for DevOps: Deployer. This tool triggers deployment of a new app version to target Linux servers. The main idea: you add deployment commands to a config file and then call them remotely over HTTP/HTTPS.
We use this tool to deploy new app versions from the GitLab pipeline to our servers. We hope that it will be useful for you as well! It is an open-source tool, and we are open to any suggestions!
This utility is located in these repositories:
https://gitlab.com/junte/devops/deployer - main repository
https://github.com/Junte/deployer - mirror from GitLab
https://redd.it/kjc21j
@r_devops
Need advice on how to set up authentication for an internally hosted webserver/service
When it comes to auth I feel I'm in a little over my head. I am the owner of an internally hosted gRPC server for our (large) company, hosted on a basic Kubernetes multi-pod cluster deployment. Its endpoints are currently exposed with no authentication enabled, so anybody within the company can hit them. There are a couple of ways the server receives requests:
- Via other gRPC Golang clients hosted outside of the cluster
- Via web browser
My requirements are to lock down the client + endpoints with permissions.
- Users must log into our web-browser client via company SSO, and their token maps to certain permissions that we set in the backend (I am guessing the way to do this is to have an "Auth" table that maps SSO tokens to various levels of permissions)
- Golang clients must be whitelisted and authenticate with our server in some way. I am guessing certificate-based auth?
Is there somewhere I can read or learn about a standard approach to implementing these authentication requirements?
https://redd.it/kjlpwo
@r_devops
2020 Cloud and Development Trends. Thoughts?
Hey All,
I've been thinking a lot about the different cloud and dev trends throughout 2020. I came up with five that I think are the most relevant and the most eye-opening.
The first is GitHub Codespaces. The more we write code, the more we need a centralized location to write the code. For example, let's say there are 5 devs on a team that are writing code. Each dev (perhaps) has a different workstation setup. If they don't have the same extensions and tools as everyone else, there could be unknown issues that you can't really prepare for.
The second is Azure Arc, specifically Azure Arc for Kubernetes. Although the tech, in general, is really cool, I think it shows that Microsoft is starting to think about on-prem and hybrid again. It's pretty clear that on-prem isn't going away, at least not anytime soon. Because of that, why not manage the on-prem stuff in Azure as well?
The third is infrastructure-as-software. With AWS CDK, HashiCorp CDK, and Pulumi, we're starting to see a HUGE trend in creating and managing cloud services with a general-purpose programming language (Go, JavaScript, etc.).
The fourth is something that we've been seeing for a while, but it's become extremely apparent this year: coding for sysadmins and infrastructure engineers. I think we're going to see an upward trend where everyone will be coding in some way or another.
Finally, I love the specialty and career-driven certifications that we're seeing. I've never been a huge certification guy myself, but I'm definitely more interested now that most Azure and AWS certs are career-focused.
What are your thoughts on this? What are your top five trends you saw that are worth noting?
If you're interested, I created a short 5-minute video on the topics above. Let me know your thoughts :) https://www.youtube.com/watch?v=E4nrPXoUV0w
https://redd.it/kjl1wx
@r_devops
How to create executors of different types in the Circle CI orb?
I've got an issue. I have a Circle CI orb that was created by my colleagues. As this orb is in active use, I cannot just change the executor, so I need to add a new executor of a different type. I posted a question on [Stack Overflow](https://stackoverflow.com/questions/65354545/how-can-i-create-several-executors-for-a-job-in-circle-ci-orb), but no success.
How can I adjust the job itself so that it will accept executors of different types? Please, see the example of a job I want to change below.
Executor:

    description: >
      The executor to run testcontainers without extra setup in Circle CI builds.
    parameters:
      # https://circleci.com/docs/2.0/configuration-reference/#resource_class
      resource-class:
        type: enum
        default: medium
        enum: [medium, large, xlarge, 2xlarge]
      tag:
        type: string
        default: ubuntu-2004:202010-01
    resource_class: <<parameters.resource-class>>
    machine:
      image: <<parameters.tag>>
Another executor is Docker-based.
Job:

    parameters:
      executor:
        type: executor
        default: openjdk
      resource-class:
        type: enum
        default: medium
        enum: [small, medium, medium+, large, xlarge]
    executor: << parameters.executor >>
    resource_class: << parameters.resource-class >>
    environment:
      # Customize the JVM maximum heap limit
      MAVEN_OPTS: -Xmx3200m
    steps:
      # Instead of checking out code, just grab it the way it is
      - attach_workspace:
          at: .
      # Guessing this is still necessary (we only attach the project folder)
      - configure-maven-settings
      - cloudwheel/fetch-and-update-maven-cache
      - run:
          name: "Deploy to Nexus without running tests"
          command: mvn clean deploy -DskipTests
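For what it's worth, a job parameter declared with `type: executor` already accepts any executor defined in the orb, machine- or Docker-based, so one possible shape (an untested sketch; the executor names and the `cimg/openjdk` image tag here are illustrative, not from the orb above) is to declare both executors side by side and pick one per workflow invocation:

```yaml
executors:
  testcontainers:            # machine-based, as in the snippet above
    machine:
      image: ubuntu-2004:202010-01
  openjdk:                   # Docker-based alternative
    docker:
      - image: cimg/openjdk:11.0

jobs:
  deploy:
    parameters:
      executor:
        type: executor
        default: openjdk
    executor: << parameters.executor >>
    steps:
      - run: echo "running on the selected executor"

workflows:
  build-and-deploy:
    jobs:
      - deploy               # uses the default (Docker) executor
      - deploy:
          name: deploy-machine
          executor: testcontainers
```

The caveat is that `resource_class` values differ between executor types (e.g. `medium+` is Docker-only), which is likely why the enum lists diverge in the original job.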
https://redd.it/kjni2z
@r_devops
Checkout a specific branch of Jenkins job configuration itself before triggering a build of that job
Hey people!
First, I'd like to explain the current way we are triggering Jenkins jobs, before asking the main question.
Our CD pipeline is represented by a set of parameterized jobs: some have 1-2 parameters, others have up to 25-30 parameters.
The whole pipeline is driven by a set of shell scripts that trigger Jenkins jobs in correct order with correct parameters, based on what the user wants to do.
We follow IaC practices, so our Jenkins build scripts are version controlled. For that, we have a `ci` repository with folders named after the job names, and inside those folders we have a `build_script` file.
All jobs have a `branch` parameter, and always check out the `ci` repo at master as the first step.
The `branch` parameter tells what branch the job's `build_script` needs to be taken from, so if that branch exists in `ci` repository, that branch is checked out.
Then the `$WORKSPACE/$JOB_NAME/build_script` file is executed as the only job's shell command.
This pattern allows us to work safely on Jenkins build scripts and replicates the usual development feature branch workflow, without affecting the master pipeline execution code.
While this works perfectly for us when the changes are in build scripts, it doesn't work when the job configuration itself needs to change. For example, Jenkins job parameters are not version-controlled, and you can't "check out" a specific version of a Jenkins job that has a different set of parameters while keeping the master parameters untouched.
This makes it hard to test Jenkins job configuration, and it requires introducing backward-compatible changes to the single master job configuration, which instantly affects the master pipeline workflow.
​
My question is: are you guys aware of any way to achieve this?
\- Maybe there are some Jenkins plugins that let you check out a Jenkins job's configuration before triggering builds of that job?
\- I'm not very familiar with Jenkins Pipelines, but maybe they allow doing something similar?
\- Are there any better CI solutions that can handle such a use case?
​
Thank you!
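On the second bullet: this is exactly the gap Jenkins Pipelines close. With a `Jenkinsfile` checked into the repository, the job definition, including its `parameters` block, is versioned alongside the build script, and a Multibranch Pipeline job builds each branch with that branch's own copy of the configuration. A minimal sketch (the parameter names and script path here are invented to match the pattern described above, not taken from the actual setup):

```groovy
// Jenkinsfile — lives in the ci repo, so the parameters below are
// version-controlled per branch; a Multibranch Pipeline job builds
// each branch with that branch's own copy of this file.
pipeline {
    agent any
    parameters {
        string(name: 'TARGET_ENV', defaultValue: 'staging',
               description: 'Environment to deploy to')
        booleanParam(name: 'DRY_RUN', defaultValue: true,
               description: 'Skip the actual deployment step')
    }
    stages {
        stage('Run build script') {
            steps {
                // Same pattern as today: run the per-job script, except
                // it is checked out from this branch automatically.
                sh "./${env.JOB_BASE_NAME}/build_script"
            }
        }
    }
}
```

One known wrinkle: a branch's parameter changes take effect only after Jenkins has run that branch once (the first build registers the new parameters), but that is still far safer than editing the single master job in place.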
https://redd.it/kjieqf
@r_devops