Does anyone know any video training that pertains specifically to Jenkinsfiles?
I've got some courses on Jenkins, but they only lightly touch on Jenkinsfiles. Is there a comprehensive training on Jenkinsfiles? I'm asking for video training because that's how I (and I think most people) learn fastest. Thank you :).
https://redd.it/fuk5uy
@r_devops
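For context on what such a course would cover: a minimal declarative Jenkinsfile looks like the sketch below. The stage names, the environment variable, and the `sh` commands are placeholders, not from any particular course.

```groovy
// Declarative pipeline syntax; lives in a file named "Jenkinsfile" at the repo root.
pipeline {
    agent any                      // run on any available executor
    environment {
        APP_VERSION = '1.0.0'      // illustrative variable
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo building version ${APP_VERSION}'
            }
        }
        stage('Test') {
            steps {
                sh 'echo running tests'
            }
        }
    }
    post {
        always {
            echo 'Pipeline finished.'  // runs regardless of build result
        }
    }
}
```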
Automating initial install of new server
Hey all,
This is a quarantine-driven exercise, part of a workflow that has been on my to-do list for quite a while. I'm a software dev by day, but I have two servers at home that I want to experiment with in this free time, and the name of the game is complete automation.
Given that this is an exercise rather than a quick fix, I'm looking to follow industry standards where possible. One exception: I am learning with servers at home, so "on-prem", whereas I know the industry is standardizing around cloud-native.
Now my question for you: pretending these two servers I have are brand-new servers at your job, how is the OS installation handled in an automated way? I can't imagine someone sitting there babysitting a new server's OS install. Or, as a devops person, is the OS installation taken care of before you get your hands on it?
With what I currently know, PXE booting the OS seems like the best (only?) option. From my limited reading, I assume this can be fully automated as long as the BIOS boot order is set to check for PXE/network boot. For my two servers this was enabled by default, so I will assume that is somewhat standard; correct me if I am wrong.
Another aspect I haven't yet had time to research is an initial-setup script of sorts, to be run right after the OS installation. Any insights here?
If I am wildly off base, please set me straight! I feel that this is a tricky part of the automation workflow to research, or my google-fu is off.
https://redd.it/fuirkl
@r_devops
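For what it's worth, the usual pattern here is PXE boot plus an unattended-install answer file (Kickstart on RHEL/CentOS, preseed on Debian, autoinstall on newer Ubuntu), and the "initial-setup script" part is typically the answer file's post-install section or a configuration-management run. A sketch of a PXELINUX menu entry that chains into a Kickstart file served over HTTP; all paths and the 192.0.2.10 server address are made-up placeholders:

```
# /var/lib/tftpboot/pxelinux.cfg/default  -- illustrative paths only
DEFAULT centos-auto
LABEL centos-auto
  KERNEL centos8/vmlinuz
  APPEND initrd=centos8/initrd.img inst.ks=http://192.0.2.10/ks/centos8.cfg
```

The %post section of the Kickstart file is then a natural place for the first-boot setup script, or for enrolling the machine into Ansible/Puppet/etc.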
Network automation, Ansible, and 2FA
Hey all,
So, I'm just getting started down the network automation path.
My current infrastructure is mostly Cisco IOS and Nexus, with some Juniper and Fortinet mixed in. In the future we hope to be moving mostly towards Cumulus + Fortinet.
Right now we have multiple authentication methods for network infrastructure. It's a bit disjointed, but it could be any of:
- TACACS --> Windows AD (1FA), restricted to 2FA-enforced (CAC) hosts and/or Guacamole (Duo)
- AD (LDAP) only (1FA), restricted to 2FA-enforced (CAC) hosts and/or Guacamole (Duo)
- RADIUS --> SecurID
- LDAP --> Duo --> LDAP to AD
- TACACS --> TACACS Server --> LDAP to Duo --> LDAP to AD
- Local
On top of this, we use separate AD accounts for our privileged access to network equipment. We log into our desktops and jumpboxes with our standard accounts (CAC) and then log into devices with our privileged accounts.
One thing that's on my docket is getting *everything* 2FA'd, in some capacity. If that means a mix of 2-3 solutions, so be it...
Has anybody done 2FA in such an environment? Did you just use a bastion host with some sort of key management and just push out public keys to everything via playbooks? I can't determine an easy way to handle this that wouldn't be a total culture shock to everyone. I don't see us just going 0 to 100 overnight with Ansible...but at the same time it doesn't seem Cisco supports any central management of SSH Keys, or using SSH Keys and LDAP/TACACS/RADIUS simultaneously.
https://redd.it/fufn7m
@r_devops
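One hedged sketch of the "push keys out via playbooks" idea, for the Cisco IOS portion of such an estate, using the `cisco.ios` collection. The inventory group, username, and key path are placeholders, and whether key auth can coexist with a given TACACS/RADIUS setup is exactly the open question in the post:

```yaml
# push_ssh_keys.yml -- illustrative only; assumes the cisco.ios collection is installed
- name: Push an SSH public key to IOS devices
  hosts: ios_switches          # hypothetical inventory group
  gather_facts: false
  tasks:
    - name: Configure a local user with an SSH public key
      cisco.ios.ios_user:
        name: netautomation
        sshkey: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"
        state: present
```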
Digital Ocean VPC setup
Hi y'all.
I need to replicate a multi-az VPC setup in AWS (with public and private subnets, NAT gateway) on Digital Ocean. How should I go about this?
Thanks!
https://redd.it/fucydg
@r_devops
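Rough mapping for anyone attempting this: DigitalOcean VPCs are regional (no multi-AZ concept), there is no managed NAT gateway, and "private subnet" usually means droplets with no public interface that egress through a droplet you configure as a NAT instance yourself. A CLI sketch; the names, region, and CIDR are placeholders, and the flags are worth double-checking against `doctl vpcs create --help`:

```shell
# Create a VPC network (regional, unlike an AWS multi-AZ VPC).
doctl vpcs create --name app-vpc --region nyc1 --ip-range 10.10.0.0/20

# Launch a droplet inside it; this one could be set up as a NAT instance
# (iptables MASQUERADE) for the "private" droplets in the same VPC.
doctl compute droplet create nat-gw --region nyc1 --size s-1vcpu-1gb \
  --image ubuntu-20-04-x64 --vpc-uuid <uuid-from-previous-command>
```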
Refreshing non-prod Windows MS SQL servers on-premise in a VMware setting
How would you define / process refreshing a non-production environment?
A while back we were getting rid of CommVault, which our DBAs had tied their refresh/restore process to, but due to timing we needed a refresh process without CommVault since our new backup solution was just starting up.
I was tasked to do this via storage snapshots using Pure Storage. The process works well. Basically, I create new datastore snapshots in Pure, translate them to VMware, and then attach them to the correct VM, hands-off. I am using Pure PowerShell, VMware PowerCLI, Windows PowerShell, and a JSON file that holds the mappings/translations of source drive to destination, all in a Git repo. This is all run in Jenkins with a drop-down selection for which environment you want to refresh. I can refresh any size server, whether it's 100 GB or 2 TB, in the same amount of time, which is about 20-30 minutes. A downside is I can only do one at a time because of the datastore scanning that happens in VMware.
I guess my question is: does this sound like a good approach to stick with and improve? Or should we be using our backup solution for automated refreshes? A downside to that, though, is it can take 8+ hours to restore the larger databases. On top of that, aside from restoring, the DBAs still need to scrub the data, which takes time.
https://redd.it/fum978
@r_devops
Do you experience devops fomo?
I am interested to hear whether people feel FOMO about the tools they think they should use but don't (because their company or their team does not).
This could be anything from Docker to things like Istio. Also whether people feel trapped learning technologies they deem obsolete (Chef gets a lot of hate recently) or just don't like at all.
I personally have no issue picking up new tech. I don't claim to be an expert in everything, but I can quickly adapt and learn new things, although most of them do not really excite me in any way anymore.
https://redd.it/fucj4b
@r_devops
Jenkins Pipeline Maven Project |Jenkins Upstream And Downstream Jobs
[https://www.youtube.com/watch?v=ctDJryQU7l4&feature=share](https://www.youtube.com/watch?v=ctDJryQU7l4&feature=share)
https://redd.it/fusdfq
@r_devops
Having different parts of websites on different instances - AWS
Hello,
I am getting into web dev and have a project where I am trying to serve the homepage of a site from one EC2 instance at [https://mysite.com/index.html](https://mysite.com/index.html), my blog from another EC2 instance under [https://mysite.com/blog](https://mysite.com/blog), and my projects from a third EC2 instance under [https://mysite.com/projects](https://mysite.com/projects).
I was wondering how I could do this, how I could make a hosted zone that puts multiple instances behind the same domain, how I could/should route between these instances, and whether I should actually do this at all.
Any advice would be greatly appreciated!
https://redd.it/fvgy6y
@r_devops
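The wrinkle here is that DNS alone can't do this: one hostname resolves to one set of addresses regardless of URL path. The standard answer is a reverse proxy (or an AWS Application Load Balancer with path-based listener rules) in front of the three instances. A hedged nginx sketch, with made-up private IPs for the instances:

```
# /etc/nginx/conf.d/mysite.conf -- illustrative; the IPs are placeholders
server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_pass http://10.0.1.10;   # homepage instance
    }
    location /blog/ {
        proxy_pass http://10.0.1.11;   # blog instance
    }
    location /projects/ {
        proxy_pass http://10.0.1.12;   # projects instance
    }
}
```

With an ALB doing the path routing instead, the Route 53 hosted zone only needs a single alias record pointing at the load balancer.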
Release management raw scripts
Hello,
I'm looking for a tool with a UI to track which version is deployed on integration, staging, test, and production.
I use Jenkins for CI and currently deploy manually. I want to use Jenkins for CD too, but it doesn't have a per-environment view of releases like GitLab CI does, for example.
Thanks a lot
https://redd.it/fvgmhg
@r_devops
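For reference, the GitLab CI feature being described is the `environment:` keyword, which populates GitLab's Environments page with the last deployment per environment. A minimal sketch; the job name, script, and URL are placeholders:

```yaml
# .gitlab-ci.yml fragment -- illustrative
deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging        # hypothetical deploy script
  environment:
    name: staging
    url: https://staging.example.com
```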
Does anyone still use Apache Solr?
Elasticsearch seems to have completely overtaken it, but is anyone still using it? Is it better than Elasticsearch for certain things?
https://redd.it/fvcn0j
@r_devops
Your own Kubernetes controller - Improving and deploying
In the first post of this series, we described the concept behind a Kubernetes controller. In short, it’s just a plain control loop that reconciles the desired state of the cluster with its current state. In the second post, we implemented a sidecar controller in Java. This third and last post will be focused on where to deploy this Java controller and how to improve it to be on par with a Go one with the help of GraalVM AOT compilation.
https://blog.frankel.ch/your-own-kubernetes-controller/3/
https://redd.it/fvggm1
@r_devops
Video Tutorial on how to configure an AWS Lambda function as a target for an Amazon Application Load Balancer (ALB)
In this video tutorial, I'll demonstrate how you can configure an AWS Lambda function as a target for an Amazon Application Load Balancer (ALB). This can be used to take advantage of the ALB's intelligent routing features when you have multiple functions in an application, or to add an SSL/TLS listener for the public-facing endpoint of your application. Watch the video here: [https://youtu.be/56a-wAeEl7E](https://youtu.be/56a-wAeEl7E)
For more details on AWS Lambda, check out the Free Cheat Sheets from digital cloud training: [https://digitalcloud.training/certification-training/aws-developer-associate/aws-compute/aws-lambda/](https://digitalcloud.training/certification-training/aws-developer-associate/aws-compute/aws-lambda/)
This video lesson is an excerpt from our comprehensive training course for the AWS Certified Developer Associate to be released within the next few days! This is a great time to get started with your next certification and make sure your skills are cutting edge. The AWS Certified Developer Associate certification sets you apart from the crowd in a competitive market. Get started now with the comprehensive training course for the AWS Certified Developer Associate from digital cloud training. To secure your special launch offer, simply register your interest here: [https://digitalcloud.training/aws-certified-developer-associate-exam-training](https://digitalcloud.training/aws-certified-developer-associate-exam-training)
https://redd.it/fvomdu
@r_devops
Pre-production deployment best practices
I'm curious about the accepted best practices surrounding deployment to pre-production environments.
We follow a microservice architecture where each microservice resides in its own git repository. Several cross-functional teams work independently on their own microservices. We have a production environment and a single pre-production environment that is shared among all teams. When a commit is made to the master branch, it will be deployed to the production and pre-production environments automatically. Sometimes developers want to deploy only to pre-production so that they and the POs can test it in a "real world scenario". To do this the developer has to change the `.gitlab-ci.yml` file on their branch so that it will deploy to pre-production and then later change it back before merging into master.
This approach feels kind of "wrong" and "manual" to me. In order to see what state the pre-production environment is in, you have to look at the pipelines and find which one was the last to deploy. It _can_ also easily happen that one developer unknowingly overwrites changes on pre-production that another developer was testing (though in practice this is rarely a problem). My first idea was to create a `staging` branch that deploys to pre-production automatically and represents the state of the pre-production environment, analogously to the `master` branch. A significant problem with this is that the `staging` branch has to be reset to `master` whenever there is a new commit to `master`, so that the two branches don't diverge.
How are you handling this? Do you see a problem with our approach too, or am I simply obsessing over details again?
On a related note: how do you handle database rollbacks on the pre-production environment? For example, a developer may test a migration on the pre-production environment that didn't work. How can they roll back to a previous database state to test it again?
https://redd.it/fvd5ns
@r_devops
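One way to avoid editing `.gitlab-ci.yml` per branch is a manual deploy job that appears in every branch's pipeline but only runs when someone triggers it, combined with `environment:` so GitLab itself records what is currently on pre-production. A sketch; the script path and environment name are placeholders:

```yaml
# .gitlab-ci.yml fragment -- illustrative
deploy_preprod:
  stage: deploy
  script:
    - ./deploy.sh preprod      # hypothetical deploy script
  when: manual                 # click-to-deploy from any branch's pipeline
  environment:
    name: pre-production       # the Environments page then shows the last
                               # deployed commit, answering "what state is
                               # pre-prod in?" without digging through pipelines
```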
Packer not able to build a CentOS 8 template on vmware
Hello, I'm running VMware vCenter 6.7 and Packer 1.5.5 on CentOS 8. I have successfully built a CentOS 7 template, but I'm struggling to build a template with CentOS 8. Here is my variables file (variables.json):
{
"vsphere_server": "192.168.0.51",
"vsphere_username": "[email protected]",
"vsphere_password": "password",
"vsphere_datacenter": "Datacenter",
"vsphere_datastore": "datastore",
"vsphere_folder": "Templates",
"vsphere_host": "host.domain.local",
"vsphere_network": "network1",
"vsphere_template_folder": "Templates",
"ssh_root_username": "root",
"ssh_root_password": "password",
"ssh_username": "admin",
"ssh_password": "password"
}
Here is my build template (centos8_buildtemplate.json):
{
"builders": [
{
"type": "vsphere-iso",
"vcenter_server": "{{user `vsphere_server`}}",
"username": "{{user `vsphere_username`}}",
"password": "{{user `vsphere_password`}}",
"insecure_connection": "true",
"datacenter": "{{user `vsphere_datacenter`}}",
"host": "{{user `vsphere_host`}}",
"network": "{{user `vsphere_network`}}",
"datastore": "{{user `vsphere_datastore`}}",
"vm_name": "T-CentOS8",
"notes": "Build via Packer",
"guest_os_type": "centos8_64Guest",
"boot_wait": "10s",
"boot_order": "disk,cdrom,floppy",
"ssh_username": "{{user `ssh_root_username`}}",
"ssh_password": "{{user `ssh_root_password`}}",
"CPUs": "1",
"RAM": "2048",
"RAM_reserve_all": false,
"disk_controller_type": "pvscsi",
"disk_size": "32768",
"disk_thin_provisioned": false,
"network_card": "vmxnet3",
"convert_to_template": true,
"folder": "{{user `vsphere_template_folder`}}",
"iso_paths": ["[datastore] ISO/Linux/CentOS-8.1.1911-x86_64-dvd1.iso"],
"floppy_files": ["centos8_kickstart.cfg"],
"boot_command": [
"<esc><wait>",
"linux ks=hd:fd0:/centos8_kickstart.cfg<enter>"
]
}
]
}
Here is my kickstart file (centos8_kickstart.cfg):
install
cdrom
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
rootpw password
firewall --disabled
selinux --permissive
timezone UTC
bootloader --location=mbr
text
skipx
zerombr
clearpart --all --initlabel
autopart
auth --enableshadow --passalgo=sha512 --kickstart
firstboot --disabled
eula --agreed
services --enabled=NetworkManager,sshd
user --name=admin --plaintext --password password --groups=wheel
reboot
%packages --ignoremissing --excludedocs
@Base
@Core
@Development Tools
openssh-clients
sudo
openssl-devel
readline-devel
zlib-devel
kernel-headers
kernel-devel
net-tools
vim
wget
curl
rsync
%end
%post
yum update -y
useradd admin
echo "password" | passwd admin --stdin
usermod -a -G wheel admin
# sudo
yum install -y sudo
echo "admin ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/admin
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
yum clean all
%end
Running packer
./packer build -var-file variables.json centos8_buildtemplate.json
Here is the packer output
vsphere-iso: output will be in this color.
==> vsphere-iso: Creating VM...
==> vsphere-iso: Customizing hardware...
==> vsphere-iso: Mounting ISO images...
==> vsphere-iso: Creating floppy disk...
vsphere-iso: Copying files flatly from floppy_files
vsphere-iso: Copying file: centos8_kickstart.cfg
vsphere-iso: Done copying files from floppy_files
vsphere-iso: Collecting paths from floppy_dirs
vsphere-iso: Resulting paths from
floppy_dirs : []
vsphere-iso: Done copying paths from floppy_dirs
==> vsphere-iso: Uploading created floppy image
==> vsphere-iso: Adding generated Floppy...
==> vsphere-iso: Set boot order...
==> vsphere-iso: Power on VM...
==> vsphere-iso: Waiting 10s for boot...
==> vsphere-iso: Typing boot command...
==> vsphere-iso: Waiting for IP...
And the console output (transcribed via OCR, so there may be typos):
boot: linux ks=hd:fd0:/centos8_kickstart.cfg
[    6.730445] dracut-pre-udev[500]: modprobe: FATAL: Module floppy not found in directory /lib/modules/4.18.0-147.el8.x86_64
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Paths.
[  OK  ] Started Forward Password Requests to Plymouth Directory Watch.
[    8.998201] sd 0:0:0:0: [sda] Assuming drive cache: write through
[  OK  ] Started udev Wait for Complete Device Initialization.
         Starting Device-Mapper Multipath Device Controller...
[  OK  ] Started Device-Mapper Multipath Device Controller.
         Starting Open-iSCSI...
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
         Starting Create Volatile Files and Directories...
[  OK  ] Started Open-iSCSI.
         Starting dracut initqueue hook...
[  OK  ] Started Create Volatile Files and Directories.
[  OK  ] Reached target System Initialization.
[  OK  ] Reached target Basic System.
[    9.530734] dracut-initqueue[962]: mount: /run/install/repo: WARNING: device write-protected, mounted read-only.
then
[ 193.337189] dracut-initqueue[962]: Warning: dracut-initqueue timeout - starting timeout scripts
[ 193.878185] dracut-initqueue[962]: Warning: dracut-initqueue timeout - starting timeout scripts
(the same warning repeats roughly twice per second until)
[ 204.667917] dracut-initqueue[962]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[962]: Warning: Could not boot.
Starting Setup Virtual Console...
[ OK ] Started Setup Virtual Console.
Starting Dracut Emergency Shell...
Generating "/run/initramfs/rdsosreport.txt"
Entering emergency mode. Exit the shell to continue. Type "journalctl" to view system logs. You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report.
dracut:/#
I have to kill the VM at that point.
The equivalent files for CentOS 7 work perfectly, though.
Has anyone built a working CentOS 8 template for VMware? What am I missing?
Thank you very much for your help.
https://redd.it/fvdtwy
@r_devops
To all cloud engineers: what database Skills do you use or need at work?
I have an interview coming up soon for an Operations Engineer role in the cloud. One of the job descriptions mentions "database management" skills. I checked other job descriptions for the same role and all they mention is "database skills required". I couldn't find anything that explains exactly what database skills cloud operations engineers actually need or use. Can you please provide examples? Do I need to learn querying? Installing SQL servers? Is it just database administration skills?
https://redd.it/fv61lj
@r_devops
Jenkins: How to automate CPU profile checks?
Hey,
What do you use for CPU profile check automation?
I'm a bit lost on how to get useful information from a CPU profile diff.
What I want is the following:
1. Start service from the branch
2. Replay a bunch of traffic
3. Collect profile
4. Repeat #1 #2 #3 for the master
5. Check profiles' diff
It's also not clear how to profile: at different moments in time the app is doing different work, not just processing requests (reloading or rebuilding something, for example).
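Step 5 above can be sketched in a few lines, assuming each collected profile has already been collapsed into per-function CPU seconds (the input format and the threshold are hypothetical, not anything Jenkins-specific):

```python
def diff_profiles(master, branch, threshold=0.10):
    """Compare two {function: cpu_seconds} profiles.

    Flags functions whose share of total CPU time grew by more than
    `threshold` (absolute) on the branch. Comparing shares rather than
    raw seconds normalizes away differences in how long each replay ran.
    """
    master_total = sum(master.values()) or 1.0
    branch_total = sum(branch.values()) or 1.0
    regressions = {}
    for func in set(master) | set(branch):
        before = master.get(func, 0.0) / master_total
        after = branch.get(func, 0.0) / branch_total
        if after - before > threshold:
            regressions[func] = (before, after)
    return regressions

master = {"handle_request": 8.0, "rebuild_cache": 2.0}
branch = {"handle_request": 8.0, "rebuild_cache": 6.0}
print(diff_profiles(master, branch))
# rebuild_cache grew from 20% to ~43% of CPU time, so it is flagged
```

Normalizing to shares also partly addresses the "app is doing different work at different moments" concern: background work like rebuilding shows up as a changed share only if it genuinely grew relative to everything else, though longer replays under identical traffic remain the more reliable fix.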
https://redd.it/fv4b9k
@r_devops
Free Ansible DevOps Books from Jeff Geerling
Available all month long, via LeanPub. Blog article covering the details: https://www.jeffgeerling.com/blog/2020/my-devops-books-are-free-april-thanks-device42
Happy reading!
https://redd.it/fvst2h
@r_devops
Regarding Github actions & DigitalOcean
I have a React project connected to a remote repo on GitHub. I also have GitHub Actions set up so that every time I push to my master branch, the Action schedules a job and my latest changes get deployed to my Linux server. However, I noticed that the npm ci step in my YAML file takes forever to complete, I'm guessing because it's installing all of the modules (react, react-dom, babel, etc.). So I decided to scrap the npm ci command, run no scripts at all, and just have the push apply changes, because I only care about the build output. Given that, in some cases I may just want a separate branch that only contains my build folder (dist), and have Actions run jobs every time that branch gets a push, where every push contains only dist changes (a new React build). I don't know if it's possible, but could you somehow maintain a branch that ONLY has specific files? Because if I make a new branch, I would have to make sure that branch does not have all the other files, like the src folder for React.
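One common workaround, sketched here with hypothetical branch and script names, is not to maintain such a branch by hand at all: let the workflow build on every push to master and force-push a fresh commit containing only the dist contents to a dedicated branch. The deployment target then pulls only that branch:

```yaml
name: deploy-dist
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '12'
      - run: npm ci
      - run: npm run build
      # Publish only the build output: create a fresh orphan repo inside
      # dist/ with a single commit and force-push it to the dist branch.
      - run: |
          cd dist
          git init
          git checkout -b dist
          git add -A
          git -c user.name=ci -c user.email=ci@example.com commit -m "build ${GITHUB_SHA}"
          git push --force "https://x-access-token:${{ secrets.GITHUB_TOKEN }}@github.com/${GITHUB_REPOSITORY}.git" dist
```

Since each push rewrites the dist branch with one commit, it never accumulates history or picks up src and the rest of the repo. If you keep npm ci on the build side, caching ~/.npm with actions/cache can also cut install time considerably.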
https://redd.it/fvqirv
@r_devops
Foreman vs Uyuni vs Spacewalk? what's the best free tool?
What's best for provisioning/config mgmt of Linux servers/workstations?
https://redd.it/fvlucn
@r_devops