Need suggestions
I'm getting an offer from a U.S. client (working remotely from India) as a developer (Android, iOS, React, React Native, Angular). I'm in my last semester of B.Tech and I was always interested in DevOps. Is it a good idea to start as a developer? I think it would be difficult to change tracks after having experience as a developer. Do give your thoughts (I hope I won't regret this opportunity, as a recession is coming).
https://redd.it/fw7mp3
@r_devops
Packer hangs while building an Ubuntu 18 template on VMware (vsphere-iso: Waiting for SSH to become available...)
Hi,
I'm trying to build a VMware image of Ubuntu 18 with Packer, but it keeps failing with:
**vsphere-iso: Waiting for SSH to become available...**
I'm running VMware vCenter 6.7 and Packer 1.5.5 on a CentOS 8 host.
I have built CentOS 7 and CentOS 8 templates successfully.
Here is my variables file (variables.json):

```json
{
  "vsphere_server": "192.168.0.51",
  "vsphere_username": "[email protected]",
  "vsphere_password": "password",
  "vsphere_datacenter": "Datacenter",
  "vsphere_datastore": "datastore",
  "vsphere_folder": "Templates",
  "vsphere_host": "host.domain.local",
  "vsphere_network": "network1",
  "vsphere_template_folder": "Templates",
  "ssh_root_username": "root",
  "ssh_root_password": "password",
  "ssh_username": "admin",
  "ssh_password": "password"
}
```
Here is my template file (ubuntu18_buildtemplate.json):

```json
{
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "{{user `vsphere_server`}}",
      "username": "{{user `vsphere_username`}}",
      "password": "{{user `vsphere_password`}}",
      "insecure_connection": "true",
      "vm_name": "T-ubuntu18",
      "datastore": "{{user `vsphere_datastore`}}",
      "folder": "{{user `vsphere_folder`}}",
      "host": "{{user `vsphere_host`}}",
      "convert_to_template": "true",
      "network": "{{user `vsphere_network`}}",
      "boot_order": "disk,cdrom",
      "guest_os_type": "ubuntu64Guest",
      "ssh_username": "{{user `ssh_username`}}",
      "ssh_password": "{{user `ssh_password`}}",
      "CPUs": 1,
      "RAM": 1024,
      "RAM_reserve_all": false,
      "disk_controller_type": "pvscsi",
      "disk_size": 32768,
      "disk_thin_provisioned": false,
      "network_card": "vmxnet3",
      "iso_paths": [
        "[datastore] ISO/Linux/ubuntu-18.04.4-live-server-amd64.iso"
      ],
      "floppy_files": [
        "./ubuntu18_kickstart.cfg"
      ],
      "boot_command": [
        "<enter><wait><f6><wait><esc><wait>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs>",
        "/install/vmlinuz",
        " initrd=/install/initrd.gz",
        " priority=critical",
        " locale=en_US",
        " file=/media/ubuntu18_kickstart.cfg",
        "<enter>"
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo 'Template build complete'"]
    }
  ]
}
```
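One knob worth ruling out first: if the unattended install legitimately takes longer than the SSH communicator's default wait (around five minutes), the build dies with exactly this timeout. Raising it is a one-line addition to the builder block above; a sketch, where the 30m value is an arbitrary assumption (on some Packer versions the option is spelled `ssh_wait_timeout`):

```json
{
  "ssh_username": "{{user `ssh_username`}}",
  "ssh_password": "{{user `ssh_password`}}",
  "ssh_timeout": "30m"
}
```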
Here is my kickstart file (ubuntu18_kickstart.cfg):

```text
### Base system installation
d-i base-installer/kernel/override-image string linux-server
## Options to set on the command line
d-i debian-installer/locale string en_US.utf8
d-i console-setup/ask_detect boolean false
d-i console-setup/layout string USA
#--------------------------------------------------------------------------------
# ACCOUNTS
#--------------------------------------------------------------------------------
d-i passwd/user-fullname string admin
d-i passwd/username string admin
d-i passwd/user-password password password
d-i passwd/user-password-again password password
d-i user-setup/allow-password-weak boolean true
d-i passwd/root-login boolean true
d-i passwd/root-password password password
d-i passwd/root-password-again password password
#--------------------------------------------------------------------------------
# Clock and time zone setup
#--------------------------------------------------------------------------------
#d-i clock-setup/utc boolean true
#d-i time/zone string UTC
#d-i time/zone string Europe/Paris
# Reboot after installation
reboot
# Use text mode install
text
# Install OS instead of upgrade
install
#--------------------------------------------------------------------------------
# NETWORK
#--------------------------------------------------------------------------------
# netcfg will choose an interface that has link if possible. This makes it
# skip displaying a list if there is more than one interface.
# netcfg/choose_interface=eth0 is set as the boot parameter.
#d-i netcfg/choose_interface select auto
# If you prefer to configure the network manually, uncomment this line and
# the static network configuration below.
#d-i netcfg/disable_autoconfig boolean true
# Disable that annoying WEP key dialog.
#d-i netcfg/wireless_wep string
# Static network configuration.
# IPv4 example
#d-i netcfg/get_ipaddress string xxx.96.102.139
#d-i netcfg/get_netmask string 255.255.255.192
#d-i netcfg/get_gateway string xxx.96.102.129
#d-i netcfg/get_nameservers string xxx.96.102.141
#d-i netcfg/confirm_static boolean true
# Any hostname and domain names assigned from dhcp take precedence over
# values set here. However, setting the values still prevents the questions
# from being shown, even if values come from dhcp.
# Hostname:
#netcfg netcfg/get_hostname string myhost
# Domain name:
#netcfg netcfg/get_domain string demo.local
# If you want to force a hostname, regardless of what either the DHCP
# server returns or what the reverse DNS entry for the IP is, uncomment
# and adjust the following line.
#d-i netcfg/hostname string myhost
#--------------------------------------------------------------------------------
# DISK PARTITIONING
#--------------------------------------------------------------------------------
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string regular
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
#--------------------------------------------------------------------------------
# EXTRA PACKAGES
#--------------------------------------------------------------------------------
d-i pkgsel/include string openssh-server open-vm-tools
#--------------------------------------------------------------------------------
# BOOT LOADER
#--------------------------------------------------------------------------------
d-i grub-installer/only_debian boolean true
#--------------------------------------------------------------------------------
# POST SCRIPTS
#--------------------------------------------------------------------------------
d-i preseed/late_command string \
  echo 'admin ALL=(ALL) NOPASSWD: ALL' > /target/etc/sudoers.d/admin ; \
  in-target chmod 440 /etc/sudoers.d/admin ; \
  systemctl enable ssh ; \
  systemctl start ssh ;
#--------------------------------------------------------------------------------
# FINISH INSTALL
#--------------------------------------------------------------------------------
d-i finish-install/reboot_in_progress note
```

Running Packer:

```shell
./packer build -var-file variables.json ubuntu18_buildtemplate.json
```
Here is the Packer output:

```text
vsphere-iso: output will be in this color.
==> vsphere-iso: Creating VM...
==> vsphere-iso: Customizing hardware...
==> vsphere-iso: Mounting ISO images...
==> vsphere-iso: Creating floppy disk...
    vsphere-iso: Copying files flatly from floppy_files
    vsphere-iso: Copying file: ./ubuntu18_kickstart.cfg
    vsphere-iso: Done copying files from floppy_files
    vsphere-iso: Collecting paths from floppy_dirs
    vsphere-iso: Resulting paths from floppy_dirs : []
    vsphere-iso: Done copying paths from floppy_dirs
==> vsphere-iso: Uploading created floppy image
==> vsphere-iso: Adding generated Floppy...
==> vsphere-iso: Set boot order...
==> vsphere-iso: Power on VM...
==> vsphere-iso: Waiting 10s for boot...
==> vsphere-iso: Typing boot command...
==> vsphere-iso: Waiting for IP...
==> vsphere-iso: IP address: 192.168.0.22
==> vsphere-iso: Using ssh communicator to connect: 192.168.0.22
==> vsphere-iso: Waiting for SSH to become available...
==> vsphere-iso: Timeout waiting for SSH.
==> vsphere-iso: Power off VM...
==> vsphere-iso: Deleting Floppy image ...
==> vsphere-iso: Destroying VM...
Build 'vsphere-iso' errored: Timeout waiting for SSH.
```
I don't know why it hangs on the SSH connection.
Also, is there a way to generate a proper kickstart file, like on CentOS?
Do you have any clue?
Thank you very much for your help.
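One way to narrow down this kind of hang (a generic diagnostic, not specific to vsphere-iso): while the build sits at "Waiting for SSH", check from the Packer host whether the guest is listening on port 22 at all. If it never opens the port, the preseed most likely never ran and the installer is stuck at an interactive prompt on the console. A minimal probe, with the IP from the log above as a placeholder:

```python
import socket

def ssh_port_open(host: str, port: int = 22, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection handles resolution, connect, and cleanup for us.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, unreachable, or timed out: nothing is answering on the port.
        return False

# Example: ssh_port_open("192.168.0.22")
```

If this returns True while Packer still times out, the problem is more likely credentials (the preseed's admin user versus `ssh_username`/`ssh_password`) than networking; opening the VM console in vCenter during the build shows which case you are in.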
https://redd.it/fw67s6
@r_devops
AWS Health Check Pricing
According to [Route53 pricing](https://aws.amazon.com/route53/pricing/#Health_Checks), health checks are charged at $0.50 per health check per month with a cap of 200 instances.
Do the health checks mentioned there apply only to manually created health checks (e.g. for a self-managed EC2 instance), or also to automatically created health checks for containers? I've seen documentation stating that health checks for some AWS-managed resources are free of charge (e.g. Lambdas, S3 buckets), but I cannot find an exhaustive list of those resources.
Specifically, I am trying to validate that health checks for containers spun up by ECS fall into the free-of-charge bucket. I have to imagine this is the case; otherwise the cost and the 200-check limit would be incredibly prohibitive.
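For what it's worth, the worst case implied by that page is easy to put numbers on (the $0.50 rate and 200-check cap are from the pricing link above; treating every container as one billed check is the hypothetical being tested):

```python
BASIC_RATE_USD = 0.50  # per health check per month (rate for AWS endpoints)
DEFAULT_LIMIT = 200    # cap on health checks cited on the pricing page

def monthly_cost(num_checks: int) -> float:
    """USD per month if each check were billed at the basic rate."""
    if num_checks > DEFAULT_LIMIT:
        raise ValueError(f"over the limit of {DEFAULT_LIMIT} health checks")
    return num_checks * BASIC_RATE_USD

# Even at the cap, monthly_cost(200) is only $100/month -- so it is the
# 200-check limit, not the price, that would make per-container checks
# unworkable for an autoscaling ECS service if they were billed this way.
```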
https://redd.it/fw4hn0
@r_devops
Issue tracking and sprint planning - open source
My company has been producing enterprise products for the last couple of years, and we have done all our sprint planning, issue tracking, etc. via JIRA. We are going to release a public, open-source version of our product soon, and I'm looking for good ways to manage publicly submitted issues. We could have them submitted to JIRA as well, but I like GitHub's issue tracker for this, since I have a preference for it and think a lot of others do as well. But if we use GitHub for public issues, we still need to manage our internal tasks and sprints, which we currently do in JIRA.
Does anyone have experience using both of these together?
If so, is it cumbersome, or easy to follow and keep things organized?
Does anyone have a better solution they are currently taking advantage of for this use case?
Thanks
https://redd.it/fw53eg
@r_devops
If switching from Lubuntu to Debian, what will I gain and what will I miss?
I am currently running Lubuntu 18.04, and now Lubuntu 20.04 is coming with LXDE replaced with LXQT and other changes that I am not familiar with.
I am wondering if switching to Debian is a good idea.
- If switching from Lubuntu to Debian, what will I gain and what will I miss?
- Which Debian + which desktop + stable/test/... installation is great?
I don't have a clear idea. The following is some considerations which might be relevant:
1. Lightweight. The desktop environment in particular should provide convenience but not much more than necessary. I am fine on the command line, but sometimes need GUI software as a good alternative. That was the reason I switched from Ubuntu (GNOME) to Lubuntu (LXDE) a few years ago.
2. Support for repositories and package management. Besides regular OS functionality, I am learning:
- data engineering technologies (Apache Spark, Kafka, Hadoop, ...) and data analysis (R packages, scikit-learn, TensorFlow, ...),
- web services/applications (Flask, ASP.NET Core),
- programming languages and virtual machines (JVM, Java, Scala, .NET Core, C#, Python, R, ...),
- virtual machines (VirtualBox, KVM, ...), containers (Docker), and compatibility layers (Wine, which is essential to me since I run some Windows programs through it).
Thanks.
https://redd.it/fvz00l
@r_devops
Deploying a .war application on a tomcat server
I am trying to deploy my hello.war file (a Java application) on my Tomcat server.
At first I do it from the "Manager App" on Tomcat's default page, and it then shows up in the Applications section.
But when I try to connect to it by clicking on that link ([https://ip-address/hello](https://ip-address/hello)), it gives me a standard "HTTP Status 404 – Not Found" with the description: "The origin server did not find a current representation for the target resource or is not willing to disclose that one exists."
I even tried putting the hello.war file manually into the appropriate folder on the server (/opt/tomcat/apache-tomcat-9.0.33/webapps), adding read and execute permissions for 'others' on the .war file, making the 'tomcat' user the owner of the file, and restarting the service. But nothing seems to help and I still get that 404.
https://redd.it/fvuld3
@r_devops
How to show "Prod" tag next to current deployed commit?
Hello,
I was wondering if anyone has used a tool that integrates with your deployment pipeline to show a tag next to the currently deployed commit when looking at your repo?
[A mockup of what I mean](https://familiarcycle.net/assets/images/azure-devops-repo-mockup.png)
Many services will show a green checkmark next to a commit once it has finished going through the CI/CD pipeline, but I find that style of UI lacking when there are multiple environments, such as Test, Staging, and Production.
https://redd.it/fvohtr
@r_devops
Is there any way to launch commands automatically on each created node when autoscaling in an AWS EKS cluster?
I have set up an AWS EKS cluster, and I can run commands on each node manually, but with autoscaling it is impractical to run those commands yourself on each created node. So is there any way to automatically execute some initialization commands on each newly created node, just before any pod is scheduled on it?
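The standard hook for this (nothing EKS-specific required) is the node group's EC2 user data: it runs once on each new instance at first boot, and anything placed before the EKS bootstrap call finishes before the kubelet registers the node, so no pod can be scheduled until your commands have completed. A sketch for a self-managed node group on the EKS-optimized AMI; the cluster name and the initialization commands are placeholders:

```shell
#!/bin/bash
# EC2 user data: runs once per instance at first boot.
set -euxo pipefail

# Per-node initialization goes here (agents, sysctls, mounts, ...).
echo "node init done" >> /var/log/node-init.log

# Hand off to the standard EKS bootstrap script; the node only joins the
# cluster (and becomes schedulable) after this succeeds.
/etc/eks/bootstrap.sh my-cluster-name
```

Managed node groups expose the same mechanism through the user-data field of a launch template.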
https://redd.it/fuwf4n
@r_devops
The *very first* connection using Ansible
I read lots of "first five minutes/first ten minutes" articles about using Ansible to set up and manage a server. But all of them seem to skip over the initial starting point. Here's the setup:
- I have a brand new VPS from some hosting company. (I'm partial to Ubuntu, but I suspect it doesn't matter.) They tell me the root password, and sure enough, I can SSH in.
- There's no .ssh directory, or any public key stuff installed on the target host.
- My end goal is to have an Ansible script (or set of scripts) that can take that totally-unconfigured server (with root password) and set up a non-root account, block root login, etc.
Any pointers to a single script or set of scripts? Many thanks.
**Update:** I realize I didn't ask the right question... I really want to know:
Do people use two different playbooks: one for the "brand new VPS" that only has root SSH access (and that creates a non-root account, turns off root login, etc) and a second playbook that gets run anytime the config needs to be tweaked?
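On the updated question: yes, a common pattern is exactly two entry points: a one-shot bootstrap play run as root with the password (`-u root -k`), and the regular playbook that thereafter connects as the unprivileged user with a key. A sketch of the bootstrap side; the user name, key path, and task details are illustrative assumptions, not a tested playbook:

```yaml
# bootstrap.yml -- run once against a fresh VPS:
#   ansible-playbook -i 'vps.example.com,' -u root -k bootstrap.yml
- hosts: all
  tasks:
    - name: Create a non-root admin user with sudo
      user:
        name: deploy
        groups: sudo
        append: yes
        shell: /bin/bash

    - name: Authorize our SSH public key for that user
      authorized_key:
        user: deploy
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Disable root login over SSH
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart ssh

  handlers:
    - name: restart ssh
      service:
        name: ssh
        state: restarted
```

After this runs once, the day-to-day playbook uses `remote_user: deploy` with `become: yes`, and the bootstrap play is simply never run again.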
https://redd.it/fuulkj
@r_devops
Prometheus for monitoring: what features do you think it lacks compared to APM tools?
I am sure many folks here use Prometheus/Grafana for monitoring infrastructure and applications. What features do you find that Prometheus/Grafana lacks compared to a full-fledged APM like Datadog?
Some features I can think of are reporting (like generating PDF reports to send to the team or management) and visualization of the overall infrastructure, in terms of server maps and the microservices on them.
What other such features do you find lacking in Prometheus/Grafana compared to paid solutions?
https://redd.it/futqal
@r_devops
Should an authentication service have its own load balancer or sit behind the API Gateway?
I'm designing my first microservice architecture and I'm starting to understand how all of the pieces fit together. One thing that is confusing me a little is where in the network topology the authentication service should sit. It's not a "regular" microservice (or is it?) so I'm not sure if it should be treated the same as the others.
I'm planning to have the API gateway at `api.example.com` and the authentication service at `identity.example.com`, each with their own load balancer. All microservices and the authentication service will be in a private subnet.
What I like about this is that users can log in at a nice URL: `identity.example.com`.
The alternative is to put the authentication service behind the API gateway, accessible at `api.example.com/identity`. This is not a very elegant URL for users logging in.
[Architecture diagram for context](https://imgur.com/gg26CaW.png)
**What are the pros and cons to either approach?**
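To make the second option concrete, here is a hedged nginx-style sketch of the gateway-path approach — the upstream names and ports are invented, not from the post:

```nginx
# api.example.com — a single public entry point in front of the private subnet
server {
    listen 443 ssl;
    server_name api.example.com;

    # Auth service behind the gateway path
    location /identity/ {
        proxy_pass http://auth-service.internal:8080/;
    }

    # Everything else goes to the regular microservices
    location / {
        proxy_pass http://gateway.internal:8080;
    }
}
```

The trade-off in one line: a separate `identity.example.com` load balancer buys a nicer login URL and independent scaling, at the cost of a second public entry point to secure and pay for.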
https://redd.it/futhi2
@r_devops
help finding a tool
I'm struggling to find a tool (that I assume exists), I think mostly because I can't find the right terms to Google.
What I'm looking for is a lean tool where I can manage all the configs/settings/variables that get used across different deployments of a project. For example, I have a 3rd-party API I'm using: for local development and the dev server, I want it to be https://dev.test.com/api/, and for staging and production I want it to be https://test.com/api. Furthermore, I want this variable available in Kubernetes YAML files, Dockerfiles, Python config files, and possibly other places.
Is there a simple tool that will allow me to keep one or two top level config files, then inject those variables into all the other places I need?
I think something like Terraform would work, but is also overkill to integrate just to get this one piece of functionality.
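Absent a dedicated tool, the core of this can be approximated with a few lines of templating — a minimal sketch using Python's stdlib `string.Template`, where the environment names and the `API_URL` variable are illustrative:

```python
from string import Template

# One top-level config per environment; in practice this could live in a YAML/JSON file.
ENVIRONMENTS = {
    "dev": {"API_URL": "https://dev.test.com/api/"},
    "prod": {"API_URL": "https://test.com/api"},
}

def render(template_text: str, env: str) -> str:
    """Substitute $VAR placeholders in a template (k8s YAML, Dockerfile, ...)."""
    return Template(template_text).substitute(ENVIRONMENTS[env])

# The same template renders differently per deployment target:
tmpl = "api_url: $API_URL"
print(render(tmpl, "dev"))   # api_url: https://dev.test.com/api/
print(render(tmpl, "prod"))  # api_url: https://test.com/api
```

Run over each templated file at deploy time, this gives the "one top-level config, many consumers" behavior the post asks for without pulling in Terraform.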
https://redd.it/fwpbzx
@r_devops
How do you manage your DNS infrastructure?
How do you guys do it? How do you handle your Linux infrastructure / Active Directory / Kubernetes clusters / multi-cloud deployments from a DNS perspective?
How do you do autodiscovery? How do you link all of them together and avoid managing DNS manually?
https://redd.it/fured0
@r_devops
Can GitLab CI be used instead of Ansible?
As I was studying GitLab, I saw they claim that GitLab is a complete tool package for DevOps.
So I want to know: should I invest my time in learning Ansible, or will knowing GitLab well be enough?
Essentially, I want to know whether GitLab can do what Ansible is built for.
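For what it's worth, the two tools solve different problems: GitLab CI orchestrates pipelines, while Ansible configures hosts. A common pattern is to have one call the other — a hedged sketch, where the image tag, inventory path, and playbook name are illustrative:

```yaml
# .gitlab-ci.yml — CI triggers the run; Ansible still does the host configuration
deploy:
  stage: deploy
  image: willhallonline/ansible:latest   # any image with ansible installed works
  script:
    - ansible-playbook -i inventory/production site.yml
  only:
    - master
```

So GitLab CI replaces Jenkins-style orchestration, not Ansible-style configuration management; learning both is not wasted effort.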
https://redd.it/fwrgzo
@r_devops
Restrict a particular state from accessing my website?
Hi,
How can I restrict a particular state, for example New York, from accessing my website?
It's hosted on AWS right now.
Any suggestions?
https://redd.it/fuqelp
@r_devops
Automating deployments to multiple industrial factories (VPN, no cloud, no continuous connection)?
Hi, my team develops a service that is deployed to servers hosted in industrial factories: 1 instance of our app per factory. While we have good practices in the development phase (CI on Azure DevOps with a lot of automation), we're only dipping our toes into automatic deployments to production.
For those with similar experience, would you recommend any of the tools specialized in this task (Chef, Puppet, Ansible, SaltStack), or should we stick to custom scripts orchestrated by Azure DevOps pipelines?
Here's where we're at, at the moment:
* **keeping track of what's been deployed**, where and when: we have a poorly maintained spreadsheet. Knowing what's in production would greatly help us make decisions that could impact our clients
* **heterogeneous infrastructure**: we do not own the infrastructure. All factories have a VPN, but with different sets of rules, bandwidth throttling, connection schedule restrictions, user accounts, server OSes, etc. Today, we connect to the remote machines and manually deploy our .msi installers. Automating this process would mean turning the constraints of each factory into code.
* **configuration management**: the service we develop has a lot of configuration. Each factory needs a different config. Today, there's no track record of the configurations applied. We have to connect to the servers and read the configuration files. Migrating this config during a version update is a pain in the ass. It is 100% manual at the moment.
* **monitoring**: it's too soon to even think about this. It would require a constant connection or very frequent connections. Most of our clients are too security-wary to allow that. We need to build a better trust relationship with them before we can consider monitoring.
Thanks for your help!
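On the "heterogeneous infrastructure" point, configuration-management tools typically encode per-site differences as inventory data rather than as scripts — a hedged Ansible-style sketch, where the factory names and variables are invented for illustration:

```yaml
# inventory/factories.yml — one group per factory, each site's constraints as variables
all:
  children:
    factory_lyon:
      hosts:
        srv-lyon-01:
      vars:
        ansible_user: deploy_lyon
        deploy_window: "22:00-04:00"
    factory_metz:
      hosts:
        srv-metz-01:
      vars:
        ansible_user: svc_deploy
        deploy_window: "01:00-05:00"
```

One playbook then deploys the .msi everywhere, and the inventory itself doubles as the "what's deployed where" record the spreadsheet is failing to be.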
https://redd.it/fuq9nn
@r_devops
What tools can be used to enforce code styles in Terraform?
For example, we're trying to standardize on using snake_case for resource names. What tool would be good for this? I looked at [tflint](https://github.com/terraform-linters/tflint) but it doesn't seem to provide that functionality.
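As a stopgap, this particular check is easy to script outside the linter — a hedged sketch that assumes HCL's `resource "TYPE" "NAME"` syntax and flags name labels that aren't snake_case:

```python
import re

# Matches: resource "aws_instance" "myName" { ... } and captures the name label.
RESOURCE_RE = re.compile(r'resource\s+"[^"]+"\s+"([^"]+)"')
SNAKE_CASE_RE = re.compile(r"^[a-z][a-z0-9_]*$")

def find_violations(hcl_text: str) -> list[str]:
    """Return resource name labels that are not snake_case."""
    return [name for name in RESOURCE_RE.findall(hcl_text)
            if not SNAKE_CASE_RE.match(name)]

# Example: flag a camelCase resource name in a Terraform file's text.
print(find_violations('resource "aws_instance" "webServer" {}'))  # ['webServer']
```

In CI, glob `**/*.tf`, run `find_violations` over each file, and fail the build if anything comes back; a regex check like this won't understand HCL comments or heredocs, but it covers the common case.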
https://redd.it/fwrmd0
@r_devops
NEW AWS Certified Developer Associate course launch
Hi Redditors,
We're super excited to announce our brand new [video training course on Udemy](https://www.udemy.com/course/aws-certified-developer-associate-exam-training/?couponCode=LAUNCHSPECIAL) for the **AWS Certified Developer Associate certification**. This is the result of over 3 months of hard work and is the most comprehensive and up-to-date course for the AWS Developer Associate certification available today.
Even if you don't have any development experience, this course will prepare you to ace your exam. **Everything you need is included** to make passing this difficult exam easy for you.
The course includes:
• **25+ hrs of in-depth theory and hands-on labs**
**• 3+ hrs of exam-cram lectures**
**• 110+ quiz questions**
**• 65 exam-difficulty practice exam questions that are timed and scored**
**• 600+ slides (available for download)**
**• Exam-specific Cheat Sheets for every topic (online)**
**• Code snippets for hands-on labs are all provided**
Throughout the course, you'll learn through multiple methodologies including theory, visual and guided practical exercises. This will help you to develop deep knowledge and a strong experience-based skillset.
These are uncertain and challenging times at the moment. Now more than ever, it's essential to make sure you're preparing for your future and making sure you're ready for new opportunities.
The AWS Certified Developer Associate certification is a great addition to your resume and definitely a differentiator that can set you apart from the competition. Take action now and sign up to become an AWS Cloud Developer!
Secure your special launch offer and get this course for $9.99 only with [coupon code LAUNCHSPECIAL](https://www.udemy.com/course/aws-certified-developer-associate-exam-training/?couponCode=LAUNCHSPECIAL) (offer valid until April 12 2020). After April 12 you can use [coupon code UPSKILLNOW](https://www.udemy.com/course/aws-certified-developer-associate-exam-training/?couponCode=UPSKILLNOW)
HAPPY LEARNING,
Neal
https://redd.it/fwvusn
@r_devops
Using CICD to checkout a file from a repo, modify it and commit it back to that repo?
I have never worked with CI/CD; I am wondering if I can use it to automate my screen-scraping and static-site-generating process. Here's my proposed process flow in GitLab:
1. Job runs @ 22:00 daily which triggers a screen scraping script in *ScreenScrapingRepo*
2. Screen scraping script reads small amount of data and temporarily stores it
3. On completion, the job checks out a JSON file in another repository (*WebsiteRepo*)
4. That JSON file is modified to include the new data for today
5. File is committed back to repo
6. *ScreenScrapingRepo*'s job is now finished
7. *WebsiteRepo* has a build and publish job which triggers on commit
8. Because its data.json file has been modified, this build now includes today's data
The reason I need to write to a JSON file in *WebsiteRepo* is because I am using a static site generator which takes a JSON file as its data source.
What I am unsure about is task #2 - where to store the data for that day before writing it to the other repo.
Is this a bad idea? Is there a better way to do this?
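Steps 3-5 are a common CI pattern (clone the other repo with an access token, edit, commit, push). The JSON-editing step itself might look like this hedged sketch, which assumes `data.json` holds a list of daily records — the file layout and function name are invented:

```python
import json
from datetime import date
from pathlib import Path

def append_days_data(repo_path: str, new_record: dict) -> None:
    """Append today's scraped record to WebsiteRepo's data.json (assumed to be a list)."""
    data_file = Path(repo_path) / "data.json"
    records = json.loads(data_file.read_text()) if data_file.exists() else []
    records.append({"date": date.today().isoformat(), **new_record})
    data_file.write_text(json.dumps(records, indent=2))

# In CI this runs between `git clone` and `git commit -am "daily data" && git push`.
```

As for task #2, the scraped data usually only needs to live in the job's workspace (or a short-lived CI artifact) long enough to be written into the clone; no separate storage service is required.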
https://redd.it/fwufcv
@r_devops