What’s the full list of moving parts needed to build a real financial exchange from scratch?
I’m not talking about a simple trading app. I mean a proper exchange in the league of NYSE, MCX, or LME: electronic, possibly with physical settlement, and able to actually function in the real world.
If someone wanted to create one from the ground up, what exactly would need to be in place? I’m trying to get my head around the entire picture:
Core technology stack and matching engine design
Clearing and settlement systems
Regulatory licensing and jurisdictional differences
Membership structures, listing requirements, and onboarding
Market-making and liquidity provision
Risk management and surveillance systems
Connectivity to participants and data vendors
Physical delivery and warehousing
I’m especially interested in the less obvious operational and legal layers people tend to underestimate. If you’ve ever been involved in building, running, or integrating with an exchange, I’d really value a detailed breakdown from your perspective.
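To make the matching-engine bullet concrete, here is a minimal sketch of a price-time priority limit-order book in Python. Everything here (class and method names, tuple shapes) is illustrative; a production engine adds order types, risk checks, persistence, and deterministic replay. This only shows the core crossing logic.

```python
import heapq
from collections import namedtuple

# Illustrative price-time priority book: best price first, oldest order
# first within a price level. `seq` is the arrival order (time priority).
Order = namedtuple("Order", "seq side price qty")

class OrderBook:
    def __init__(self):
        self.bids = []  # heap of ((-price, seq), Order): max price first
        self.asks = []  # heap of ((price, seq), Order): min price first
        self.seq = 0

    def submit(self, side, price, qty):
        """Cross against the opposite side, then rest any remainder."""
        self.seq += 1
        book, opp = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        fills = []
        while qty > 0 and opp:
            key, best = opp[0]
            crosses = price >= best.price if side == "buy" else price <= best.price
            if not crosses:
                break
            traded = min(qty, best.qty)
            fills.append((best.price, traded))  # trade at the resting order's price
            qty -= traded
            if traded == best.qty:
                heapq.heappop(opp)
            else:
                # partial fill: shrink the resting order in place (key unchanged)
                opp[0] = (key, best._replace(qty=best.qty - traded))
        if qty > 0:  # rest the remainder on the book
            key = (-price, self.seq) if side == "buy" else (price, self.seq)
            heapq.heappush(book, (key, Order(self.seq, side, price, qty)))
        return fills
```

Real venues wrap this core in sequenced input/output gateways so that every participant can replay the exact same event stream, which is where most of the engineering effort goes.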
https://redd.it/1mn3er2
@r_devops
Could my lack of Kubernetes knowledge be what’s stopping me from working in DevOps?
Long story short, I’ve been working in IT for the past 3-4 years, mostly in infrastructure and support. The main reason I started in support was just to get my foot in the door in IT; I’ve always wanted to move into DevOps, since that’s what I enjoy. I have the SAA-C03 certification, but it doesn’t carry much weight since most of my AWS experience comes from hobby projects (just using EC2, S3, VPCs, and Lambda). The only "proper" CI/CD experience I’ve had was building a pipeline for a B2B e-commerce project at my previous job, using GitHub Actions and Docker for the front-end and back-end.
https://redd.it/1mn6dy1
@r_devops
Splash: Transform hard-to-grok plain text into beautiful color-coded logs
Splash (https://github.com/joshi4/splash) is a CLI that automatically transforms logs of various formats into easy-to-scan, color-coded logs.
I built Splash after I found myself squinting and leaning in to the screen every time I needed to find a specific log line. It's so frustrating to realize you've been looking for something that was there in the logs all along but you never saw it.
Other times, I'd add "ASDF" or "####" to certain log lines and use the terminal's search to find those lines. To solve this, I built string and regexp matching into Splash.
Hope people here find it useful! Always happy to get feature requests.
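For readers curious what the string/regexp matching boils down to, here is a toy sketch of the idea in Python. This is not Splash's actual implementation, just an illustration of regexp-driven ANSI highlighting:

```python
import re

# ANSI escape codes: red text, and reset back to the default style.
RED, RESET = "\x1b[31m", "\x1b[0m"

def highlight(line, pattern, color=RED):
    """Wrap every match of `pattern` in ANSI color codes so it jumps
    out when scanning a stream of plain-text logs."""
    return re.sub(pattern, lambda m: f"{color}{m.group(0)}{RESET}", line)

# e.g. feed stdin through highlight(line, r"ERROR|WARN") to get the
# squint-free version of grepping by eye.
```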
https://redd.it/1mn69ri
@r_devops
We recently released Zellij 0.43: bringing your terminal to the browser - would love to hear your thoughts
Hi all,
I am the lead maintainer of Zellij* and last week we released a significant version that includes the ability to share existing terminal sessions in the browser, as well as start new ones or resurrect exited ones. This ability includes built-in security measures such as authentication and enforcing HTTPS on external interfaces.
Personally, I am a terminal developer and use the web terminal as my daily driver. Many others use it to securely access their machine remotely. I would be curious to hear feedback from the DevOps community: would you find this useful for other things (e.g. in-place SSH key exchanges and the like)? If not, what would you be missing?
If you'd like to read more, the announcement is here: https://zellij.dev/news/web-client-multiple-pane-actions/
And I made a brief screencast/tutorial demonstrating the feature as well as how it integrates with native browser features such as bookmarks: https://zellij.dev/tutorials/web-client/
Curious to hear your thoughts.
*Zellij is a terminal workspace and multiplexer - in case you don't know it, there's more info here: https://zellij.dev/about/
https://redd.it/1mn63k0
@r_devops
How do you deal with GPU shortages or scheduling?
Feels like every AI project I’m on turns into “The Hunger Games” for GPUs.
* Either they’re all booked
* Or sitting idle somewhere I can’t use them
* Or I’m stuck juggling AWS/GCP/on-prem like a madman
How are you all handling this? Do you have some magic scheduler, or is it just Slack messages and crossed fingers?
Would love to hear your war stories.
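For what it's worth, the "magic scheduler" many teams end up with is just a lease queue in front of a fixed pool. A toy in-process sketch (names and structure are mine, not any particular product; real setups use Slurm, Kubernetes device plugins, or a booking service):

```python
import threading
from contextlib import contextmanager

class GpuPool:
    """Toy GPU lease: a counting semaphore guards N identical GPUs and a
    free list tracks which device index you were handed. Blocks instead
    of letting two jobs land on the same card."""

    def __init__(self, n):
        self._sem = threading.Semaphore(n)
        self._free = list(range(n))
        self._lock = threading.Lock()

    @contextmanager
    def lease(self):
        self._sem.acquire()          # block until some GPU is free
        with self._lock:
            gpu = self._free.pop()
        try:
            yield gpu                # caller pins its work to this index
        finally:
            with self._lock:
                self._free.append(gpu)
            self._sem.release()
```

The same shape, moved behind an HTTP API with a TTL on each lease, is essentially what stops the "booked but idle" failure mode.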
https://redd.it/1mn8jvu
@r_devops
How are you tracking engineering performance metrics in real time?
We’ve been syncing PR review times, cycle time, and throughput into a central dashboard (ours lives in monday dev), but I’m curious what pipelines others have built. Do you pull data from GitHub Actions or custom scripts? How do you avoid drowning in too many numbers while still spotting real bottlenecks?
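As a sketch of the metric side (not any particular dashboard's implementation): given PR records shaped like GitHub's REST API response from `GET /repos/{owner}/{repo}/pulls?state=closed`, cycle time per PR is just `merged_at` minus `created_at`. The fetching and storage are assumed here; only the computation is shown.

```python
from datetime import datetime, timezone
from statistics import median

def _ts(s):
    # GitHub timestamps look like "2024-01-01T12:00:00Z" (UTC).
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def median_cycle_time_hours(prs):
    """Median hours from PR creation to merge, ignoring unmerged PRs.
    A median is less noisy than a mean when one PR sits open for weeks."""
    hours = [
        (_ts(pr["merged_at"]) - _ts(pr["created_at"])).total_seconds() / 3600
        for pr in prs
        if pr.get("merged_at")  # PRs closed without merging have merged_at = null
    ]
    return median(hours) if hours else None
```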
https://redd.it/1mn9ixr
@r_devops
Need recommendations for database archival and purging
Looking for an open-source solution to archive and purge old data in GCP Cloud SQL:
* Incrementally archive table data older than 3 months into Google Cloud Storage (GCS).
* After archiving, automatically purge the archived records from the database.
Ideally, I'd like something that supports incremental runs (so it doesn't reprocess already-archived data) and can be scheduled or automated.
Has anyone implemented something similar or can recommend a tool for this?
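Not a ready-made tool, but the archive-then-purge loop itself is small. A sketch using sqlite3 as a stand-in for Cloud SQL (the table name, columns, and watermark scheme are illustrative; a real job would use the Postgres/MySQL driver and upload each batch to GCS with the google-cloud-storage client before deleting anything):

```python
import json
import sqlite3

def archive_batch(conn, cutoff, last_id, batch_size=1000):
    """One incremental run: read rows older than `cutoff` with id above the
    `last_id` watermark, serialize them, then delete exactly those rows.
    Returns (new_watermark, serialized_batch_or_None)."""
    rows = conn.execute(
        "SELECT id, created_at, payload FROM events"
        " WHERE created_at < ? AND id > ? ORDER BY id LIMIT ?",
        (cutoff, last_id, batch_size),
    ).fetchall()
    if not rows:
        return last_id, None                       # nothing left: idempotent re-run
    blob = json.dumps([list(r) for r in rows])     # in real life: upload to GCS here
    conn.execute(                                  # only purge what was archived
        "DELETE FROM events WHERE id <= ? AND created_at < ? AND id > ?",
        (rows[-1][0], cutoff, last_id),
    )
    conn.commit()
    return rows[-1][0], blob                       # persist watermark for next run
```

Storing the returned watermark (e.g. in a small state table) is what makes repeated scheduled runs skip already-archived data.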
https://redd.it/1mn9f6p
@r_devops
Looking for some advice on career switch
I have been working as a QA engineer for almost 8 years now, with experience in web, mobile, and API testing, both manual and automated. Currently I work mostly on performance testing using JMeter, and I'm looking to switch careers and move into DevOps, since I interact with that side a lot and find it interesting.
I have worked with Python and Java but am not that great at programming. I know there is a roadmap shared here and a lot of threads with advice, but I'm mostly looking at what I can do to switch and what kind of jobs to look for that may pay the same or slightly more than what I currently get.
I tried using Copilot to give me a basic rundown, but it suggested certificates from sites like "DevOps University", etc. I have also seen certifications offered by Azure, AWS, and Google, but I'm not sure which to go for or where to start. I am from India but am open to looking for jobs anywhere, so general advice and guidance would be good. Thanks
https://redd.it/1mn63dy
@r_devops
How to learn on a bike tour?
Hey guys!
I'll be on a solo month-long bike tour, and since I'm a full-stack developer switching to DevOps, I'd like to spend my time wisely and learn. I'm highly motivated: I've already set up a home lab with Proxmox on it, which I can access via WireGuard.
I know hands-on experience is the best way to get a job and learn the most, but since I can't really do that while biking or on a phone, I'm looking for podcasts, courses, YouTube channels, and books (I have a Kindle as well) that would be a great way to spend my time!
https://redd.it/1mnc6lf
@r_devops
On-site SRE Assessment
I’ve got an upcoming on-site technical assessment for a Site Reliability Engineer role. I already passed the oral technical interview, but I have no idea what they might throw at me in the hands-on part.
Has anyone here been through a similar assessment? What kinds of tasks or challenges should I be prepared for? Any tips on what to focus on in the next few days would be really appreciated.
Thanks in advance!
https://redd.it/1mnja43
@r_devops
Debian 12 Packer image on Proxmox keeps on waiting for auto configuration network
I'm struggling a bit to make Packer work on my Proxmox hypervisor to create a VM template.
I keep getting hit by the "network autoconfiguration failed" prompt even though my preseed.cfg disables network autoconfig.
It seems like the settings in my preseed.cfg aren't used. I've set up a fixed IP address, but it keeps hitting this prompt...
[![Screenshot of Debian 12 prompting "Network autoconfiguration failed"][1]][1]
Here are my files:
## debian12.pkr.hcl:
```
// debian12.pkr.hcl
packer {
required_plugins {
name = {
version = "1.1.6"
source = "github.com/hashicorp/proxmox"
}
}
}
variable "bios_type" {
type = string
}
variable "boot_command" {
type = string
}
variable "boot_wait" {
type = string
}
variable "bridge_firewall" {
type = bool
default = false
}
variable "bridge_name" {
type = string
}
variable "cloud_init" {
type = bool
}
variable "iso_file" {
type = string
}
variable "iso_storage_pool" {
type = string
default = "local"
}
variable "machine_default_type" {
type = string
default = "pc"
}
variable "network_model" {
type = string
default = "virtio"
}
variable "os_type" {
type = string
default = "l26"
}
variable "proxmox_api_token_id" {
type = string
}
variable "proxmox_api_token_secret" {
type = string
sensitive = true
}
variable "proxmox_api_url" {
type = string
}
variable "proxmox_node" {
type = string
}
variable "qemu_agent_activation" {
type = bool
default = true
}
variable "scsi_controller_type" {
type = string
}
variable "ssh_timeout" {
type = string
}
variable "tags" {
type = string
}
variable "io_thread" {
type = bool
}
variable "cpu_type" {
type = string
default = "kvm64"
}
variable "vm_info" {
type = string
}
variable "disk_discard" {
type = bool
default = true
}
variable "disk_format" {
type = string
default = "qcow2"
}
variable "disk_size" {
type = string
default = "16G"
}
variable "disk_type" {
type = string
default = "scsi"
}
variable "nb_core" {
type = number
default = 1
}
variable "nb_cpu" {
type = number
default = 1
}
variable "nb_ram" {
type = number
default = 1024
}
variable "ssh_username" {
type = string
}
variable "ssh_password" {
type = string
}
variable "ssh_handshake_attempts" {
type = number
}
variable "storage_pool" {
type = string
default = "local-lvm"
}
variable "vm_id" {
type = number
default = 99999
}
variable "vm_name" {
type = string
}
locals {
packer_timestamp = formatdate("YYYYMMDD-hhmm", timestamp())
}
source "proxmox-iso" "debian12" {
bios = "${var.bios_type}"
boot_command = ["${var.boot_command}"]
boot_wait = "${var.boot_wait}"
cloud_init = "${var.cloud_init}"
cloud_init_storage_pool = "${var.storage_pool}"
communicator = "ssh"
cores = "${var.nb_core}"
cpu_type = "${var.cpu_type}"
http_directory = "autoinstall"
insecure_skip_tls_verify = true
iso_file = "${var.iso_file}"
machine = "${var.machine_default_type}"
memory = "${var.nb_ram}"
node = "${var.proxmox_node}"
os = "${var.os_type}"
proxmox_url = "${var.proxmox_api_url}"
qemu_agent = "${var.qemu_agent_activation}"
scsi_controller = "${var.scsi_controller_type}"
sockets = "${var.nb_cpu}"
ssh_handshake_attempts = "${var.ssh_handshake_attempts}"
ssh_pty = true
ssh_timeout = "${var.ssh_timeout}"
ssh_username = "${var.ssh_username}"
ssh_password = "${var.ssh_password}"
tags = "${var.tags}"
template_description = "${var.vm_info} - ${local.packer_timestamp}"
token = "${var.proxmox_api_token_secret}"
unmount_iso = true
username = "${var.proxmox_api_token_id}"
vm_id = "${var.vm_id}"
vm_name = "${var.vm_name}"
disks {
discard = "${var.disk_discard}"
disk_size = "${var.disk_size}"
format = "${var.disk_format}"
io_thread = "${var.io_thread}"
storage_pool = "${var.storage_pool}"
type = "${var.disk_type}"
}
network_adapters {
bridge = "${var.bridge_name}"
firewall = "${var.bridge_firewall}"
model = "${var.network_model}"
}
}
build {
sources = ["source.proxmox-iso.debian12"]
}
```
## debian12.pkrvars.hcl:
```
// custom.pkvars.hcl
bios_type = "seabios"
boot_command = "<esc><wait>auto console-keymaps-at/keymap=fr console-setup/ask_detect=false debconf/frontend=noninteractive fb=false url=https://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<enter>"
boot_wait = "10s"
bridge_name = "vmbr1"
bridge_firewall = false
cloud_init = true
cpu_type = "x86-64-v2-AES"
disk_discard = true
disk_format = "qcow2"
disk_size = "12G"
disk_type = "scsi"
iso_file = "DIR01:iso/debian-12.5.0-amd64-netinst.iso"
machine_default_type = "pc"
nb_core = 1
nb_cpu = 1
nb_ram = 2048
network_model = "virtio"
io_thread = false
os_type = "l26"
proxmox_api_token_id = "packer@pve!packer"
proxmox_api_token_secret = "token_secret"
proxmox_api_url = "https://ip_address:8006/api2/json"
proxmox_node = "node1"
qemu_agent_activation = true
scsi_controller_type = "virtio-scsi-pci"
ssh_handshake_attempts = 6
ssh_timeout = "35m"
ssh_username = "packer"
ssh_password = ""
storage_pool = "DIR01"
tags = "template"
vm_id = 99999
vm_info = "Debian 12 Packer Template"
vm_name = "pckr-deb12"
```
## autoinstall/preseed.cfg:
```
#_preseed_V1
d-i debian-installer/language string en
d-i debian-installer/country string FR
d-i debian-installer/locale string en_US.UTF-8
d-i localechooser/supported-locales multiselect en_US.UTF-8, fr_FR.UTF-8
d-i keyboard-configuration/xkb-keymap select fr
d-i console-keymaps-at/keymap select fr-latin9
d-i debian-installer/keymap string fr-latin9
# d-i netcfg/dhcp_failed note
# d-i netcfg/dhcp_options select Configure network manually
d-i netcfg/disable_autoconfig boolean true
d-i netcfg/choose_interface select auto
d-i netcfg/get_ipaddress string 10.10.1.250
d-i netcfg/get_netmask string 255.255.255.0
d-i netcfg/get_gateway string 10.10.1.254
d-i netcfg/get_nameservers string 1.1.1.1
d-i netcfg/confirm_static boolean true
d-i netcfg/get_hostname string pckr-deb12
d-i netcfg/get_domain string local.hommet.net
d-i hw-detect/load_firmware boolean false
d-i mirror/country string FR
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string
d-i passwd/root-login boolean true
d-i passwd/make-user boolean true
d-i passwd/root-password password pouetpouet
d-i passwd/root-password-again password pouetpouet
d-i passwd/user-fullname string jho
d-i passwd/username string jho
d-i passwd/user-password password pouetpouet
d-i passwd/user-password-again password pouetpouet
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Paris
d-i clock-setup/ntp boolean true
d-i clock-setup/ntp-server string 0.fr.pool.ntp.org
d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-auto-lvm/guided_size string max
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto/choose_recipe select multi
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-md/confirm boolean true
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/mount_style select uuid
d-i base-installer/install-recommends boolean false
d-i apt-setup/cdrom/set-first boolean false
d-i apt-setup/use_mirror boolean true
d-i apt-setup/security_host string security.debian.org
tasksel tasksel/first multiselect standard, ssh-server
d-i pkgsel/include string qemu-guest-agent sudo ca-certificates cloud-init
d-i pkgsel/upgrade select safe-upgrade
popularity-contest popularity-contest/participate boolean false
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean false
d-i grub-installer/bootdev string default
d-i finish-install/reboot_in_progress note
d-i cdrom-detect/eject boolean true
```
If you have any idea how to make it work, let me know.
[1]: https://i.sstatic.net/4aGcUjyL.png
I can't understand it; it feels like the preseed.cfg file isn't being taken into consideration.
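One way to narrow this down is to rule out formatting problems in the file itself, since a malformed preseed line simply fails to set its question. A rough structural check (illustrative only; on a Debian host, `debconf-set-selections --checkonly preseed.cfg` is the proper validator):

```python
# Rough structural check for a preseed file: every non-comment line should
# have at least "owner question type" fields (the value may be empty, as in
# "d-i mirror/http/proxy string"). A value that ends up on its own line,
# e.g. after careless wrapping, gets flagged, and that is exactly the kind
# of thing that leaves a question unanswered during the install.
def check_preseed(text):
    bad = []
    for n, line in enumerate(text.splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if len(line.split(None, 3)) < 3:
            bad.append((n, line))  # (line number, offending line)
    return bad
```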
https://redd.it/1mnladp
@r_devops
Helping in devops projects for free
hi folks,
i’m a junior devops/sre engineer based in france, looking to join community projects in devops or infrastructure.
i’m open to freelance work too if it helps me build up stars. i’ve reached out to people before but haven’t had much luck, and i’m not looking to spend a ton of time on self-promo since i already have a good job.
i just want to put my skills into something interesting and well-built, while working with cool people.
curious to know if others are in the same situation: where do you usually find these kinds of projects?
https://redd.it/1mnj3de
@r_devops
GitHub's CEO is leaving and GitHub will be more directly integrated into Microsoft CoreAI
link
I wonder what impact this will have on the GitHub ecosystem. We use GitHub and GitHub Actions extensively as our primary git and CI platforms, respectively...
https://redd.it/1mnp20a
@r_devops
The GitHub Blog
Auf Wiedersehen, GitHub ♥️
I am stepping down as GitHub CEO to build my next adventure. GitHub is thriving and has a bright future ahead.
Accelerating AI-assisted development
https://metalbear.co/blog/cursor-windsurf-mirrord-extension/
https://redd.it/1mnpg8k
@r_devops
MetalBear 🐻
Accelerating AI-Assisted Development With mirrord
Learn how to accelerate AI-assisted development by testing AI-generated code instantly in production-like environments using the mirrord extension for Cursor and Windsurf.
100+ Rejections, No FAANG Internship, and Still a Senior SWE — Here’s How She Did It
Breaking into tech is hard enough when you’re in a major hub.
It’s even harder when you’re applying from across the world.
I recently spoke with a developer from Nigeria who applied for over 100 jobs before getting her first break.
No FAANG internship. No insider network. Just persistence.
What finally worked:
* Building personal projects to prove she could ship
* Following up after applications (almost no one does this)
* Learning to **communicate impact**, not just list skills
Her first role was an internship via a job board — nothing glamorous, but it opened the door.
Six years later, she’s a Senior SWE building global SaaS products and mentoring 200+ devs.
Her advice for people trying to land their first SWE role:
>
I wrote up the full conversation (with her detailed strategies) — I’ll drop it in the first comment for anyone who wants to read.
What about you — what was your *breakthrough moment* in getting your first SWE job?
https://redd.it/1mnsqp9
@r_devops
Interview questions!!
I am aware that DevOps is a vast field. You have to know tools. You have to know process. You have to show your engineering chops. You have to know yourself. It is not limited to certain tools, certain processes, or certain organizations.
But my question is for the people who are interviewing: who is your ideal candidate?
A. The one who can solve your problem
B. The one who fits your team
C. The one who fits the company culture
D. The one who is affordable
E. A back-door hire??
This is a vote on you!! Please vote as you please. Treat this as a self-assessment test!!
https://redd.it/1mnu192
@r_devops
What are the hardest tasks you had to complete during your career?
What are the hardest tasks you had to complete during your career? I am curious to know what one might expect as a DevOps engineer. I am not a DevOps engineer, but I did take a bunch of courses just in case the job market becomes really competitive, since people tend to prefer candidates with a wide array of skills.
https://redd.it/1mnvau9
@r_devops
Switching inter-service calls from HTTPS to STOMP over WebSockets - Bad idea for enterprise?
**TL;DR:** My team builds software for high-security clients (banks, government). We're considering replacing our inter-cluster HTTPS (REST) calls with STOMP over WebSockets (wss://) for a more message-driven architecture. I have some serious reservations and would love the community's opinion.
**Current Setup:** Multiple Kubernetes clusters, potentially in different regions, communicating via standard HTTPS.
**Proposed Change:** Move to persistent WebSocket connections running the STOMP messaging protocol, all secured by TLS.
**My Concerns:**
* **Security Inspection:** Our customers' Web Application Firewalls (WAFs) can inspect HTTP traffic for threats, which won't be possible with the new approach.
* **Monitoring & Logging:** With HTTPS, customers get rich access logs (path, status code, etc.) from our ingress controllers and service mesh. With WebSockets, the logs will just show "connection opened" and "connection closed," making it less transparent.
* **Operational Overhead:** Routing and load balancing is harder due to persistent connections.
This change will make our application much more performant, but will it be a blocker for our customers? Is there anything that could be done to mitigate these concerns? I was thinking we could reduce the duration of the persistent connections to a few minutes; that would at least help with the load-balancing problem. What else can be done? Is this acceptable or a no-go?
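The time-boxed-connection idea can be sketched at the ingress layer. Assuming the community ingress-nginx controller (the annotation names come from its documentation; the resource, host, and service names here are placeholders), capping the proxy timeouts closes long-lived upgraded connections, forcing clients to reconnect and letting the load balancer re-spread them:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stomp-gateway                  # hypothetical name
  annotations:
    # Close upgraded connections after ~5 minutes of proxying,
    # so clients reconnect and get re-balanced across pods.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
spec:
  rules:
    - host: stomp.example.internal     # placeholder host
      http:
        paths:
          - path: /ws
            pathType: Prefix
            backend:
              service:
                name: stomp-broker     # placeholder service
                port:
                  number: 443
```

This only works safely if every client implements reconnect-with-backoff and can tolerate in-flight messages being redelivered or re-requested after a cut, so it addresses the load-balancing concern but not the WAF-inspection or access-log ones.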
https://redd.it/1mnwl0r
@r_devops
Has anyone deployed AI generated stacks with Autocoder?
I’ve been exploring ways to streamline deployment workflows for small projects, and I've seen a tool called Autocoder cc, which auto-generates both code and infrastructure files. I’m curious how this compares to manual infrastructure setup from a DevOps perspective, especially in terms of reliability, security, and CI/CD integration. Has anyone here tried deploying something generated by tools like this? What was your experience?
https://redd.it/1mnxsjo
@r_devops