25 years to the day!! of my first surviving open source PHP project: PHP-Egg, born 13 April 2001. First PHP daemon. First RFC PHP client (IRC). First long-running (months on end) PHP process.
https://github.com/hopeseekr/phpegg
https://redd.it/1sl89rw
@r_php
GitHub
GitHub - hopeseekr/phpegg: The first IRC bot + IRC client / server + CLI daemon in PHP.
SymfonyLive Berlin: "Build Applications that Welcome Change"
https://symfony.com/blog/symfony-live-berlin-build-applications-that-welcome-change?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1slcok7
@r_php
Symfony
SymfonyLive Berlin: "Build Applications that Welcome Change" (Symfony Blog)
Change is inevitable — but your architecture can make it easier. In “Build Applications that Welcome Change”, Alexander M. Turek shares how to design Symfony apps built for long-term evolution.
SymfonyLive Berlin 2026: "AI Culture in Open Source — The Sylius Way"
https://symfony.com/blog/s?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sl57gb
@r_php
I built a modern and clean PHP wrapper for Android ADB (xvq/php-adb)
**Hi Reddit,**
**I built this a while back** when I was working on some Android automation projects. At the time, I found that the PHP ecosystem lacked native ADB (Android Debug Bridge) libraries. I was forced to switch to Python or Go for device interactions, but the context-switching cost was too high for my workflow.
So, I developed **xvq/php-adb**. This library is **heavily inspired by the Python** [**openatx/adbutils**](https://github.com/openatx/adbutils) **library**, aiming to bring that same ease of use to PHP.
**Features:**
* **Device Management:** List, connect, and switch between devices (supports wireless ADB).
* **Shell Commands:** Execute adb shell commands and get output as strings or arrays.
* **Input Control:** Support for screen taps (clicks), key events, and text input.
* **Port Forwarding:** Manage forward and reverse port mapping.
* **File Transfer:** Built-in `push` and `pull` support.
* **App Management:** Install, uninstall, and clear app data.
* **Screenshots:** Capture screen directly to local files.
**Quick Example:**
```php
<?php

use Xvq\PhpAdb\AdbClient;
use Xvq\PhpAdb\KeyCode; // assumed location of the KeyCode enum

$adb = new AdbClient();
$device = $adb->device('emulator-5554');

// Tap at coordinates
$device->input->tap(500, 1000);

// Press the Home button
$device->input->keyEvent(KeyCode::KEY_HOME);

// Screenshot
$device->screenshot('./debug.png');
```
I hope this helps anyone doing Android automation within the PHP ecosystem. Feedback and bug reports are welcome!
**GitHub:** [https://github.com/xvq/php-adb](https://github.com/xvq/php-adb)
https://redd.it/1sl81xx
@r_php
GitHub
GitHub - openatx/adbutils: pure python adb library for google adb service.
Shopper: Announcing the Livewire Starter Kit
https://laravelshopper.dev/blog/announcing-the-livewire-starter-kit
https://redd.it/1slkefv
@r_php
Shopper
Announcing the Livewire Starter Kit // Shopper
A production-ready storefront built with Livewire 3, Flux UI, and Tailwind CSS v4. Product browsing, variant selection, multi-step checkout with Stripe, customer accounts. One command to install.
Bootgly v0.13.0-beta — Pure PHP HTTP Client (no cURL, no Guzzle, no ext dependencies) + Import Linter
Hey r/PHP,
I just released [v0.13.0-beta](https://github.com/bootgly/bootgly/releases/tag/v0.13.0-beta) of [Bootgly](https://github.com/bootgly/bootgly), a base PHP 8.4+ framework that follows a zero third-party dependency policy.
Just install `php-cli`, `php-readline`, and `php-mbstring` for PHP 8.4, and you'll have a high-performance HTTP server and client (see Benchmarks below)! No Symfony components, no League packages, nothing from Packagist in the core.
This release adds two main features:
# 1. HTTP Client CLI — built from raw sockets
Not a cURL wrapper. Not a Guzzle fork. This is a from-scratch HTTP client built on top of `stream_socket_client` with its own event loop:
* All standard methods (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS)
* RFC 9112-compliant response decoding — chunked transfer-encoding, content-length, close-delimited
* 100-Continue two-phase requests (sends headers first, waits for server acceptance before body)
* Keep-alive connection reuse
* Request pipelining (multiple requests queued per connection)
* Batch mode (`batch()` → N × `request()` → `drain()`)
* Event-driven async mode via `on()` hooks
* SSL/TLS support
* Automatic redirect following (configurable limit)
* Connection timeouts + automatic retries
* Multi-worker load generation (fork-based) for benchmarking
The whole client stack is \~3,700 lines of source code (TCP layer + HTTP layer + encoders/decoders + request/response models) plus \~2,000 lines of tests. No magic, no abstraction astronautics.
**Why build an HTTP client from scratch instead of using cURL?** Because the same event loop (`Select`) that powers the HTTP Server also powers the HTTP Client. They share the same connection management, the same non-blocking I/O model. The client can be used to benchmark the server with real HTTP traffic without any external tool.
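To make the RFC 9112 decoding mentioned above concrete, here is a minimal sketch of chunked transfer-encoding decoding. This is illustrative only (it is not Bootgly's implementation, and `decode_chunked` is a hypothetical helper name):

```php
<?php
// Decode an HTTP/1.1 chunked body per RFC 9112: each chunk is a hex size
// line, CRLF, the chunk data, CRLF; a zero-size chunk terminates the body.
function decode_chunked(string $body): string
{
    $decoded = '';
    $offset = 0;
    while (true) {
        $lineEnd = strpos($body, "\r\n", $offset);
        if ($lineEnd === false) {
            break; // malformed or incomplete input
        }
        // Chunk size is hex; ignore any chunk extensions after ';'
        $sizeLine = substr($body, $offset, $lineEnd - $offset);
        $size = hexdec(strtok($sizeLine, ';'));
        if ($size === 0) {
            break; // last-chunk reached
        }
        $decoded .= substr($body, $lineEnd + 2, $size);
        $offset = $lineEnd + 2 + $size + 2; // skip chunk data + trailing CRLF
    }
    return $decoded;
}

$wire = "4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n";
echo decode_chunked($wire); // Wikipedia
```

The real decoder also has to handle the other two body delimitations the post lists (content-length and close-delimited), but the chunked case is where most of the parsing logic lives.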
# 2. Import Linter (bootgly lint imports)
A code style checker/fixer for PHP `use` statements:
* Detects missing imports, wrong order (const → function → class), backslash-prefixed FQN in body
* Auto-fix mode (`--fix`) with `php -l` validation before writing
* Dry-run mode
* AI-friendly JSON output for CI integration
* Handles comma-separated `use`, multi-namespace files, local function tracking (avoids false positives)
Built on `token_get_all()` — no nikic/php-parser dependency.
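As a rough illustration of the `token_get_all()` approach (a simplified assumption, not Bootgly's code), a minimal `use`-statement scanner might look like:

```php
<?php
// Collect every top-level `use` statement from a source string by walking
// the token stream: start capturing at T_USE, stop at the terminating ';'.
$source = <<<'SRC'
<?php
use const PHP_EOL;
use function strlen;
use App\Service;
SRC;

$imports = [];
$current = null;
foreach (token_get_all($source) as $token) {
    if (is_array($token) && $token[0] === T_USE) {
        $current = '';
        continue;
    }
    if ($current === null) {
        continue; // not inside a use statement
    }
    if ($token === ';') {
        $imports[] = trim($current);
        $current = null;
        continue;
    }
    $current .= is_array($token) ? $token[1] : $token;
}

print_r($imports); // const PHP_EOL, function strlen, App\Service
```

A linter then only has to compare the collected list against the const → function → class ordering rule and against the names actually referenced in the file body.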
# Benchmarks (self-tested, WSL2, Ryzen 9 3900X, 12 workers)
*Numbers below reflect* [*v0.13.1-beta*](https://github.com/bootgly/bootgly/releases/tag/v0.13.1-beta)*, a patch release with HTTP Client hot-path optimizations (+29.6% throughput) and cache isolation tests.*
Scenario: 1 static route (Response is 'Hello, World!'), 514 concurrent connections, 10s duration.
|Runner|Req/s|Latency|Transfer/s|
|:-|:-|:-|:-|
|**Bootgly TCP\_Client\_CLI**|**629,842**|553μs|81.69 MB/s|
|**WRK** (C tool)|**595,370**|—|—|
|**Bootgly HTTP\_Client\_CLI**|**568,058**|1.07ms|56.95 MB/s|
Three different benchmark runners, all built-in (except wrk). The TCP client sends raw pre-built HTTP packets — that's the theoretical ceiling. The HTTP client builds and parses real HTTP requests/responses with full RFC compliance — that's the realistic throughput. WRK sits in between. All three confirm the server sustains **568k–630k req/s** on a single machine with pure PHP + OPcache/JIT.
To provide context: [Workerman at TechEmpower Round 23](https://www.techempower.com/benchmarks/#section=data-r23&test=plaintext&l=zik073-pa7) — the fastest pure PHP framework there — achieved approximately 580,000 requests per second on dedicated hardware. Bootgly reaches that level, with a difference of about 3% (a technical tie).
Why this absurd performance?
I tried replacing `stream_select` with `libev` or `libuv` and it got worse — the bottleneck is in the C ↔️ PHP bridge, not in the syscall.
The C → PHP callback dispatch via `zend_call_function()` is approximately 50% more expensive than a direct PHP method call. Many people don't know this, but `stream_select` itself has absurd performance, and a direct PHP call is about 50% faster than one routed through the C ↔️ PHP bridge.
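For readers unfamiliar with `stream_select`, here is a self-contained sketch of the pattern (illustrative only, not Bootgly's event loop): one syscall watches a set of streams, and the loop only reads when data is actually ready.

```php
<?php
// A tiny stream_select() loop over a local socket pair: no network needed.
[$a, $b] = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);
stream_set_blocking($a, false);
stream_set_blocking($b, false);

fwrite($b, "ping"); // pretend a peer sent us data

$received = '';
while ($received === '') {
    $read = [$a];            // streams we want to read from
    $write = $except = null; // not watching writability or exceptions here
    // Block (up to 1s) until any watched stream is readable.
    if (stream_select($read, $write, $except, 1) > 0) {
        $received = fread($a, 8192);
    }
}
echo $received; // ping
```

A real server or client keeps hundreds of connections in the `$read`/`$write` arrays and dispatches to per-connection handlers after each wakeup; the per-event cost is dominated by the PHP-side dispatch, which is the bottleneck the post describes.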
# Stats
* 37 commits, 467 files changed, +13,426 / −3,996 lines
* PHPStan level 9 — 0 errors
* 331 test cases passing (using Bootgly's own test framework, not PHPUnit)
# The "why should I care" part
I know r/PHP sees a lot of "my framework" posts. Here's what makes Bootgly different from Yet Another Framework™:
1. **Zero third-party deps in core.** The vendor folder in production has exactly one package: Bootgly itself. This isn't ideological — it means the HTTP server boots in \~2ms and the entire framework loads in a single autoboot.php.
2. **I2P architecture (Interface-to-Platform).** Six layers (ABI → ACI → ADI → API → CLI → WPI) with strict one-way dependency. CLI creates the Console platform, WPI creates the Web platform. Each layer can only depend on layers below it. This is enforced by convention and static analysis, not by DI magic.
3. **One-way policy.** There is exactly one HTTP server, one router, one test framework, one autoloader. No "pick your adapter" indirection. This makes the codebase smaller and easier to audit.
4. **Built for PHP 8.4.** Property hooks, typed properties everywhere, enums, fibers-ready. No PHP 7 compatibility baggage.
It's still beta — not production-ready. But if you're tired of frameworks where `composer install` downloads 200 packages to serve a JSON response, take a look.
GitHub: [https://github.com/bootgly/bootgly](https://github.com/bootgly/bootgly)
Release: [https://github.com/bootgly/bootgly/releases/tag/v0.13.0-beta](https://github.com/bootgly/bootgly/releases/tag/v0.13.0-beta)
Patch: [https://github.com/bootgly/bootgly/releases/tag/v0.13.1-beta](https://github.com/bootgly/bootgly/releases/tag/v0.13.1-beta)
Happy to answer questions and take criticism.
https://redd.it/1slnw50
@r_php
Instant view switches with Inertia v3 prefetching
https://freek.dev/3087-instant-view-switches-with-inertia-v3-prefetching
https://redd.it/1slyo1i
@r_php
freek.dev
Instant view switches with Inertia v3 prefetching | freek.dev
Over the past few months we've been building There There at Spatie, a support tool shaped by the two decades we've spent running our own customer support. The goal is simple: the helpdesk we always wished we had.
We care about using AI in a particular way.…
Why Projections Exist — Your First Read Model
https://medium.com/@dariuszgafka/why-projections-exist-your-first-read-model-ceaab247bc2b
https://redd.it/1slz17q
@r_php
Medium
Why Projections Exist — Your First Read Model
Event Sourcing gives you a complete history of everything that happened in your system. What it does not give you is a way to query it.
Is Claude my permanent co-author?
I wanted to migrate an old PHP web app that I wrote by hand to a modern framework, and chose Symfony. I prepared some docs, watched some Symfony YouTube videos, and resisted getting started for months. Finally, I decided to see if Claude Code could get me over the hump. Well, I'm astounded by the result. Completely rebuilt in a solid Symfony framework in about 10 days. Works beautifully. I had Claude build documentation as well, but now I have a site whose internal wiring is really beyond my ability to manage responsibly. I can invoke Claude in the code base and pick up work at any time, but I couldn't maintain the system without Claude. I feel peculiar about it now: I'm the (human) author, but I have an AI partner that has to be part of the "team" going forward. I can't be the first person to get here. Any words of advice?
https://redd.it/1slxyp7
@r_php
How I evolved a PHP payment model from one table to DDD — channels, state machines, and hexagonal architecture
I got tired of every project reinventing the payment layer from scratch, so I tried to build a proper domain model in PHP and document the process.
Wrote about going from a single table to channels, state machines, and hexagonal architecture.
It's an experiment, not a final answer — curious how others tackle this.
https://corner4.dev/reinventing-payment-how-i-evolved-a-domain-model-from-one-table-to-ddd
https://redd.it/1sm3f80
@r_php
SymfonyLive Berlin 2026: "Specing out teamwork"
https://symfony.com/blog/symfonylive-berlin-2026-specing-out-teamwork?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sm8ctw
@r_php
Symfony
SymfonyLive Berlin 2026: "Specing out teamwork" (Symfony Blog)
Great teams don’t just follow processes — they build their own. In “Specing out teamwork”, Stiven Llupa shares strategies to improve collaboration and delivery in Symfony teams.
Your AI Agent Has Amnesia. Let's Fix It. - Ship AI with Laravel EP4
https://youtu.be/mZbyCIOFWsE
https://redd.it/1sm8hp0
@r_php
YouTube
Your AI Agent Has Amnesia. Let's Fix It. - Ship AI with Laravel EP4
Episode 4 of Ship AI with Laravel, a series on Laravel News where we build a full AI platform using Laravel 13 and the Laravel AI SDK.
Our agent can look up orders and pull customer history, but every message is a blank slate. Ask about order 1042, get a…
Anyone else get tired of rebuilding Filament resources every time admin requirements change?
I kept hitting the same pattern in Laravel / Filament projects:
the first version of the admin panel is usually fine, but later the data side keeps changing.
New content type.
More custom fields.
Better filtering.
Dashboards.
API requirements.
Tenant-specific behavior.
More exceptions.
At that point, every "small" change becomes another migration, another model, another Filament resource, and another layer of maintenance.
So I built a plugin called **Filament Studio** for Filament v5.
The idea is to let you create collections and fields at runtime, manage records through generated Filament UI, build dashboards, add advanced filtering, and expose APIs without rebuilding a brand-new schema layer every time requirements shift.
It also includes things I thought were important if this is going to be useful beyond a demo:
- authorization
- multi-tenancy
- versioning
- soft deletes
- custom field and panel extensibility
I know some people will immediately look at the EAV angle and prefer hand-built resources anyway, which is fair.
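For anyone weighing the EAV trade-off being discussed, here is a minimal sketch of the pattern (illustrative table names, not Filament Studio's actual schema):

```php
<?php
// Entity-attribute-value in miniature: records and their field values are
// plain rows, so a new "custom field" is an INSERT rather than a migration.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE records (id INTEGER PRIMARY KEY, collection TEXT)');
$pdo->exec('CREATE TABLE record_values (record_id INTEGER, field TEXT, value TEXT)');

// One record in a runtime-defined "article" collection
$pdo->exec("INSERT INTO records (collection) VALUES ('article')");
$id = $pdo->lastInsertId();

$stmt = $pdo->prepare('INSERT INTO record_values VALUES (?, ?, ?)');
$stmt->execute([$id, 'title', 'Hello']);
$stmt->execute([$id, 'status', 'draft']);

$fields = $pdo->query("SELECT field, value FROM record_values WHERE record_id = $id")
              ->fetchAll(PDO::FETCH_KEY_PAIR);
print_r($fields); // [title => Hello, status => draft]
```

The flexibility comes at the cost of typed columns, database-level constraints, and straightforward indexing, which is exactly why stable-schema projects tend to prefer hand-built resources.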
I am mostly curious about where other Laravel developers draw that line.
If you are building something with a stable schema, I still think hand-built resources make sense.
But if the admin/data model changes constantly, would you rather keep building each resource manually, or use something like this?
Repo if you want to look at it:
GitHub: https://github.com/flexpik/filament-studio
I am not looking for empty promotion here. I would rather hear the real objections or the kinds of projects where this would actually help.
https://redd.it/1smipbx
@r_php
GitHub
GitHub - flexpik/filament-studio: Dynamic data model manager for Filament — EAV storage, 33 field types, dashboards, multi-tenancy
Dynamic data model manager for Filament — EAV storage, 33 field types, dashboards, multi-tenancy - flexpik/filament-studio
I built a CLI that generates Symfony-compatible Twig templates from an Astro frontend project
Hello,
Sharing a tool I built for a workflow that comes up often on agency projects:
Astro frontend, Symfony backend, someone has to bridge the two.
Frontmatter Solo reads a constrained Astro project and generates Twig templates structured for Symfony:
frontmatter solo:build --adapter twig
Output drops directly into templates/.
The data contract follows the fm namespace. Your controller passes it as a single array.
INTEGRATION.md is generated automatically; it documents every Twig variable expected by each template. The Symfony developer reads it once, writes the controller, done.
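To make the "single array" contract concrete, here is a plain-PHP sketch of the shape such a payload might take (the keys here are hypothetical; the authoritative list for a given project is the generated INTEGRATION.md):

```php
<?php
// Hypothetical sketch of a single-array data contract under one namespace.
// In a Symfony controller this would be passed as one template variable, e.g.:
//   return $this->render('page.html.twig', ['fm' => $fm]);
$fm = [
    'site' => ['title' => 'Acme', 'locale' => 'en'],
    'page' => ['title' => 'About us', 'slug' => 'about'],
    'nav'  => [['label' => 'Home', 'href' => '/']],
];

// Templates then read everything through the one namespace,
// e.g. {{ fm.page.title }} or {% for item in fm.nav %} in Twig.
echo $fm['page']['title'], "\n";
```

The appeal of a single namespaced array is that the controller-template boundary stays one line wide, whatever the page needs.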
Compatible with Symfony 5, 6, 7 (Twig 3.x).
Also works with Drupal and Craft CMS.
Check compatibility before buying (free):
npx @withfrontmatter/solo-check
$49 one-time https://www.frontmatter.tech/solo/symfony
Happy to discuss the fm namespace convention or the controller integration pattern.
https://redd.it/1smtnqq
@r_php
www.frontmatter.tech
Generate Twig templates for Symfony from your HTML design — Frontmatter Solo
Turn your frontend design into Symfony-ready Twig templates. Solo reads a constrained Astro project and outputs partials, layouts, manifest.json, and INTEGRATION.md — ready to drop into templates/.
SymfonyLive Berlin 2026: "Simultaneous editing: Easy mode with Symfony UX"
https://symfony.com/blog/symfonylive-berlin-2026-simultaneous-editing-easy-mode-with-symfony-ux?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sn2nmd
@r_php
Symfony
SymfonyLive Berlin 2026: "Simultaneous editing: Easy mode with Symfony UX" (Symfony Blog)
No more heavy JavaScript frontends. In “Simultaneous editing: Easy mode with Symfony UX”, David Buchmann shows how Hotwire and Symfony UX enable fast, reactive web applications with minimal compl…
Two Composer command injection CVEs this week: the patch is one command, but there's a bigger picture here worth talking about
CVE-2026-40176 (CVSS 7.8) and CVE-2026-40261 (CVSS 8.8), both in Composer's Perforce VCS driver. The 8.8 one fires during `composer install` when pulling from source, so your CI pipeline is the actual attack surface. Public PoCs dropped today.
Fix: `composer self-update` to 2.9.6, or 2.2.27 if you're on LTS. One command, do it now.
The reason I'm posting beyond the PSA: this is the third significant PHP ecosystem CVE in about six weeks, and what's getting harder isn't patching; it's knowing what you actually need to care about before someone else tells you.
The same vulnerability comes in from four different feeds simultaneously. NVD, GitHub Advisories, OSV, Packagist all reporting the same CVE with different IDs, different severity framings, different context. And CVSS alone tells you very little. These two CVEs have PoCs live today. There are plenty of 9.0+ CVEs with no exploitation evidence that can sit in a backlog forever.
I've been building a tool called A.S.E. to deal with exactly this: it watches all the major feeds, deduplicates across them, cross-references against your actual composer.lock, and factors in exploit probability (EPSS) alongside CVSS so you're not just triaging severity theater. It's a good starting point for anyone wanting help in these situations.
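As a rough illustration of why EPSS changes the ordering (my own sketch, not A.S.E.'s actual scoring): a lower-CVSS issue with a public PoC and high exploit probability can outrank a 9.0+ with no exploitation evidence.

```php
<?php
// Toy triage score: weight severity (CVSS, 0-10) by exploit probability (EPSS, 0-1).
// Purely illustrative; real prioritization also considers KEV status, reachability, etc.
function triageScore(float $cvss, float $epss): float {
    return $cvss * $epss;
}

$backlog = [
    'CVE-A (CVSS 9.1, no exploitation evidence)' => triageScore(9.1, 0.01),
    'CVE-B (CVSS 8.8, public PoC)'               => triageScore(8.8, 0.60),
];
arsort($backlog); // sort by score, highest first

echo array_key_first($backlog), "\n"; // the 8.8 with a PoC ranks first
```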
github.com/infinri/A.S.E
https://redd.it/1sn3an5
@r_php
GitHub
GitHub - infinri/A.S.E: CVE monitoring for Magento / Adobe Commerce / Mage-OS. Polls KEV, NVD, GHSA, OSV, Packagist; filters against…
CVE monitoring for Magento / Adobe Commerce / Mage-OS. Polls KEV, NVD, GHSA, OSV, Packagist; filters against your composer.lock; alerts only P0/P1 to Slack. - infinri/A.S.E
PagibleAI 0.10: PHP CMS for developers AND editors
We just released Pagible 0.10, an open-source AI-powered CMS built as a PHP Composer package for Laravel applications:
* [https://pagible.com](https://pagible.com)
# What's new in 0.10
* MCP Server — Pagible ships with a built-in Model Context Protocol server. AI agents can create pages, manage content, and search your site programmatically. This makes Pagible one of the first CMS platforms where AI can directly manage your content through a standardized protocol.
* Customizable architecture — The codebase has been split into 9 independent sub-packages (core, admin, AI, GraphQL, search, MCP, theme, etc.). Install only what you need.
* Vuetify 4 admin panel — The admin backend has been upgraded to Vuetify 4 and optimized for WCAG accessibility, keyboard navigation and reduced bundle size.
* Significant performance work — This release focused heavily on database performance: optimized indexes, reduced query count, eager loading, optimized column selection, and faster page tree fetching.
* Rewritten fulltext search — Custom Scout engine supporting fulltext search in SQLite, MySQL/MariaDB, PostgreSQL, and SQL Server. Paginated results with improved relevance ranking.
* Named roles & JSON permissions — Moved from bitmask permissions to a readable JSON array system with configurable roles (e.g. editor, publisher, viewer).
* Security hardening — Rate limiting on all endpoints, stricter security defaults, and DoS protection on all inputs.
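The bitmask-to-JSON permission change is easy to appreciate with a small comparison (illustrative only; the role and permission names here are made up, not Pagible's):

```php
<?php
// Before: bitmask permissions are compact but opaque.
const CAN_EDIT    = 1 << 0;
const CAN_PUBLISH = 1 << 1;
$mask = CAN_EDIT | CAN_PUBLISH;          // stored as "3" -- you have to know the bits
$canPublish = (bool) ($mask & CAN_PUBLISH);

// After: a JSON array of named permissions is self-describing.
$role = json_decode('{"name":"publisher","permissions":["edit","publish"]}', true);
$canPublishJson = in_array('publish', $role['permissions'], true);

var_dump($canPublish, $canPublishJson); // both true, but only one is readable in the DB
```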
# What makes Pagible different
* API first — GraphQL and JSON:API endpoints out of the box. Build headless sites, mobile apps, or single-page applications without writing a single API route, or use traditional templates and themes, whichever you prefer.
* AI-native — MCP server for agent-driven content management, plus built-in AI features for content generation, translation, and image manipulation.
* Hierarchical pages — Nested set tree structure with versioning. Editors see drafts, visitors see published content.
* Multi-tenant — Global tenant scoping on all models out of the box.
* Small footprint — The entire codebase is deliberately kept small. No bloat, no unnecessary abstractions.
* LGPL-3.0 — Fully open source.
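The nested-set page tree mentioned above works by storing left/right boundaries per node, so "all descendants of X" becomes a single range comparison instead of a recursive walk. A minimal sketch with hypothetical data (not Pagible's actual schema):

```php
<?php
// Nested set: each node stores lft/rgt; the descendants of a node are all
// nodes whose interval falls strictly inside the node's own interval.
$pages = [
    ['title' => 'Home',    'lft' => 1, 'rgt' => 8],
    ['title' => 'Blog',    'lft' => 2, 'rgt' => 5],
    ['title' => 'Post #1', 'lft' => 3, 'rgt' => 4],
    ['title' => 'About',   'lft' => 6, 'rgt' => 7],
];

function descendants(array $pages, array $node): array {
    return array_values(array_filter(
        $pages,
        fn ($p) => $p['lft'] > $node['lft'] && $p['rgt'] < $node['rgt']
    ));
}

$blog = $pages[1];
echo count(descendants($pages, $blog)), "\n"; // 1 (just "Post #1")
```

In SQL this is one `WHERE lft > ? AND rgt < ?` query per subtree; the trade-off is that inserts and moves have to renumber boundaries.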
# Links
* Demo: [https://demo.pagible.com/](https://demo.pagible.com/cmsadmin)
* GitHub: [https://github.com/aimeos/pagible](https://github.com/aimeos/pagible)
* Website: [https://pagible.com](https://pagible.com)
Would love to hear your feedback and if you like it, give a star :-)
https://redd.it/1sn4r04
@r_php
Pagible
Pagible AI CMS - Next level content management!
PagibleAI, the AI-powered CMS that combines WordPress's ease of use with the power of Contentful. Get your work done in minutes instead of hours!