PHP Reddit
Channel to sync with /r/PHP /r/Laravel /r/Symfony. Powered by awesome @r_channels and @reddit2telegram
Bootgly v0.13.0-beta — Pure PHP HTTP Client (no cURL, no Guzzle, no ext dependencies) + Import Linter

Hey r/PHP,

I just released [v0.13.0-beta](https://github.com/bootgly/bootgly/releases/tag/v0.13.0-beta) of [Bootgly](https://github.com/bootgly/bootgly), a base PHP 8.4+ framework that follows a zero third-party dependency policy.

Just install `php-cli`, `php-readline`, and `php-mbstring` for PHP 8.4, and you'll have a high-performance HTTP server and client (see Benchmarks below)! No Symfony components, no League packages, nothing from Packagist in the core.

This release adds two main features:

# 1. HTTP Client CLI — built from raw sockets

Not a cURL wrapper. Not a Guzzle fork. This is a from-scratch HTTP client built on top of `stream_socket_client` with its own event loop:

* All standard methods (GET, POST, PUT, DELETE, PATCH, HEAD, OPTIONS)
* RFC 9112-compliant response decoding — chunked transfer-encoding, content-length, close-delimited
* 100-Continue two-phase requests (sends headers first, waits for server acceptance before body)
* Keep-alive connection reuse
* Request pipelining (multiple requests queued per connection)
* Batch mode (`batch()` → N × `request()` → `drain()`)
* Event-driven async mode via `on()` hooks
* SSL/TLS support
* Automatic redirect following (configurable limit)
* Connection timeouts + automatic retries
* Multi-worker load generation (fork-based) for benchmarking

The whole client stack is ~3,700 lines of source code (TCP layer + HTTP layer + encoders/decoders + request/response models) plus ~2,000 lines of tests. No magic, no abstraction astronautics.

**Why build an HTTP client from scratch instead of using cURL?** Because the same event loop (`Select`) that powers the HTTP Server also powers the HTTP Client. They share the same connection management, the same non-blocking I/O model. The client can be used to benchmark the server with real HTTP traffic without any external tool.
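
For readers who haven't written one, the core pattern is small. Below is a minimal, self-contained sketch of a `stream_select()` loop in which one process plays both server and client (this shows the general technique only; it is not Bootgly's actual `Select` implementation):

```php
<?php
// Sketch of a stream_select()-based event loop: one process listens,
// connects to itself, serves one request, and reads the response back.
// Illustration of the pattern only; not Bootgly's `Select` code.

$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
stream_set_blocking($server, false);
$addr = stream_socket_get_name($server, false);   // e.g. "127.0.0.1:54321"

// Client connection into our own listener (handshake completes via backlog).
$client = stream_socket_client("tcp://{$addr}", $errno, $errstr, 1);
stream_set_blocking($client, false);
fwrite($client, "GET / HTTP/1.1\r\nHost: demo\r\n\r\n");

$conns = [];
$response = '';

for ($i = 0; $i < 100 && $response === ''; $i++) {
    $read = array_merge([$server, $client], $conns);
    $write = $except = null;
    // Block for at most 1s waiting for any socket to become readable.
    if (stream_select($read, $write, $except, 1) === false) {
        break;
    }
    foreach ($read as $sock) {
        if ($sock === $server) {
            // Pending connection: accept() will not block now.
            if (($conn = stream_socket_accept($server, 0)) !== false) {
                stream_set_blocking($conn, false);
                $conns[(int) $conn] = $conn;
            }
        } elseif ($sock === $client) {
            // The response arrived on the client side.
            $response = (string) fread($client, 8192);
        } else {
            // A request arrived on a server-side connection: reply, close.
            fread($sock, 8192);
            fwrite($sock, "HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, World!");
            unset($conns[(int) $sock]);
            fclose($sock);
        }
    }
}
fclose($server);
echo $response, PHP_EOL;
```

The point of the pattern: accept, read, and write are only ever attempted on sockets `stream_select()` has already reported ready, so nothing blocks and one loop can multiplex the listener, every server-side connection, and outgoing client sockets at once.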

# 2. Import Linter (bootgly lint imports)

A code style checker/fixer for PHP `use` statements:

* Detects missing imports, wrong order (const → function → class), backslash-prefixed FQN in body
* Auto-fix mode (`--fix`) with `php -l` validation before writing
* Dry-run mode
* AI-friendly JSON output for CI integration
* Handles comma-separated `use`, multi-namespace files, local function tracking (avoids false positives)

Built on `token_get_all()` — no nikic/php-parser dependency.
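
As an illustration of the underlying technique (not the linter's actual code), collecting top-level `use` imports from `token_get_all()` output takes only a few lines; a real linter additionally has to distinguish closure `use` and trait `use`:

```php
<?php
// Sketch: collect `use` imports with token_get_all().
// Illustrates the technique only; a real linter must also handle
// closure `use (...)` and trait `use` inside class bodies.

function collect_imports(string $source): array
{
    $imports = [];
    $buffer  = null;

    foreach (token_get_all($source) as $token) {
        if (is_array($token) && $token[0] === T_USE) {
            $buffer = '';                      // start capturing an import
        } elseif ($buffer !== null) {
            if ($token === ';') {              // import statement ended
                $imports[] = trim($buffer);
                $buffer = null;
            } else {
                $buffer .= is_array($token) ? $token[1] : $token;
            }
        }
    }
    return $imports;
}

$code = <<<'PHP'
<?php
use App\Service\Mailer;
use function strlen;
use const PHP_EOL;
PHP;

print_r(collect_imports($code));
```

Because the tokenizer preserves the `function` and `const` keywords, the const → function → class ordering check reduces to inspecting the prefix of each collected entry.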

# Benchmarks (self-tested, WSL2, Ryzen 9 3900X, 12 workers)

*Numbers below reflect* [*v0.13.1-beta*](https://github.com/bootgly/bootgly/releases/tag/v0.13.1-beta)*, a patch release with HTTP Client hot-path optimizations (+29.6% throughput) and cache isolation tests.*

Scenario: 1 static route (Response is 'Hello, World!'), 514 concurrent connections, 10s duration.

|Runner|Req/s|Latency|Transfer/s|
|:-|:-|:-|:-|
|**Bootgly TCP_Client_CLI**|**629,842**|553μs|81.69 MB/s|
|**WRK** (C tool)|**595,370**|—|—|
|**Bootgly HTTP_Client_CLI**|**568,058**|1.07ms|56.95 MB/s|

Three different benchmark runners, all built-in (except wrk). The TCP client sends raw pre-built HTTP packets — that's the theoretical ceiling. The HTTP client builds and parses real HTTP requests/responses with full RFC compliance — that's the realistic throughput. WRK sits in between. All three confirm the server sustains **568k–630k req/s** on a single machine with pure PHP + OPcache/JIT.

To provide context: [Workerman at TechEmpower Round 23](https://www.techempower.com/benchmarks/#section=data-r23&test=plaintext&l=zik073-pa7) — the fastest pure PHP framework there — achieved approximately 580,000 requests per second on dedicated hardware. Bootgly reaches that level, with a difference of about 3% (a technical tie).

# Why this absurd performance?

I tried replacing `stream_select` with `libev` or `libuv` and it got worse — the bottleneck is in the C ↔️ PHP bridge, not in the syscall.

The C → PHP callback dispatch via `zend_call_function()` is approximately 50% more expensive than a direct PHP method call. Many people don't realize it, but `stream_select` performs remarkably well here: calling it directly from PHP is roughly 50% faster than routing each event through a C ↔️ PHP bridge.

# Stats

* 37 commits, 467 files changed, +13,426 / −3,996 lines
* PHPStan level 9 — 0 errors
* 331 test cases passing (using Bootgly's own test framework, not PHPUnit)

# The "why should I care" part

I know r/PHP sees a lot of "my framework" posts. Here's what makes Bootgly different from Yet Another Framework:

1. **Zero third-party deps in core.** The vendor folder in production has exactly one package: Bootgly itself. This isn't ideological — it means the HTTP server boots in ~2ms and the entire framework loads in a single `autoboot.php`.
2. **I2P architecture (Interface-to-Platform).** Six layers (ABI → ACI → ADI → API → CLI → WPI) with strict one-way dependency. CLI creates the Console platform, WPI creates the Web platform. Each layer can only depend on layers below it. This is enforced by convention and static analysis, not by DI magic.
3. **One-way policy.** There is exactly one HTTP server, one router, one test framework, one autoloader. No "pick your adapter" indirection. This makes the codebase smaller and easier to audit.
4. **Built for PHP 8.4.** Property hooks, typed properties everywhere, enums, fibers-ready. No PHP 7 compatibility baggage.
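
To illustrate point 4, PHP 8.4 property hooks replace getter/setter boilerplate with computed properties. A generic language example (not taken from Bootgly's codebase):

```php
<?php
// PHP 8.4 property hooks: a virtual property derived from another field.
// Generic language example; not Bootgly code.

final class Duration
{
    public function __construct(
        public int $milliseconds = 0,
    ) {}

    // Virtual property: no backing storage of its own. Reads compute the
    // value; writes update $milliseconds instead.
    public float $seconds {
        get => $this->milliseconds / 1000;
        set {
            $this->milliseconds = (int) round($value * 1000);
        }
    }
}

$d = new Duration(1500);
echo $d->seconds, PHP_EOL;       // 1.5
$d->seconds = 0.25;
echo $d->milliseconds, PHP_EOL;  // 250
```

Callers keep plain property syntax (`$d->seconds`), so a class can start with a public property and later add hooks without breaking its API, which is exactly the kind of thing a PHP 8.4-only codebase can lean on.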

It's still beta — not production-ready. But if you're tired of frameworks where `composer install` downloads 200 packages to serve a JSON response, take a look.

GitHub: [https://github.com/bootgly/bootgly](https://github.com/bootgly/bootgly)
Release: [https://github.com/bootgly/bootgly/releases/tag/v0.13.0-beta](https://github.com/bootgly/bootgly/releases/tag/v0.13.0-beta)
Patch: [https://github.com/bootgly/bootgly/releases/tag/v0.13.1-beta](https://github.com/bootgly/bootgly/releases/tag/v0.13.1-beta)

Happy to answer questions and take criticism.

https://redd.it/1slnw50
@r_php
Is Claude my permanent co-author?

I wanted to migrate an old PHP web app that I wrote by hand to a modern framework, and chose Symfony. I prepared some docs, watched some Symfony YouTube videos, and resisted getting started for months. Finally, I decided to see if Claude Code could get me over the hump. Well, I'm astounded by the result. Completely rebuilt in a solid Symfony framework in about 10 days. Works beautifully. I had Claude build documentation as well, but now I have a site whose internal wiring is really beyond my ability to manage responsibly. I can invoke Claude in the code base and pick up work at any time, but I couldn't maintain the system without Claude. I feel peculiar about it now: I'm the (human) author, but I have an AI partner that has to be part of the "team" going forward. I can't be the first person to get here. Any words of advice?

https://redd.it/1slxyp7
@r_php
How I evolved a PHP payment model from one table to DDD — channels, state machines, and hexagonal architecture

I got tired of every project reinventing the payment layer from scratch, so I tried to build a proper domain model in PHP and document the process.

Wrote about going from a single table to channels, state machines, and hexagonal architecture.

It's an experiment, not a final answer — curious how others tackle this.

https://corner4.dev/reinventing-payment-how-i-evolved-a-domain-model-from-one-table-to-ddd

https://redd.it/1sm3f80
@r_php
Anyone else get tired of rebuilding Filament resources every time admin requirements change?

I kept hitting the same pattern in Laravel / Filament projects:

the first version of the admin panel is usually fine, but later the data side keeps changing.

New content type.
More custom fields.
Better filtering.
Dashboards.
API requirements.
Tenant-specific behavior.
More exceptions.

At that point, every "small" change becomes another migration, another model, another Filament resource, and another layer of maintenance.

So I built a plugin called **Filament Studio** for Filament v5.

The idea is to let you create collections and fields at runtime, manage records through generated Filament UI, build dashboards, add advanced filtering, and expose APIs without rebuilding a brand-new schema layer every time requirements shift.

It also includes things I thought were important if this is going to be useful beyond a demo:

- authorization
- multi-tenancy
- versioning
- soft deletes
- custom field and panel extensibility

I know some people will immediately look at the EAV angle and prefer hand-built resources anyway, which is fair.

I am mostly curious about where other Laravel developers draw that line.

If you are building something with a stable schema, I still think hand-built resources make sense.

But if the admin/data model changes constantly, would you rather keep building each resource manually, or use something like this?

Repo if you want to look at it:

GitHub: https://github.com/flexpik/filament-studio

I am not looking for empty promotion here. I would rather hear the real objections or the kinds of projects where this would actually help.

https://redd.it/1smipbx
@r_php
I built a CLI that generates Symfony-compatible Twig templates from an Astro frontend project

Hello,

Sharing a tool I built for a workflow that comes up often on agency projects:

Astro frontend, Symfony backend, someone has to bridge the two.

Frontmatter Solo reads a constrained Astro project and generates Twig templates structured for Symfony:

frontmatter solo:build --adapter twig

Output drops directly into `templates/`.

The data contract follows the fm namespace. Your controller passes it as a single array.

INTEGRATION.md is generated automatically; it documents every Twig variable expected by each template. The Symfony developer reads it once, writes the controller, done.

Compatible with Symfony 5, 6, 7 (Twig 3.x).

Also works with Drupal and Craft CMS.

Check compatibility before buying (free):

npx @withfrontmatter/solo-check

$49 one-time https://www.frontmatter.tech/solo/symfony

Happy to discuss the fm namespace convention or the controller integration pattern.

https://redd.it/1smtnqq
@r_php
Two Composer command injection CVEs this week: the patch is one command, but there's a bigger picture here worth talking about

CVE-2026-40176 (CVSS 7.8) and CVE-2026-40261 (CVSS 8.8), both in Composer's Perforce VCS driver. The 8.8 one fires during `composer install` when pulling from source, so your CI pipeline is the actual attack surface. Public PoCs dropped today.

Fix: `composer self-update` to 2.9.6, or 2.2.27 if you're on LTS. One command, do it now.

The reason I'm posting beyond the PSA: this is the third significant PHP ecosystem CVE in about six weeks, and what's getting harder isn't patching; it's knowing what you actually need to care about before someone else tells you.

The same vulnerability comes in from four different feeds simultaneously: NVD, GitHub Advisories, OSV, and Packagist all report the same CVE with different IDs, different severity framings, and different context. And CVSS alone tells you very little. These two CVEs have PoCs live today, while plenty of 9.0+ CVEs with no exploitation evidence can sit in a backlog forever.

I've been building a tool called A.S.E. to deal with exactly this: it watches all the major feeds, deduplicates across them, cross-references against your actual composer.lock, and factors in exploit probability (EPSS) alongside CVSS, so you're not just triaging severity theater. It's a good starting point for anyone who wants help in these situations.
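
The triage idea itself is easy to sketch. FIRST.org serves EPSS scores over a public JSON API (`https://api.first.org/data/v1/epss?cve=...`); combining them with CVSS yields a more honest priority order than severity alone. Here is a minimal offline sketch with inline sample scores (the third CVE is hypothetical, the EPSS values are made up for illustration, and none of this is A.S.E.'s actual scoring logic):

```php
<?php
// Sketch: rank findings by exploit probability (EPSS) before raw CVSS.
// $epssPayload mirrors the shape of FIRST.org's EPSS API response, but
// the scores below are inline sample data, NOT live values.

function rank_findings(array $epssPayload, array $cvss): array
{
    $rows = [];
    foreach ($epssPayload['data'] as $entry) {
        $rows[] = [
            'cve'  => $entry['cve'],
            'epss' => (float) $entry['epss'],
            'cvss' => $cvss[$entry['cve']] ?? 0.0,
        ];
    }
    // Sort by EPSS first (is anyone actually exploiting this?),
    // then by CVSS as a tie-breaker (how bad is it if they do?).
    usort($rows, fn ($a, $b) =>
        [$b['epss'], $b['cvss']] <=> [$a['epss'], $a['cvss']]);
    return $rows;
}

$payload = ['data' => [
    ['cve' => 'CVE-2026-40261', 'epss' => '0.52'],  // public PoC: high EPSS
    ['cve' => 'CVE-2026-40176', 'epss' => '0.31'],
    ['cve' => 'CVE-2026-99999', 'epss' => '0.01'],  // hypothetical: scary CVSS, no exploitation
]];
$cvss = [
    'CVE-2026-40261' => 8.8,
    'CVE-2026-40176' => 7.8,
    'CVE-2026-99999' => 9.8,
];

foreach (rank_findings($payload, $cvss) as $r) {
    printf("%s  EPSS %.2f  CVSS %.1f\n", $r['cve'], $r['epss'], $r['cvss']);
}
```

Note how the hypothetical 9.8 lands at the bottom of the list: that is the "severity theater" point in concrete form.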

github.com/infinri/A.S.E

https://redd.it/1sn3an5
@r_php
PagibleAI 0.10: PHP CMS for developers AND editors

We just released Pagible 0.10, an open-source AI-powered CMS built as PHP composer package for Laravel applications:

* [https://pagible.com](https://pagible.com)

# What's new in 0.10

* MCP Server — Pagible ships with a built-in Model Context Protocol server. AI agents can create pages, manage content, and search your site programmatically. This makes Pagible one of the first CMS platforms where AI can directly manage your content through a standardized protocol.
* Customizable architecture — The codebase has been split into 9 independent sub-packages (core, admin, AI, GraphQL, search, MCP, theme, etc.). Install only what you need.
* Vuetify 4 admin panel — The admin backend has been upgraded to Vuetify 4 and optimized for WCAG accessibility, keyboard navigation and reduced bundle size.
* Significant performance work — This release focused heavily on database performance: optimized indexes, reduced query count, eager loading, optimized column selection, and faster page tree fetching.
* Rewritten fulltext search — Custom Scout engine supporting fulltext search in SQLite, MySQL/MariaDB, PostgreSQL, and SQL Server. Paginated results with improved relevance ranking.
* Named roles & JSON permissions — Moved from bitmask permissions to a readable JSON array system with configurable roles (e.g. editor, publisher, viewer, etc).
* Security hardening — Rate limiting on all endpoints, stricter security checks, and DoS protection for all inputs.

# What makes Pagible different

* API first — GraphQL and JSON:API endpoints out of the box. Build headless sites, mobile apps, or single-page applications without writing a single API route ... or use traditional templates and themes - just as you like.
* AI-native — MCP server for agent-driven content management, plus built-in AI features for content generation, translation, and image manipulation.
* Hierarchical pages — Nested set tree structure with versioning. Editors see drafts, visitors see published content.
* Multi-tenant — Global tenant scoping on all models out of the box.
* Small footprint — The entire codebase is deliberately kept small. No bloat, no unnecessary abstractions.
* LGPL-3.0 — Fully open source.
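
The nested-set model mentioned above is worth a quick illustration: each node stores a `[lft, rgt]` interval, and a whole subtree is fetched with a single range scan. A generic sketch with plain arrays (not Pagible's implementation):

```php
<?php
// Generic nested-set sketch (not Pagible's code): descendants are the
// nodes whose [lft, rgt] interval nests inside the ancestor's interval.

$pages = [
    ['title' => 'Home',    'lft' => 1, 'rgt' => 8],
    ['title' => 'Blog',    'lft' => 2, 'rgt' => 5],
    ['title' => 'Post #1', 'lft' => 3, 'rgt' => 4],
    ['title' => 'Contact', 'lft' => 6, 'rgt' => 7],
];

function descendants(array $nodes, array $ancestor): array
{
    // In SQL this is: WHERE lft > :lft AND rgt < :rgt ORDER BY lft
    return array_values(array_filter($nodes,
        fn ($n) => $n['lft'] > $ancestor['lft']
                && $n['rgt'] < $ancestor['rgt']));
}

foreach (descendants($pages, $pages[1]) as $n) {
    echo $n['title'], PHP_EOL;
}
```

The trade-off is the classic one: reads (whole page trees) are one query, while inserts and moves must renumber intervals, which fits a CMS where pages are read far more often than restructured.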

# Links

* Demo: [https://demo.pagible.com/](https://demo.pagible.com/cmsadmin)
* GitHub: [https://github.com/aimeos/pagible](https://github.com/aimeos/pagible)
* Website: [https://pagible.com](https://pagible.com)

Would love to hear your feedback and if you like it, give a star :-)

https://redd.it/1sn4r04
@r_php
Finally moved my PHP media processing to an async Celery (Python) pipeline. Here’s how I handled the cross-language "handshake."

**The Problem:** I was hit with the classic scaling wall: image processing inside request cycles. Doing background removal, resizing, and PDF generation in PHP during a file upload is a recipe for timeouts and a terrible UX. PHP just isn't the right tool for heavy lifting like `rembg` or `ReportLab`.

**The Setup:** I decided to move everything to an async pipeline using **PHP → Redis → Celery (Python) → Cloudinary**.

**The "Aha! 😤 " Moment:** The trickiest part was that PHP doesn't have a great native Celery client. I didn't want to overcomplicate the stack with a bridge, so I just looked at how Celery actually talks to Redis.

Turns out, Celery’s wire format is just JSON. I ended up manually constructing the Celery protocol messages in PHP and pushing them directly into the Redis list. As long as you follow the structure (headers, properties, body), the Python worker picks it up thinking it came from another Celery instance.
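
For anyone curious what those hand-built messages look like, here is a rough sketch in PHP. The field layout follows Celery's v2 message protocol as documented (a base64-encoded JSON body plus `headers` and `properties` envelopes, pushed onto a Redis list named after the queue), but the task name is made up and the exact structure should be verified against your Celery/kombu version; this is a sketch, not the OP's code:

```php
<?php
// Sketch: hand-built Celery v2 protocol message for the Redis broker.
// Field names follow Celery's documented message protocol; verify against
// your Celery/kombu version before relying on this.

function celery_message(string $task, array $args, string $queue = 'celery'): string
{
    $id = bin2hex(random_bytes(16));             // stand-in for a UUID

    // Body is base64(JSON([args, kwargs, embed])).
    $body = base64_encode(json_encode([
        $args,
        new stdClass(),                          // kwargs must encode as {}
        ['callbacks' => null, 'errbacks' => null, 'chain' => null, 'chord' => null],
    ]));

    return json_encode([
        'body'             => $body,
        'content-encoding' => 'utf-8',
        'content-type'     => 'application/json',
        'headers' => [
            'lang'      => 'php',                // informational only
            'task'      => $task,                // hypothetical task name below
            'id'        => $id,
            'root_id'   => $id,
            'parent_id' => null,
            'group'     => null,
        ],
        'properties' => [
            'correlation_id' => $id,
            'delivery_mode'  => 2,               // persistent
            'delivery_info'  => ['exchange' => '', 'routing_key' => $queue],
            'priority'       => 0,
            'body_encoding'  => 'base64',
            'delivery_tag'   => bin2hex(random_bytes(16)),
        ],
    ]);
}

// Push onto the queue list; the Celery worker pops it off the other end.
// Redis client elided here, e.g. $redis->lPush('celery', $msg) with phpredis.
$msg = celery_message('worker.remove_background', ['uploads/cat.png']);
echo $msg, PHP_EOL;
```

As long as the worker can JSON-decode the envelope and base64-decode the body, it treats the message as if another Celery instance had enqueued it, which is exactly the handshake described above.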

**The Pipeline:**

1. **PHP:** Enqueues the job and immediately returns a 202 to the user. No blocking.
2. **Redis:** Acts as the broker.
3. **Celery (Python):** Does the heavy lifting.
* **Background Removal:** `rembg` (absolute lifesaver).
* **Resizing:** `Pillow`.
* **PDFs:** `ReportLab`.
4. **Cloudinary:** Final storage for the processed media.
5. **Callback:** The worker hits a PHP API endpoint to let the app know the asset is ready.

**The Win:** The system is finally snappy. PHP just "enqueues and forgets."

**What I’m fixing in v2:**

* **Dead-letter queues:** Right now, if a job fails, it just logs. I need a better retry/recovery flow.
* **Queue Priority:** Moving heavy PDF tasks to a separate queue so they don't block simple image resizes.
* **Visibility:** Adding **Flower** to actually see what's happening in real-time.
* **Cleanup:** Automating the `/tmp` file purge on the worker side more aggressively.

**Curious if anyone else has gone the "manual protocol" route for cross-language Celery setups?** Is there a cleaner pattern I’m missing, or is this the standard way to bridge the two?

[**https://github.com/eslieh/grid-worker**](https://github.com/eslieh/grid-worker)

[**https://github.com/eslieh/grid**](https://github.com/eslieh/grid)

https://redd.it/1sn1wyh
@r_php
I built a CLI tool that lets your AI agents improve your query performance in a loop
https://redd.it/1sne1ui
@r_php
Sharing Community Feedback from The PHP Foundation

On behalf of The PHP Foundation, I’m excited to share the results of the feedback I’ve collected over the past few weeks. It will help inform The PHP Foundation’s Strategy for the rest of 2026 and into 2027.

There are a lot of opportunities for The PHP Foundation to extend our support into the PHP ecosystem, and I couldn’t be more excited! If you’re interested, you can read the post here:

https://thephp.foundation/blog/2026/04/16/integrating-community-feedback-into-foundation-strategy-part1/

https://redd.it/1snf018
@r_php