PHP Reddit
Channel to sync with /r/PHP /r/Laravel /r/Symfony. Powered by awesome @r_channels and @reddit2telegram
Anyone else get tired of rebuilding Filament resources every time admin requirements change?

I kept hitting the same pattern in Laravel / Filament projects:

the first version of the admin panel is usually fine, but later the data side keeps changing.

New content type.
More custom fields.
Better filtering.
Dashboards.
API requirements.
Tenant-specific behavior.
More exceptions.

At that point, every "small" change becomes another migration, another model, another Filament resource, and another layer of maintenance.

So I built a plugin called **Filament Studio** for Filament v5.

The idea is to let you create collections and fields at runtime, manage records through generated Filament UI, build dashboards, add advanced filtering, and expose APIs without rebuilding a brand-new schema layer every time requirements shift.
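To make the "runtime collections and fields" idea concrete, here is a minimal sketch in Python. This is illustrative only, not Filament Studio's actual PHP/Eloquent implementation: the point is that when the schema is data, a new content type or field needs no migration, model, or resource class.

```python
# Illustrative EAV-style runtime schema: collections and fields are plain data,
# so "schema changes" are just method calls, not migrations.
class Collection:
    def __init__(self, name, fields):
        self.name = name
        self.fields = dict(fields)   # field name -> expected type
        self.records = []

    def add_field(self, name, ftype):
        # Requirement changed? Add a field at runtime, no migration required.
        self.fields[name] = ftype

    def create(self, **data):
        for key, value in data.items():
            if key not in self.fields:
                raise KeyError(f"unknown field: {key}")
            if not isinstance(value, self.fields[key]):
                raise TypeError(f"{key} must be {self.fields[key].__name__}")
        self.records.append(data)
        return data

posts = Collection("posts", {"title": str})
posts.create(title="Hello")
posts.add_field("views", int)        # the "small change" that normally costs a migration
posts.create(title="Again", views=3)
```

The trade-off the post alludes to is exactly this: you gain flexibility, but validation, indexing, and querying now live in application code instead of the database schema.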

It also includes things I thought were important if this is going to be useful beyond a demo:

- authorization
- multi-tenancy
- versioning
- soft deletes
- custom field and panel extensibility

I know some people will immediately look at the EAV angle and prefer hand-built resources anyway, which is fair.

I am mostly curious about where other Laravel developers draw that line.

If you are building something with a stable schema, I still think hand-built resources make sense.

But if the admin/data model changes constantly, would you rather keep building each resource manually, or use something like this?

Repo if you want to look at it:

GitHub: https://github.com/flexpik/filament-studio

I am not looking for empty promotion here. I would rather hear the real objections or the kinds of projects where this would actually help.

https://redd.it/1smipbx
@r_php
I built a CLI that generates Symfony-compatible Twig templates from an Astro frontend project

Hello,

Sharing a tool I built for a workflow that comes up often on agency projects:

Astro frontend, Symfony backend, someone has to bridge the two.

Frontmatter Solo reads a constrained Astro project and generates Twig templates structured for Symfony:

frontmatter solo:build --adapter twig

The output drops directly into templates/.

The data contract follows the fm namespace. Your controller passes it as a single array.

INTEGRATION.md is generated automatically; it documents every Twig variable expected by each template. The Symfony developer reads it once, writes the controller, and is done.

Compatible with Symfony 5, 6, 7 (Twig 3.x).

Also works with Drupal and Craft CMS.

Check compatibility before buying (free):

npx @withfrontmatter/solo-check

$49 one-time https://www.frontmatter.tech/solo/symfony

Happy to discuss the fm namespace convention or the controller integration pattern.

https://redd.it/1smtnqq
@r_php
Two Composer command injection CVEs this week: the patch is one command, but there's a bigger picture here worth talking about

CVE-2026-40176 (CVSS 7.8) and CVE-2026-40261 (CVSS 8.8), both in Composer's Perforce VCS driver. The 8.8 one fires during `composer install` when pulling from source, so your CI pipeline is the actual attack surface. Public PoCs dropped today.

Fix: `composer self-update` to 2.9.6, or 2.2.27 if you're on LTS. One command, do it now.

The reason I'm posting beyond the PSA: this is the third significant PHP ecosystem CVE in about six weeks, and what's getting harder isn't patching; it's knowing what you actually need to care about before someone else tells you.

The same vulnerability comes in from four different feeds simultaneously: NVD, GitHub Advisories, OSV, and Packagist all report the same CVE with different IDs, different severity framings, and different context. And CVSS alone tells you very little. These two CVEs have PoCs live today, while plenty of 9.0+ CVEs with no exploitation evidence can sit in a backlog forever.

I've been building a tool called A.S.E. to deal with exactly this: it watches all the major feeds, deduplicates across them, cross-references against your actual composer.lock, and factors in exploit probability (EPSS) alongside CVSS, so you're not just triaging severity theater. It's a good starting point for anyone who wants something to help in these situations.
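To illustrate why EPSS matters alongside CVSS, here is a toy triage sketch in Python. The weighting formula, the EPSS numbers, and the third CVE are all invented for illustration; this is not A.S.E.'s actual scoring or real feed data.

```python
# Toy priority score: CVSS severity scaled by exploit probability (EPSS),
# boosted when a public PoC exists. Numbers below are made up.
def priority(cvss, epss, has_poc=False):
    score = (cvss / 10) * (0.3 + 0.7 * epss)   # severity alone never dominates
    if has_poc:
        score *= 1.5                            # live PoC moves it up the queue
    return round(min(score, 1.0), 3)

cves = [
    {"id": "CVE-2026-40261",    "cvss": 8.8, "epss": 0.62, "poc": True},
    {"id": "CVE-2026-40176",    "cvss": 7.8, "epss": 0.45, "poc": True},
    {"id": "HYPOTHETICAL-9.4",  "cvss": 9.4, "epss": 0.01, "poc": False},
]
ranked = sorted(cves, key=lambda c: priority(c["cvss"], c["epss"], c["poc"]),
                reverse=True)
```

Under this toy weighting, the 8.8 with a live PoC outranks a hypothetical 9.4 that nobody is exploiting, which is the post's whole point about "severity theater".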

github.com/infinri/A.S.E

https://redd.it/1sn3an5
@r_php
PagibleAI 0.10: PHP CMS for developers AND editors

We just released Pagible 0.10, an open-source AI-powered CMS built as PHP composer package for Laravel applications:

* [https://pagible.com](https://pagible.com)

# What's new in 0.10

* MCP Server — Pagible ships with a built-in Model Context Protocol server. AI agents can create pages, manage content, and search your site programmatically. This makes Pagible one of the first CMS platforms where AI can directly manage your content through a standardized protocol.
* Customizable architecture — The codebase has been split into 9 independent sub-packages (core, admin, AI, GraphQL, search, MCP, theme, etc.). Install only what you need.
* Vuetify 4 admin panel — The admin backend has been upgraded to Vuetify 4 and optimized for WCAG accessibility, keyboard navigation and reduced bundle size.
* Significant performance work — This release focused heavily on database performance: optimized indexes, reduced query count, eager loading, optimized column selection, and faster page tree fetching.
* Rewritten fulltext search — Custom Scout engine supporting fulltext search in SQLite, MySQL/MariaDB, PostgreSQL, and SQL Server. Paginated results with improved relevance ranking.
* Named roles & JSON permissions — Moved from bitmask permissions to a readable JSON array system with configurable roles (e.g. editor, publisher, viewer, etc).
* Security hardening — Rate limiting on all endpoints, strict security checks, and DoS protection on all inputs.
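As a side note on the permissions change above, the bitmask-to-named-roles shift can be sketched like this (a hypothetical Python illustration, not Pagible's code):

```python
# Old scheme: permissions packed into an opaque integer bitmask.
VIEW, EDIT, PUBLISH = 1, 2, 4

def can_bitmask(mask, perm):
    return bool(mask & perm)

# New scheme: readable role -> permission-name arrays (JSON-friendly).
ROLES = {
    "viewer":    ["view"],
    "editor":    ["view", "edit"],
    "publisher": ["view", "edit", "publish"],
}

def can_role(role, perm):
    return perm in ROLES.get(role, [])

can_bitmask(VIEW | EDIT, PUBLISH)   # False, but you'd never guess from "3"
can_role("editor", "edit")          # True, and the config explains itself
```

The checks are equivalent; the win is that a stored role list is self-documenting where a stored integer is not.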

# What makes Pagible different

* API first — GraphQL and JSON:API endpoints out of the box. Build headless sites, mobile apps, or single-page applications without writing a single API route, or use traditional templates and themes, just as you like.
* AI-native — MCP server for agent-driven content management, plus built-in AI features for content generation, translation, and image manipulation.
* Hierarchical pages — Nested set tree structure with versioning. Editors see drafts, visitors see published content.
* Multi-tenant — Global tenant scoping on all models out of the box.
* Small footprint — The entire codebase is deliberately kept small. No bloat, no unnecessary abstractions.
* LGPL-3.0 — Fully open source.

# Links

* Demo: [https://demo.pagible.com/](https://demo.pagible.com/cmsadmin)
* GitHub: [https://github.com/aimeos/pagible](https://github.com/aimeos/pagible)
* Website: [https://pagible.com](https://pagible.com)

Would love to hear your feedback and if you like it, give a star :-)

https://redd.it/1sn4r04
@r_php
Finally moved my PHP media processing to an async Celery (Python) pipeline. Here’s how I handled the cross-language "handshake."

**The Problem:** I was hit with the classic scaling wall: image processing inside request cycles. Doing background removal, resizing, and PDF generation in PHP during a file upload is a recipe for timeouts and a terrible UX. PHP just isn't the right tool for heavy lifting like `rembg` or `ReportLab`.

**The Setup:** I decided to move everything to an async pipeline using **PHP → Redis → Celery (Python) → Cloudinary**.

**The "Aha! 😤 " Moment:** The trickiest part was that PHP doesn't have a great native Celery client. I didn't want to overcomplicate the stack with a bridge, so I just looked at how Celery actually talks to Redis.

Turns out, Celery’s wire format is just JSON. I ended up manually constructing the Celery protocol messages in PHP and pushing them directly into the Redis list. As long as you follow the structure (headers, properties, body), the Python worker picks it up thinking it came from another Celery instance.
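For anyone curious what "manually constructing the Celery protocol messages" looks like, here is a sketch in Python for clarity (the post does the equivalent with json_encode and lPush in PHP). The field names follow Celery's task protocol v2 as serialized by Kombu's Redis transport; verify them against your Celery version before relying on this.

```python
import base64
import json
import uuid

def build_celery_message(task_name, args=(), kwargs=None, queue="celery"):
    """Build a Celery protocol-v2 message envelope as a JSON string."""
    task_id = str(uuid.uuid4())
    # The body is itself JSON: [args, kwargs, embed], then base64-encoded.
    body = base64.b64encode(json.dumps(
        [list(args), kwargs or {},
         {"callbacks": None, "errbacks": None, "chain": None, "chord": None}]
    ).encode()).decode()
    return json.dumps({
        "body": body,
        "content-encoding": "utf-8",
        "content-type": "application/json",
        "headers": {"lang": "py", "task": task_name, "id": task_id,
                    "root_id": task_id, "parent_id": None, "group": None},
        "properties": {"correlation_id": task_id,
                       "reply_to": str(uuid.uuid4()),
                       "delivery_mode": 2,
                       "delivery_tag": str(uuid.uuid4()),
                       "body_encoding": "base64",
                       "delivery_info": {"exchange": "", "routing_key": queue}},
    })

# Enqueueing is then a plain LPUSH onto the queue's Redis list, e.g.:
#   r.lpush("celery", build_celery_message("tasks.remove_background",
#                                          ["/tmp/upload.png"]))
```

As long as headers, properties, and the base64 body match what Kombu expects, the Python worker cannot tell the message did not come from another Celery instance.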

**The Pipeline:**

1. **PHP:** Enqueues the job and immediately returns a 202 to the user. No blocking.
2. **Redis:** Acts as the broker.
3. **Celery (Python):** Does the heavy lifting.
* **Background Removal:** `rembg` (absolute lifesaver).
* **Resizing:** `Pillow`.
* **PDFs:** `ReportLab`.
4. **Cloudinary:** Final storage for the processed media.
5. **Callback:** The worker hits a PHP API endpoint to let the app know the asset is ready.
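Step 5 can be as simple as an authenticated JSON POST back to the app; the endpoint, payload shape, and bearer token below are hypothetical, not taken from the linked repos.

```python
import json
import urllib.request

def build_callback(job_id, asset_url, status="ready"):
    """Payload the worker sends once the processed asset is uploaded."""
    return {"job_id": job_id, "status": status, "asset_url": asset_url}

def notify_php(endpoint, payload, token):
    """POST the callback to the PHP API; the worker should retry on failure."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 # Authenticate the callback so the endpoint can't be spoofed.
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Signing or tokenizing the callback matters here: the PHP endpoint is otherwise an unauthenticated write path into your app.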

**The Win:** The system is finally snappy. PHP just "enqueues and forgets."

**What I’m fixing in v2:**

* **Dead-letter queues:** Right now, if a job fails, it just logs. I need a better retry/recovery flow.
* **Queue Priority:** Moving heavy PDF tasks to a separate queue so they don't block simple image resizes.
* **Visibility:** Adding **Flower** to actually see what's happening in real-time.
* **Cleanup:** Automating the `/tmp` file purge on the worker side more aggressively.

**Curious if anyone else has gone the "manual protocol" route for cross-language Celery setups?** Is there a cleaner pattern I’m missing, or is this the standard way to bridge the two?

[**https://github.com/eslieh/grid-worker**](https://github.com/eslieh/grid-worker)

[**https://github.com/eslieh/grid**](https://github.com/eslieh/grid)

https://redd.it/1sn1wyh
@r_php
I built a CLI tool that lets your AI agents improve your query performance in a loop
https://redd.it/1sne1ui
@r_php
Sharing Community Feedback from The PHP Foundation

On behalf of The PHP Foundation, I’m excited to share the results of the feedback I’ve collected over the past few weeks. It will help inform The PHP Foundation’s Strategy for the rest of 2026 and into 2027.

There are a lot of opportunities for The PHP Foundation to extend our support into the PHP ecosystem, and I couldn’t be more excited! If you’re interested, you can read the post here:

https://thephp.foundation/blog/2026/04/16/integrating-community-feedback-into-foundation-strategy-part1/

https://redd.it/1snf018
@r_php
PagibleAI 0.10: Laravel CMS for developers AND editors
https://redd.it/1sn4d1i
@r_php
I built a VS Code extension to make Laravel projects easier for AI tools to understand

I was working on some older Laravel projects recently and noticed something frustrating when using AI tools like Codex or Claude.

They struggle to understand the actual database schema of the app.

Even though all the information is technically there (models, migrations, relationships), the AI has to parse everything manually, which:

wastes tokens
misses relationships sometimes
makes responses inconsistent

So I built a small VS Code extension to solve this.

It scans:

app/Models
database/migrations

And generates a clean Markdown file with:

table structure
columns
foreign keys
Eloquent relationships

The idea is simple:

Instead of making AI read your entire codebase, you give it a structured summary of your schema.

This makes it easier to:

explain your project to AI
debug faster
onboard into older Laravel codebases
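The core idea can be sketched in a few lines. This is a toy Python version with a hardcoded sample migration; the extension's actual parser is presumably far more robust than a pair of regexes.

```python
import re

# A sample Laravel migration body (string literal for the sketch).
MIGRATION = """
Schema::create('posts', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->foreignId('user_id')->constrained();
    $table->timestamps();
});
"""

def migration_to_markdown(src):
    """Extract table and typed columns, emit a compact Markdown summary."""
    table = re.search(r"Schema::create\('(\w+)'", src).group(1)
    cols = re.findall(r"\$table->(\w+)\('(\w+)'\)", src)  # (type, name) pairs
    lines = [f"## {table}", "", "| column | type |", "|---|---|"]
    for ctype, name in cols:
        lines.append(f"| {name} | {ctype} |")
    return "\n".join(lines)

print(migration_to_markdown(MIGRATION))
```

A summary like this is a few dozen tokens per table, versus hundreds for the raw migration files, which is where the token savings the post mentions come from.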

I’m still experimenting with it, so I’d love feedback:

Would this actually fit into your workflow?
Anything you’d want it to include?

GitHub:
https://github.com/u-did-it/laravel-model-markdown-generator

https://redd.it/1snvodb
@r_php