🎊 SymfonyLive Paris 2026: the wrap-up! 🚀
https://symfony.com/blog/symfonylive-paris-2026-le-wrap-up?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sewmib
@r_php
Symfony
🎊 SymfonyLive Paris 2026: the wrap-up! 🚀 (Symfony Blog)
SymfonyLive Paris 2026: two days of learning, sharing, and connecting with the Symfony community. Thank you all for a memorable edition!
Scythe: an SQL Compiler and Linter, making ORMs redundant
Hi Peeps,
I released Scythe — an SQL compiler that generates type-safe database access code from plain SQL. If you're familiar with sqlc, the concept is similar — sqlc was a direct inspiration. Since Scythe treats SQL as the source of truth, it also ships with robust SQL linting and formatting — 93 rules covering correctness, performance, style, and naming conventions, powered by a built-in sqruff integration.
## Why compile SQL?
ORMs add unnecessary bloat and complexity. SQL as the source of truth, from which you generate type-safe and precise code, gives you most of the benefits of ORMs without the cruft and hard-to-debug edge cases.
This is common practice in Go, where sqlc is widely used. I personally also use it in Rust — I used sqlc with the community-provided Rust plugin, which is solid. But sqlc has limitations: type inference for complex joins, nullability propagation, and multi-language support are areas where I wanted more.
## What Scythe does differently
Scythe has a modular, trait-based architecture built in Rust. It uses engine-specific manifests and Jinja templates to make backends highly extensible. Out of the box it supports all major backend languages:
- Rust (sqlx, tokio-postgres)
- Python (psycopg3, asyncpg, aiomysql, aiosqlite)
- TypeScript (postgres.js, pg, mysql2, better-sqlite3)
- Go (pgx, database/sql)
- Java (JDBC)
- Kotlin (JDBC)
- C# (Npgsql, MySqlConnector, Microsoft.Data.Sqlite)
- Elixir (Postgrex, MyXQL, Exqlite)
- Ruby (pg, mysql2, sqlite3)
- PHP (PDO)
It also supports multiple databases — PostgreSQL, MySQL, and SQLite — with more planned.
Most languages have several driver options per database. For example, in Rust you can target sqlx or tokio-postgres. In Python, you can choose between psycopg3 (sync), asyncpg (async PG), aiomysql (async MySQL), or aiosqlite (async SQLite). The engine-aware architecture means adding a new database for an existing driver is often just a manifest file.
Beyond codegen, Scythe includes 93 SQL lint rules (22 custom + 71 via sqruff integration), SQL formatting, and a migration tool for sqlc users.
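To make the workflow concrete, here is what a query file for an sqlc-style compiler typically looks like (the annotation syntax below is sqlc's, which the author names as direct inspiration; Scythe's exact syntax may differ, and the table and query names are made up):

```sql
-- name: GetUserByEmail :one
SELECT id, email, created_at
FROM users
WHERE email = $1;
```

From a file like this, the compiler infers parameter and column types from the schema and emits one typed function per query in the target language, so the call site gets compile-time checks instead of stringly-typed SQL.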
- GitHub
- Documentation
- crates.io
https://redd.it/1seurw1
@r_php
GitHub
GitHub - Goldziher/scythe: SQL-to-code generator for 10 languages and 3 databases
SQL-to-code generator for 10 languages and 3 databases - Goldziher/scythe
Follow-up: Filament Compass
Hey everyone,
Earlier this week I posted about a repo I made called Filament Compass, which provides structured data to stop AI from hallucinating or using deprecated methods when generating code for Filament v5.
I wanted to drop a quick update: I've launched filament-compass-pkg, so you can now install this data directly into your projects via Composer!
Just to clarify how the two repositories work together:
- The original repo remains the main source of truth. You can still use it to create your own custom "compass" or even refine the base data to better suit your needs.
- The new pkg repo is the "result" repository that acts strictly as the provider for Composer/Packagist.
- I sync the data between them manually using a script available in the main repo. If you want to run the script yourself or customize it, just drop the source repositories (filament, demo, filament-compass-pkg) into the source folder of the main repo, and update the sync.sh (and/or PLAN.md) file if you introduce any new folder structures or instruction updates.
Installation & Documentation:
You can find the setup instructions in the README of the new package repo: https://github.com/aldesrahim/filament-compass-pkg
Note:
The new package is not tested thoroughly yet, but I've checked that Claude Code can successfully read the filament-compass skill and its documentation.
Let me know what you think, and happy coding!
https://redd.it/1sf21bu
@r_php
Reddit
From the laravel community on Reddit: Filament Compass: Better LLM prompts for Filament v5
A different approach to PHP debugging
https://ddless.com/blog/technical-journey-building-php-debugger
https://redd.it/1sf4foa
@r_php
ddless
ddless - PHP Development Workbench
A desktop workbench for PHP and Laravel developers. Debug, run code, test methods, and manage CLI scripts — all in one place. No Xdebug, no IDE plugins.
SymfonyLive Berlin 2026: Discover the Workshops!
https://symfony.com/blog/symfonylive-berlin-2026-discover-the-workshops?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sf3dc5
@r_php
Symfony
SymfonyLive Berlin 2026: Discover the Workshops! (Symfony Blog)
Join two days of hands-on workshops at SymfonyLive Berlin 2026 (April 21–22). From Symfony 8 to AI, Docker, and clean architecture. Boost your skills before the conference.
PHP Tek Returns to Chicago May 19-21, 2026
Hi PHPers... come join us for three days of fun, networking, and learning. PHP Tek is the longest-running PHP conference and is returning for its 18th annual show.
This year we will have our 3 normal tracks for PHP Tek, plus a 4th track dedicated to JavaScript presentations.
Use this link to get $100 off your ticket.
https://ti.to/phptek/phptek-2026/discount/reddit
https://redd.it/1sf5ptx
@r_php
phptek.io
PHP Tek 2026 - The Premier PHP Conference
Join us at PHP Tek 2026, the premier PHP conference featuring expert speakers, hands-on workshops, and networking opportunities.
👻 PHP Dead Code Detector is stable (after 4 years of development) and now supports Laravel too!
https://github.com/shipmonk-rnd/dead-code-detector
https://redd.it/1sfr3n7
@r_php
GitHub
GitHub - shipmonk-rnd/dead-code-detector: 💀 PHP unused code detection via PHPStan extension. Detects dead cycles, supports libs…
💀 PHP unused code detection via PHPStan extension. Detects dead cycles, supports libs like Laravel, Symfony, Twig, Doctrine, PHPUnit etc. Can automatically remove dead PHP code. Able to detect dead...
Feedback on my package Laravel Policy Engine
Hey gang,
I've been building a larger project but one of the most complex parts of it has been authorization. I really spent a lot of time thinking through the authorization model and wanted something like IAM policy documents but native to my Laravel app.
I am a long time Spatie fanboy and have used Roles & Permissions package for years, but for this particular build I'm working on, I don't think the data model was quite granular enough in the way I'm needing.
So over the last couple months I've been building Laravel Policy Engine which has been everything I've learned while working on my larger app. I've really tried to distill it down and harden it into a rock solid package.
The pitch is basically:
1. Declarative policy documents: your entire authorization config lives in version-controlled JSON.
2. Scoped permissions: roles are assigned per scope (team::5, plan::pro), not just globally. One user can be an admin in one org and a viewer in another, with no workarounds. Just declare it and it works.
3. Permission boundaries: hard ceilings per scope, like AWS permission boundaries. Even if someone holds admin, the boundary has the final say.
4. Deny overrides everything: !posts.delete beats every allow, across every role. No "last role wins" ambiguity.
5. Wires into Laravel's Gate, so $user->can(), @can, authorize(), and can: middleware all just work.
6. Interface-driven: every component is an interface. Swap the Eloquent stores for DynamoDB, or the evaluator for a remote policy service; the API never changes.
Define your authorization as declarative JSON documents and import them the way you'd manage AWS IAM policies:
{
  "version": "1.0",
  "permissions": [
    "posts.read",
    "posts.create",
    "posts.update.own",
    "posts.delete.any"
  ],
  "roles": [
    {
      "id": "editor",
      "name": "Editor",
      "permissions": [
        "posts.read",
        "posts.create",
        "posts.update.own",
        "!posts.delete.any"
      ]
    }
  ],
  "boundaries": [
    {
      "scope": "plan::free",
      "max_permissions": ["posts.read", "comments.read"]
    },
    {
      "scope": "plan::pro",
      "max_permissions": ["posts.*", "comments.*", "analytics.*"]
    }
  ]
}
I've really tried to put this thing through the wringer, so any feedback is very welcome. Worst case it has a user base of 1 (me, haha), but if it's helpful to anyone else I wanted to share.
https://redd.it/1sg2zym
@r_php
GitHub
GitHub - dynamik-dev/laravel-policy-engine
Contribute to dynamik-dev/laravel-policy-engine development by creating an account on GitHub.
C-level APCu key isolation based on FPM pool names (Zero-allocation)
Hey everyone,
APCu is arguably the best in-memory key-value store for single-node PHP applications. It’s blazingly fast because it runs within PHP's own master process. But it has one massive, well-known architectural flaw in multi-tenant environments: It lacks pool isolation.
If you run multiple independent applications on the same server, each in their own PHP-FPM pool (with their own system users), they still share the exact same APCu memory segment. Pool A can read, modify, or delete Pool B's keys.
The standard solution is relying on PHP developers to manually prefix their keys (e.g., $cache->set('app1_config')). Not only is this annoying to maintain, it offers zero security if an application is compromised: a malicious script can simply iterate over and modify its neighbor's cache.
I decided to fix this at the C level.
I wrote a patch for the APCu extension that introduces a transparent memory hook. It automatically namespaces every cache key based on the active PHP-FPM pool, completely invisible to the PHP userland.
How it works under the hood (The C Magic):
Instead of allocating new heap memory (malloc/free) on every web request—which would destroy APCu's legendary speed—I engineered a zero-allocation memory reuse strategy:
Out-of-Band Pool ID: When an FPM worker spawns, the C code reads /proc/self/cmdline to safely extract the exact pool name (falling back to geteuid() if procfs is restricted).
Worker-Lifetime Persistence: On the worker's very first APCu call, it allocates a single, persistent zend_string buffer (default 256 bytes) that survives the request shutdown and is immune to PHP's garbage collector.
Raw memcpy & Zend Spoofing: On every subsequent cache request, the code uses a fast memcpy to drop the user's requested key directly into this persistent buffer right after the static pool prefix. It then mutates ZSTR_LEN and forcefully resets the hash (h = 0) to trick APCu into recalculating the hash for the new, secured string.
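The patch itself is C against APCu/Zend internals; as a purely illustrative userland model of the two key steps (pool-name extraction from /proc/self/cmdline and key prefixing), here is the equivalent logic in Python. The cmdline format assumes FPM's usual process title, and the underscore separator matches the post's pool_A_db_config example:

```python
def read_pool_name(cmdline: bytes) -> str:
    """Extract the pool name from an FPM worker's /proc/self/cmdline,
    which typically reads b'php-fpm: pool www\\x00'."""
    text = cmdline.rstrip(b"\x00").decode()
    marker = "pool "
    i = text.find(marker)
    return text[i + len(marker):] if i != -1 else "unknown"

def namespaced_key(pool: str, user_key: str) -> str:
    """The key layout the patch produces in shared memory:
    static pool prefix, then the user's requested key."""
    return f"{pool}_{user_key}"
```

So a script in pool_A storing db_config physically lands under pool_A_db_config; the C version gets the same result without per-request allocation by reusing one persistent buffer.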
The Result:
A script in Pool A calls apcu_store('db_config', $data). Pool B calls the exact same thing. In physical RAM, they are securely locked away as pool_A_db_config and pool_B_db_config. No application intervention required. Zero performance penalty.
I've documented the exact architecture, installation instructions, and how to maintain the patch on future APCu releases.
GitHub Repo: https://github.com/Samer-Al-iraqi/apcu-fpm-pool-isolation
I'd love to hear feedback from other extension developers or anyone dealing with shared-hosting/multi-tenant PHP architectures!
https://redd.it/1sg9rln
@r_php
GitHub
GitHub - Samer-Al-iraqi/apcu-fpm-pool-isolation: APCu C-level key namespacing for PHP-FPM: Isolation of key spaces by pool name…
APCu C-level key namespacing for PHP-FPM: Isolation of key spaces by pool name to prevent collisions and cross-pool data tampering - Samer-Al-iraqi/apcu-fpm-pool-isolation
Flow PHP PostgreSql Symfony Bundle
/r/PHP/comments/1sfmlha/flow_php_postgresql_symfony_bundle/
https://redd.it/1sfmlwt
@r_php
Reddit
From the symfony community on Reddit: Flow PHP PostgreSql Symfony Bundle
Posted by norbert_tech - 9 votes and 8 comments
I got tired of importing themes and tweaking CSS by hand, so I built a visual theme builder for my Laravel Starter Kit
[A complete theme builder for Saucebase laravel Starter kit.](https://reddit.com/link/1sgp4dn/video/ory7ede9z5ug1/player)
Hey everyone,
I've been using [tweakcn](https://tweakcn.com/) to generate themes for my projects, and it's a great tool and a big inspiration for what I ended up building. But the workflow was always the same: generate a theme there, export it, import it into my project, then spend ages manually adjusting variables until everything actually looked right in context.
So I decided to build a visual theme builder directly into my project. If you're not familiar, I'm working on an open-source module-first Laravel SaaS boilerplate called Saucebase ([intro post here](https://www.reddit.com/r/laravel/comments/1ri1uc0/open_sourcing_a_modulefirst_laravel_saas/)). The theme builder is one of the modules.
Here's what it does:
* Live editor with color pickers, font selectors, and shadow and radius sliders; you see changes instantly in your actual app, not a separate preview
* Dark/light mode support with per-field sync (link a value across modes or keep them independent)
* Shadow system where you tweak 6 base values and it computes 8 shadow scale strings automatically using `color-mix()`
* Same idea for border radius and letter-spacing: one base value, computed scale
* Google Fonts loaded on demand (it uses a static list for now; I may integrate the Google Fonts API later)
* 15 built-in themes (named after food to keep the joke with the name: beetroot, coffee, kiwi…)
* When you're happy, save the result as JSON, run `php artisan saucebase:theme:apply`, rebuild, and done! Happy days
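For a sense of what a computed shadow scale entry might look like in the generated stylesheet (the variable names here are hypothetical, not Saucebase's actual ones; only the `color-mix()` technique is from the post):

```css
:root {
  /* base values the editor exposes */
  --shadow-color: 220 3% 15%;
  --shadow-opacity: 0.1;
  --shadow-x: 0px;
  --shadow-y: 1px;
  --shadow-blur: 3px;
  --shadow-spread: 0px;

  /* one computed entry of the shadow scale, derived via color-mix() */
  --shadow-sm: var(--shadow-x) var(--shadow-y) var(--shadow-blur) var(--shadow-spread)
    color-mix(in srgb, hsl(var(--shadow-color)) calc(var(--shadow-opacity) * 100%), transparent);
}
```

Because the scale is derived from the base values, changing one slider in the editor updates every shadow tier consistently.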
Important to note: **this is a developer tool, not an end-user feature.** You use it in your dev environment to design and bake your theme, then commit the result. In production, it's just plain CSS variables, no runtime overhead, no third-party dependency.
The most fun part to build was the dark/light mode editing experience. You can edit both modes from the same screen and toggle between them live. And for any color field, there's a little sync toggle, lock it and when you change that color in light mode, it automatically mirrors to dark mode (or vice versa). Sounds simple but getting the sync state right, deciding what should link by default and what shouldn't, and making it all feel smooth took way more iterations than I'd like to admit.
You can play with it on the demo: [https://demo.saucebase.dev/](https://demo.saucebase.dev/)
Documentation is still WIP, but the editor itself should be pretty self-explanatory. Would love to hear what you think, especially around the UX of the editor and whether the workflow makes sense. Open to feedback and suggestions.
https://redd.it/1sgp4dn
@r_php
Reddit
From the laravel community on Reddit
How are people linking Stripe payments back to the original visitor session in PHP apps
I'm trying to answer a simple question for a small Laravel SaaS I run: which traffic sources actually produce paying users.
Traffic itself is easy. GA4, Plausible, whatever; they all show pageviews, referrers, and UTMs. The annoying part is when the user leaves your app for Stripe Checkout. Once payment completes, the signal you get back is a server-side webhook. At that point you're holding a Stripe event, not the browser session that originally came from Twitter or HN or Google. So the problem becomes: how do you reliably connect those two worlds?
The most straightforward approach I came up with was attaching a visitor id to the checkout session. The flow looks like this:
1. small JS snippet generates visitor_id
2. store source + UTM info in DB
3. attach visitor_id to Stripe metadata
4. when checkout.session.completed fires, map payment to visitor
That works, but it means you end up maintaining a mini attribution system (sessions table, source storage, webhook mapping). I use Plausible for traffic analytics because it's simple and privacy-friendly, but it doesn't solve the visitor-to-Stripe revenue join either. While searching I found Faurya, which basically productizes this exact idea: it tracks the visitor session plus source data and connects it to Stripe events so you can see which channels actually produce revenue. How are people here handling this in real projects? Are you storing visitor IDs in Stripe metadata, pushing conversion events back into GA4, doing attribution inside your own DB, or just not worrying about it at this stage?
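The four-step flow above can be sketched in plain Python (no real Stripe calls; the event shape follows Stripe's checkout.session.completed payload, and everything else, including the function names, is illustrative):

```python
# In-memory stand-ins for the sessions table and payment mapping.
visitors = {}   # visitor_id -> first-touch attribution data
payments = []   # resolved (payment, source) records

def record_visit(visitor_id, referrer, utm_source):
    # Steps 1-2: the JS snippet generates visitor_id; the server
    # stores first-touch source info keyed by it.
    visitors.setdefault(visitor_id, {"referrer": referrer, "utm_source": utm_source})

def checkout_metadata(visitor_id):
    # Step 3: attach visitor_id when creating the Checkout Session,
    # e.g. pass this dict as the session's metadata.
    return {"visitor_id": visitor_id}

def handle_webhook(event):
    # Step 4: map checkout.session.completed back to the original visitor.
    if event["type"] != "checkout.session.completed":
        return None
    session = event["data"]["object"]
    vid = session.get("metadata", {}).get("visitor_id")
    record = {"amount": session["amount_total"], "source": visitors.get(vid)}
    payments.append(record)
    return record
```

The join itself is trivial once the visitor id survives the round trip through Stripe metadata; the real maintenance burden is everything around it (session expiry, multi-touch attribution, webhook retries).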
https://redd.it/1sgqhdg
@r_php
Faurya
Faurya - Privacy-First Web Analytics
Privacy-first web analytics that helps you understand your visitors and grow your business.
SymfonyLive Berlin 2026: “Finding security vulnerabilities with static analysis.”
https://symfony.com/blog/symfonylive-berlin-2026-finding-security-vulnerabilities-with-static-analysis?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sgruvh
@r_php
Symfony
SymfonyLive Berlin 2026: “Finding security vulnerabilities with static analysis.” (Symfony Blog)
Security issues aren’t always visible in code reviews. In “Finding security vulnerabilities with static analysis”, Nic Wortel shows how to automatically detect common vulnerabilities in Symfony…
SymfonyLive Berlin 2026: “How native lazy objects will change Doctrine and Symfony forever”
https://symfony.com/blog/symfonylive-berlin-2026-how-native-lazy-objects-will-change-doctrine-and-symfony-forever?utm_medium=feed&utm_source=Symfony%20Blog%20Feed
https://redd.it/1sgorh3
@r_php
Symfony
SymfonyLive Berlin 2026: “How native lazy objects will change Doctrine and Symfony forever” (Symfony Blog)
PHP 8.4 introduces native lazy objects — and they could change Doctrine and Symfony forever. In this talk, Benjamin Eberlei explains how this new feature simplifies lazy loading and could reshape t…
How to set up automatic SSL for every site in a multi-site CMS —
wildcard subdomains + custom domains, zero manual cert management
/r/cms/comments/1sgubts/how_to_set_up_automatic_ssl_for_every_site_in_a/
https://redd.it/1sgudgg
@r_php
Reddit
From the PHP community on Reddit: How to set up automatic SSL for every site in a multi-site CMS —
wildcard subdomains + custom…
Posted by BuildWithTall - 0 votes and 8 comments
Flow PHP PostgreSQL Symfony Bundle
Working with PHP, PostgreSQL and Symfony?
You might want to check out the Flow PHP Symfony PostgreSQL Bundle - it's the latest package I have been working on as part of the Flow PHP project.
https://flow-php.com/documentation/components/bridges/symfony-postgresql-bundle/
Features:
- query builder with full PostgreSQL syntax support
- migrations
- schema definition in PHP/YAML
- SQL AST parser/deparser
- client that supports static-analysis type narrowing, no more array<mixed> returns
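To give a sense of what type narrowing buys you over a raw driver, here is a purely hypothetical sketch; the typed-client call shown in the comment is illustrative only, not Flow PHP's actual API (see the documentation link above for the real interface):

```php
<?php
// Plain PDO: every row comes back as array<mixed>, so a static analyser
// cannot tell that $row['id'] is an int and $row['email'] is a string.
$pdo = new PDO('pgsql:host=localhost;dbname=app');
$row = $pdo->query('SELECT id, email FROM users')->fetch(PDO::FETCH_ASSOC);

// A typed client can instead expose results that analysers narrow per
// column, e.g. list<array{id: int, email: string}> - hypothetical call:
// $users = $client->fetchAll('SELECT id, email FROM users');
```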
https://redd.it/1sfmlha
@r_php
Flow-Php
Documentation - Flow PHP - Data Processing Framework
Array intersection benchmarks
I’m trying to optimize my hot code path where array intersection is used a lot. I got curious and decided to compare the various intersection algorithms that I know of.
<?php
// Source - https://stackoverflow.com/a/9276284
// Posted by kingmaple, modified by community. See post 'Timeline' for change history
// Retrieved 2026-04-08, License - CC BY-SA 4.0
// Source - https://stackoverflow.com/a/53203232
// Posted by slaszu, modified by community. See post 'Timeline' for change history
// Retrieved 2026-04-08, License - CC BY-SA 4.0
ini_set('memory_limit', '2048M');
function formatBytes(int $bytes): string {
$units = ['B', 'KB', 'MB', 'GB'];
$i = 0;
while ($bytes >= 1024 && $i < count($units) - 1) {
$bytes /= 1024;
$i++;
}
return sprintf("%.2f %s", $bytes, $units[$i]);
}
function benchmark(callable $fn, string $label): array {
gc_collect_cycles();
gc_mem_caches();
memory_reset_peak_usage();
$mem = -memory_get_peak_usage();
$time = -hrtime(true);
$fn();
$time += hrtime(true);
$mem += memory_get_peak_usage();
return [
'label' => $label,
'time_ms' => $time / 1e6,
'mem_used' => $mem,
];
}
function manual_intersect($arrayOne, $arrayTwo) {
$index = array_flip($arrayOne);
foreach ($arrayTwo as $value) {
if (isset($index[$value])) {
unset($index[$value]);
}
}
foreach ($index as $value => $key) {
unset($arrayOne[$key]);
}
return $arrayOne;
}
function flipped_intersect($arrayOne, $arrayTwo) {
$index = array_flip($arrayOne);
$second = array_flip($arrayTwo);
$x = array_intersect_key($index, $second);
return array_flip($x);
}
function runBenchmarks(int $n): void {
echo "\n=== Array Intersection Benchmark for " . number_format($n) . " elements ===\n";
// Generate test arrays
$one = [];
$two = [];
for ($i = 0; $i < $n; $i++) {
$one[] = rand(0, 1000000);
$two[] = rand(0, 100000);
$two[] = rand(0, 10000);
}
$one = array_unique($one);
$two = array_unique($two);
$results = [];
$results[] = benchmark(
fn() => $res = manual_intersect($one, $two),
'manual_intersect()'
);
$results[] = benchmark(
fn() => $res = array_intersect($one, $two),
'array_intersect()'
);
$results[] = benchmark(
fn() => $res = flipped_intersect($one, $two),
'flipped_intersect()'
);
// --- Print Table ---
echo str_repeat('-', 60) . "\n";
printf("%-25s | %-14s | %-15s\n", 'Method', 'Time (ms)', 'Memory');
echo str_repeat('-', 60) . "\n";
foreach ($results as $r) {
printf("%-25s | %11.3f ms | %15s\n",
$r['label'],
$r['time_ms'],
formatBytes($r['mem_used'])
);
}
echo str_repeat('-', 60) . "\n";
}
// Run for various sizes
foreach ([20, 20000, 200000, 1000000] as $n) {
runBenchmarks($n);
}
I ran this on PHP 8.4 on a Core i7-11700F.
=== Array Intersection Benchmark for 20 elements ===
------------------------------------------------------------
Method | Time (ms) | Memory
------------------------------------------------------------
manual_intersect() | 0.007 ms | 1.98 KB
array_intersect() | 0.029 ms | 3.02 KB
flipped_intersect() | 0.002 ms | 3.97 KB
------------------------------------------------------------
=== Array Intersection Benchmark for 20,000 elements ===
------------------------------------------------------------
Method | Time (ms) | Memory
------------------------------------------------------------
manual_intersect() | 1.169 ms | 1.75 MB
array_intersect() | 41.300 ms | 1.88 MB
flipped_intersect() | 0.634 ms | 2.55 MB
------------------------------------------------------------
=== Array Intersection Benchmark for 200,000 elements ===
------------------------------------------------------------
Method | Time (ms) | Memory
------------------------------------------------------------
manual_intersect() | 8.781 ms | 16.00 MB
array_intersect() | 290.759 ms | 16.00 MB
flipped_intersect() | 6.196 ms | 20.00 MB
------------------------------------------------------------
=== Array Intersection Benchmark for 1,000,000 elements ===
------------------------------------------------------------
Method | Time (ms) | Memory
------------------------------------------------------------
manual_intersect() | 35.547 ms | 58.00 MB
array_intersect() | 882.681 ms | 42.00 MB
flipped_intersect() | 26.764 ms | 58.00 MB
------------------------------------------------------------
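One caveat worth adding (my note, not part of the original benchmark): the flip-based approach only works when the values are valid array keys (int or string), and `array_flip()` silently drops duplicate values. A minimal sketch of the trick and its behavior:

```php
<?php
// Hash-based intersection: array_flip() turns values into keys, so
// array_intersect_key() can use O(1) hash lookups instead of
// array_intersect()'s value comparisons.
$a = [1, 2, 3, 4];
$b = [3, 4, 5];

$result = array_flip(array_intersect_key(array_flip($a), array_flip($b)));
var_dump($result); // [2 => 3, 3 => 4] - original keys from $a preserved

// Caveats: array_flip() requires int|string values and deduplicates,
// so this shortcut is only safe for arrays of unique scalar values.
```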
The built-in functions mock me!
https://redd.it/1sh13gh
@r_php
Reddit
From the PHP community on Reddit
Explore this post and more from the PHP community
I got tired of coding the same CRUDs and admin panels for years, so I open-sourced my own PHP framework (built on CodeIgniter 4)
Hey everyone.
If you build software for the educational or administrative sector, you know the drill: ever-changing requirements, massive databases, and the headache of rewriting the exact same logic for views, tables, pagination, and permissions for every new system.
It got to a point where my job felt like 80% repetitive boilerplate and 20% actual business logic.
To fix this and keep my sanity, I decided to build a higher-level layer leveraging the speed of CodeIgniter 4 and MariaDB. The core philosophy is simple: Configuration over Programming. I wanted to be able to define a "Data Dictionary" (a simple array) and have the system automatically render the dashboard, filters, data exports, and handle security (SQLi, XSS, RBAC) without touching a single manual view.
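To make the "Data Dictionary" idea concrete, here is a hypothetical sketch of what such a configuration array could look like. The keys and structure below are invented for illustration and are not Ragnos's actual schema; see the official docs for the real format:

```php
<?php
// Hypothetical data dictionary: a declarative description of one module.
// Key names are illustrative only, not Ragnos's real configuration format.
$students = [
    'table'  => 'students',
    'title'  => 'Student Registry',
    'fields' => [
        'id'          => ['type' => 'int',    'label' => 'ID', 'readonly' => true],
        'full_name'   => ['type' => 'string', 'label' => 'Full name', 'searchable' => true],
        'enrolled_at' => ['type' => 'date',   'label' => 'Enrollment date'],
    ],
    'permissions' => ['role' => 'registrar'], // RBAC: who may access the module
    'exports'     => ['csv', 'xlsx'],         // auto-generated data exports
];
```

The appeal of this style is that the framework, not the developer, turns the array into views, filters, pagination and access checks, which is also what makes it easy for an LLM to generate whole modules.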
The result is Ragnos, a framework I use daily for production systems, which I've decided to release 100% Open Source for the community.
Also, because everything is based on configuration arrays, its declarative architecture is perfect for using AI (ChatGPT/Claude) to generate entire modules in seconds.
Where to check it out? You can find the project's philosophy, initial docs, and the direct link to the GitHub repository here: 🔗**ragnos.build**
For those who want to dive deep into the architecture or implement it at an enterprise level, I also just published the complete official manual (Ragnos from Zero to Pro) on Leanpub, but the heart of this launch is the free open-source tool.
I’d love for you to take a look at the code, install it, break it, and give me your feedback. If you find the tool useful, dropping a star on the GitHub repo helps tremendously with project visibility.
Thanks for reading and happy coding!
https://redd.it/1sh28gf
@r_php
ragnos.build
Ragnos Framework - The Declarative PHP Solution
Ragnos: A powerful declarative framework for modern PHP development based on CodeIgniter 4.