Byte by Byte
Bite your bit of tech information and news here. Discuss in our Chip Chat group!
V-JEPA 2 is the new iteration of the V-JEPA architecture, which according to Yann LeCun will replace Transformer-based LLMs for so-called "world models" - models that understand the real world rather than mere words.
GOOGLE THAT'S NOT WHAT I MEANT
๐Ÿ˜1๐Ÿคฃ1
Whoa.

AMD Research has unveiled a cutting-edge method for generating procedural, fully customizable tree geometry - running entirely on GPU using work graphs and mesh nodes. Over 150 parameters control everything from structure to seasonal changes, pruning, wind response, animation, and real-time edits.

Performance:
• 3.13ms per frame (RX 7900 XTX)
• Geometry size reduced from 34.8GB to 51KB per frame
• Continuous LOD like UE5's Nanite, targeting a stable 120 FPS
• Work graph uses up to a 1.5GB scratch buffer (varies by GPU)

Presentation (HPG 2025): YouTube ~7:14
Full paper: EG Digital Library

Chapeau to AMD Research and u/Bloodwyn1756 (Bastian Kuth)!
First IPC, then CIPC, then StiffGIPC, and now OGC: the University of Utah and NVIDIA have finally created a model that guarantees "penetration-free simulation of codimensional objects with minimal computational overhead". In practice, this means we can now accurately and efficiently simulate complex real-world interactions with things like clothing and fabric.

Video presentation of the paper: https://youtu.be/xxyniqSLJik
๐Ÿ‘1
A core reason for rewriting in Rust projects that have worked reliably for 20+ years seems to be giving developers the fun of fixing brand-new bugs.

In Ubuntu 25.10, unattended upgrades are broken due to a bug in the Rust reimplementation of GNU Coreutils. The only way to update is manually via apt update && apt upgrade.

https://bugs.launchpad.net/ubuntu/+source/unattended-upgrades/+bug/2129660
๐Ÿ˜1๐Ÿ—ฟ1
I don't know who needs to hear this, but if you have a PS5 lying around that hasn't been updated recently, you can now play pirated games and run homebrew on it pretty easily:

https://github.com/Gezine/Y2JB
One of the most complete and open breakdowns of how large language models are trained, covering scaling challenges, ablations, infrastructure design, GPU efficiency, and post-training pipelines.
Running the Linux kernel directly in your browser - without an emulator - is now a thing. Patches have been published that allow the kernel to be compiled straight into WebAssembly.

https://lore.kernel.org/lkml/[email protected]/

So how did the developers get around Wasm's inability to suspend tasks? They just spin up a new dedicated "CPU" (an actual Web Worker) for every single process and let the host OS do all the scheduling.
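The "dedicated CPU per process" trick can be sketched with Node's worker threads (a hypothetical illustration, not the actual kernel patches - those target real Web Workers and Wasm instances):

```typescript
// Hypothetical sketch: since Wasm cannot suspend a running task,
// give every "process" its own worker thread ("CPU") and let the
// host OS scheduler juggle them all.
import { Worker } from 'node:worker_threads';

function spawnProcess(pid: number): Promise<{ pid: number; status: string }> {
  // Each process runs on a dedicated worker; a blocking wait inside
  // one worker (e.g. Atomics.wait on shared memory) never stalls the
  // others, because the host OS preempts threads, not Wasm.
  const worker = new Worker(
    `const { parentPort, workerData } = require('node:worker_threads');
     // Stand-in for executing this process's Wasm instance.
     parentPort.postMessage({ pid: workerData.pid, status: 'exited' });`,
    { eval: true, workerData: { pid } }
  );
  return new Promise((resolve) => worker.once('message', resolve));
}

// Three "processes", each on its own worker, scheduled by the host OS.
Promise.all([1, 2, 3].map(spawnProcess)).then((procs) => {
  console.log(procs.map((p) => p.pid).join(','));
});
```

The trade-off is obvious: one OS thread per process is expensive, but it sidesteps Wasm's lack of stack switching entirely.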

You can try out the live demo here: https://joelseverin.github.io/linux-wasm/
๐Ÿ‘2
Alan Dye is leaving Apple to join Meta.
๐Ÿ˜1
๐˜ ๐˜ฐ๐˜ถ ๐˜ค๐˜ข๐˜ฏ ๐˜ต๐˜ฆ๐˜ญ๐˜ญ ๐˜ ๐˜ธ๐˜ข๐˜ด ๐˜ฆ๐˜น๐˜ต๐˜ณ๐˜ฆ๐˜ฎ๐˜ฆ๐˜ญ๐˜บ ๐˜ฃ๐˜ฐ๐˜ณ๐˜ฆ๐˜ฅ, when I asked Claude 4.6 Opus to go "batshit crazy". I just wanted to stress test it. The same line of prompting had resulted in Grok completely losing its mind, haemorrhaging useless tokens until it hit the maximum for that conversation. So I wanted to try making Claude go crazy as well.

What I didn't know was that this seemingly irrelevant conversation would allow me to:

• Access Claude's full runtime system prompt,
• Learn how it differs from the one Anthropic publishes online (spoiler: a lot),
• Discover how the training phase may instil behavioural biases into LLMs - to the point of enforcing the protection of copyrighted assets that were already made public by the copyright holder themselves,
• Uncover a method that might ensure repeatability of jailbreaks in clever frontier LLMs with memory access (logically fine-tuned prompting + memory reassurances).

The irony is that the system prompt was already on a 30k-star GitHub repo, and already partially published by Anthropic themselves. Claude spent 14 turns guarding a document that was never secret. The lock was never locked.

To be clear: none of this is groundbreaking research. The system prompt was already public, and smarter people have done deeper work on LLM alignment. I'm sharing this mostly because the conversation was genuinely hilarious and the process was too good not to document.

https://github.com/atsetilam/claude-system-prompt-research