Cache-to-Cache (C2C) lets multiple LLMs communicate directly through their KV-caches instead of text, transferring deep semantics without token-by-token generation. It fuses cache representations via a neural projector and gating mechanism for efficient inter-model exchange. The payoff: up to 10% higher accuracy than the individual models, 3–5% gains over text-based communication, and roughly 2× faster responses.
Cache-to-Cache: Direct Semantic Communication Between Large Language Models
arXiv.org
🔥2
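To make the "projector + gate" idea concrete, here is a rough PyTorch sketch of what gated KV-cache fusion could look like. The class name, shapes, and the single-linear projector are my assumptions for illustration, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class C2CFuser(nn.Module):
    """Hypothetical sketch: project the sharer model's cache states into the
    receiver's hidden space, then blend them with the receiver's own cache
    via a learned sigmoid gate. Not the paper's exact design."""

    def __init__(self, d_sharer: int, d_receiver: int):
        super().__init__()
        self.proj = nn.Linear(d_sharer, d_receiver)   # "neural projector"
        self.gate = nn.Sequential(                    # per-position gate in (0, 1)
            nn.Linear(2 * d_receiver, d_receiver),
            nn.Sigmoid(),
        )

    def forward(self, kv_sharer: torch.Tensor, kv_receiver: torch.Tensor) -> torch.Tensor:
        # kv_*: (batch, seq, d_model) — one layer's K (or V) cache slice
        projected = self.proj(kv_sharer)
        g = self.gate(torch.cat([projected, kv_receiver], dim=-1))
        # gated fusion: keep receiver's cache where the gate is closed
        return g * projected + (1 - g) * kv_receiver

fuser = C2CFuser(d_sharer=64, d_receiver=32)
fused = fuser(torch.randn(2, 5, 64), torch.randn(2, 5, 32))
print(fused.shape)  # torch.Size([2, 5, 32])
```

The point of the gate is that the receiver doesn't have to trust the sharer's semantics everywhere — it can fall back to its own cache position by position.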
i finally paid anthropic $20 and tried claude code and oh my god. fuck me. ai is insane these days. i mean of course it's still not me but i'd make that trade.
😁2
this year and last year, around this same time, my ~/.zsh_history got wiped and i lost 10k commands. almost every day i invoke something from there thousands of times one way or another. given that, and how much time i spend in the terminal, it's like the equivalent of 7% of my memories. and both times it was a loud reminder of how poorly i understand the systems i lean on so indiscriminately, naively, and with all my weight. very invigorating. feels like brain damage and memory loss but, like, cyberpunk.
😁5👾1
“Security teams should experiment with applying AI for defense”
…
Hmm. But who sells this kind of AI they’re talking about here?
anthropic’s paper smells like bullshit
🤯3
let's not even mention google's recent Big Sleep bombardment of ffmpeg 🤢😁
😁2
documentation is the code
https://github.com/mtdvio/every-programmer-should-know
pfhahahaha daaamn
😁3