DoomPosting
Degens Deteriorating
DeepSeek just did something wild.

They built an OCR system that compresses long text into vision tokens, literally turning paragraphs into pixels.

Their model, DeepSeek-OCR, achieves 97% decoding precision at 10× compression and still manages 60% accuracy even at 20×. That means one image can represent entire documents using a fraction of the tokens an LLM would need.

Even crazier? It beats GOT-OCR2.0 and MinerU2.0 while using up to 60× fewer tokens and can process 200K+ pages/day on a single A100.

This could solve one of AI’s biggest problems: long-context inefficiency.

Instead of paying more for longer sequences, models might soon see text instead of reading it.

The future of context compression might not be textual at all.

It might be optical

github.com/deepseek-ai/DeepSeek-OCR
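The token math behind that claim is easy to sketch. A toy calculation using the ratios quoted above (illustrative only; the numbers come from the post, not from running the repo):

```python
# Back-of-envelope token savings from optical compression, using the
# ratios quoted above (illustrative only; see the repo for real numbers).

def vision_tokens_needed(text_tokens: int, compression: float) -> int:
    """Vision tokens required to carry `text_tokens` worth of text
    at a given compression ratio."""
    return max(1, round(text_tokens / compression))

doc = 5_000  # a ~5k-token document

print(vision_tokens_needed(doc, 10))  # 500 (~97% decoding precision claimed)
print(vision_tokens_needed(doc, 20))  # 250 (~60% accuracy claimed)
```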

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
Early signs of trouble are emerging in the leveraged loan market:

The US leveraged loan market is now on track for its biggest monthly loss since at least 2022.

This comes as defaults of First Brands and Tricolor Auto in September have exposed possible weak underwriting standards and growing vulnerabilities in credit markets.

The collapse of First Brands alone has triggered over $4 billion in losses, affecting funds run by Blackstone, PGIM, Franklin Templeton, CIFC, and Wellington.

Despite these failures, leveraged loan issuances hit a record +$404 billion in Q3 2025.

The leveraged loan market now stands at an estimated $2 TRILLION.

Cracks in the credit market are becoming more apparent.

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
Never lose faith, crypto guys; maybe the woman of your life is still in a factory in Japan.

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
What did the 1944 Luftwaffe gunnery manual mean by this

(This is actually a genius way to get 97 IQ gunners to approximate trig functions)
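The manual page itself isn't reproduced here, but wartime gunnery rules of thumb like this typically lean on the small-angle approximation, sin θ ≈ θ in radians: a linear rule a gunner can apply without a trig table. A quick check of how far that approximation stretches (my own sketch, not from the manual):

```python
import math

# Small-angle approximation: for the lead angles a gunner actually uses,
# sin(theta) is close to theta in radians, so a linear rule of thumb
# (scale the angle, skip the trig table) stays accurate enough.
for deg in (5, 10, 20, 30):
    theta = math.radians(deg)
    exact = math.sin(theta)
    err_pct = abs(theta - exact) / exact * 100
    print(f"{deg:2d} deg: sin = {exact:.4f}, linear = {theta:.4f}, error = {err_pct:.1f}%")
```

Even at 30 degrees of lead, the linear rule is off by under 5%, well inside the slop of a gunner's aim.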

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
Listing this for $1,000,000 on opensea soon.

Shooters shoot.

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
NEW: SLERF CLAIMS THAT “SLERF REFUNDS FROM THE INFAMOUS SLERF BURN ARE OFFICIALLY COMPLETE” - BACK IN MARCH 2024, THE PROJECT ACCIDENTALLY BURNED $10M+ WORTH OF PRESALE MEMECOIN TOKENS MEANT TO BE AIRDROPPED

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
2025 NYC mayoral election poll

Among American-born New Yorkers:

Andrew Cuomo: 40%

Zohran Mamdani: 31%

Curtis Sliwa: 25%

Among foreign-born New Yorkers:

Zohran Mamdani: 62%

Andrew Cuomo: 24%

Curtis Sliwa: 12%

Demographics is destiny

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
Waiting in line for dinner is the new NYC trend — with eager diners flocking to hours-long queues at fashionable dives

New Yorkers yearn for the breadlines

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
We vibe coded right into this mess and we will vibe code out of it

just waiting for Cursor to get back online

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
Prediction markets just closed the first TWO BILLION DOLLAR WEEK EVER.

That's a new all-time high in notional volume!

Uptober.

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
World cannot survive with indian tech sapport!

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
This was Hasan’s previous dog. There is a visible open wound around the dog’s neck region below the collar.

THE BIGGEST LEFTIST STREAMER IS A GENERATIONAL DOG TORTURER

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
This might be the most disturbing AI paper of 2025

Scientists just proved that large language models can literally rot their own brains, the same way humans get brain rot from scrolling junk content online.

They fed models months of viral Twitter data (short, high-engagement posts) and watched their cognition collapse:

- Reasoning fell by 23%

- Long-context memory dropped 30%

- Personality tests showed spikes in narcissism & psychopathy

And get this: even after retraining on clean, high-quality data, the damage didn't fully heal.

The representational “rot” persisted.

It’s not just bad data → bad output.

It’s bad data → permanent cognitive drift.

The AI equivalent of doomscrolling is real. And it’s already happening.

Full study: llm-brain-rot.github.io

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶
The Experiment Setup:

Researchers built two data sets:

• Junk Data: short, viral, high-engagement tweets
• Control Data: longer, thoughtful, low-engagement tweets

Then they retrained Llama 3, Qwen, and others on each: same scale, same steps.

Only variable: data quality.
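A hypothetical sketch of that split. The field names ("text", "likes", "retweets") and thresholds here are invented for illustration; the paper defines its own engagement criteria:

```python
# Hypothetical reconstruction of the junk/control split described above.
# Field names and thresholds are invented for illustration only.

def split_tweets(tweets):
    junk, control = [], []
    for t in tweets:
        short = len(t["text"]) < 80                 # short post
        viral = t["likes"] + t["retweets"] > 1_000  # high engagement
        (junk if short and viral else control).append(t)
    return junk, control

sample = [
    {"text": "ratio + L + didn't ask", "likes": 5_000, "retweets": 900},
    {"text": "A longer, considered thread on measurement error in LLM evals, "
             "with sources and caveats attached.", "likes": 12, "retweets": 3},
]
junk, control = split_tweets(sample)
print(len(junk), len(control))  # 1 1
```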

The Cognitive Crash:

The results are brutal.

On reasoning tasks (ARC Challenge):
→ Accuracy dropped from 74.9 → 57.2

On long-context understanding (RULER):
→ Scores plunged from 84.4 → 52.3

That’s a measurable intelligence collapse.
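Restating those raw scores as relative declines (a quick check; the RULER numbers work out somewhat steeper than the 30% headline figure, which presumably averages over settings):

```python
# The reported drops, restated as relative declines (scores from the post).

def relative_drop(before: float, after: float) -> float:
    """Percentage decline from `before` to `after`."""
    return (before - after) / before * 100

print(f"ARC Challenge: {relative_drop(74.9, 57.2):.1f}% drop")  # 23.6% drop
print(f"RULER:         {relative_drop(84.4, 52.3):.1f}% drop")  # 38.0% drop
```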

🄳🄾🄾🄼🄿🤖🅂🅃🄸🄽🄶