Early signs of trouble are emerging in the leveraged loan market:
The US leveraged loan market is now on track for its biggest monthly loss since at least 2022.
This comes as defaults of First Brands and Tricolor Auto in September have exposed possible weak underwriting standards and growing vulnerabilities in credit markets.
The collapse of First Brands alone has triggered over $4 billion in losses, affecting funds run by Blackstone, PGIM, Franklin Templeton, CIFC, and Wellington.
Despite these failures, leveraged loan issuance hit a record $404 billion in Q3 2025.
The leveraged loan market now stands at an estimated $2 TRILLION.
Cracks in the credit market are becoming more apparent.
π³πΎπΎπΌπΏπ€π π πΈπ½πΆ
Never lose faith, crypto guys; maybe the woman of your life is still in Japan’s factory
What did the 1944 Luftwaffe gunnery manual mean by this
(This is actually a genius way to get 97 IQ gunners to approximate trig functions)
Listing this for $1,000,000 on OpenSea soon.
Shooters shoot.
NEW: SLERF CLAIMS THAT “SLERF REFUNDS FROM THE INFAMOUS SLERF BURN ARE OFFICIALLY COMPLETE” - BACK IN MARCH 2024, THE PROJECT ACCIDENTALLY BURNED $10M+ WORTH OF PRESALE MEMECOIN TOKENS MEANT TO BE AIRDROPPED
2025 NYC mayor election poll
Among American born New Yorkers:
Andrew Cuomo: 40%
Zohran Mamdani: 31%
Curtis Sliwa: 25%
Among Foreign born New Yorkers:
Zohran Mamdani: 62%
Andrew Cuomo: 24%
Curtis Sliwa: 12%
Demographics is destiny
Waiting in line for dinner is the new NYC trend, with eager diners flocking to hours-long queues at fashionable dives
New Yorkers yearn for the breadlines
We vibe coded right into this mess and we will vibe code out of it
just waiting for cursor to get back online
Prediction markets just closed the first TWO BILLION DOLLAR WEEK EVER.
That's a new notional volume all-time high!
Uptober.
This was Hasanβs previous dog. There is a visible open wound around the dogβs neck region below the collar.
THE BIGGEST LEFTIST STREAMER IS A GENERATIONAL DOG TORTURER
This might be the most disturbing AI paper of 2025
Scientists just showed that large language models can get “brain rot” the same way humans do from scrolling junk content online.
They fed models months of viral Twitter data (short, high-engagement posts) and watched their cognition collapse:
- Reasoning fell by 23%
- Long-context memory dropped 30%
- Personality tests showed spikes in narcissism & psychopathy
And get this: even after retraining on clean, high-quality data, the damage didn’t fully heal.
The representational “rot” persisted.
It’s not just bad data → bad output.
It’s bad data → permanent cognitive drift.
The AI equivalent of doomscrolling is real. And it’s already happening.
Full study: llm-brain-rot.github.io
The Experiment Setup:
Researchers built two data sets:
β’ Junk Data: short, viral, high-engagement tweets
β’ Control Data: longer, thoughtful, low-engagement tweets
Then they retrained Llama 3, Qwen, and others on each set: same scale, same training steps.
Only variable: data quality.
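The junk-vs-control split described above can be sketched as a toy filter. The thresholds here (80 characters, 1,000 likes) and the helper name `classify_tweet` are illustrative assumptions, not the study’s actual selection criteria:

```python
# Toy sketch of a junk-vs-control tweet split.
# Thresholds are assumptions for illustration only.

def classify_tweet(text: str, engagement: int,
                   max_junk_len: int = 80,
                   min_junk_engagement: int = 1000) -> str:
    """Label a tweet 'junk' (short + viral) or 'control' (longer / low-engagement)."""
    if len(text) <= max_junk_len and engagement >= min_junk_engagement:
        return "junk"
    return "control"

tweets = [
    ("ratio + L + you fell off", 52_000),
    ("A long thread on how leveraged loan covenants actually work, "
     "with examples from recent defaults and what they imply.", 40),
]

labels = [classify_tweet(text, engagement) for text, engagement in tweets]
print(labels)  # ['junk', 'control']
```

The point of a split like this is that both buckets come from the same platform and era, so data quality is the only variable between the two retraining runs.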
The Cognitive Crash:
The results are brutal.
On reasoning tasks (ARC Challenge):
→ Accuracy dropped from 74.9 to 57.2
On long-context understanding (RULER):
→ Scores plunged from 84.4 to 52.3
Thatβs a measurable intelligence collapse.
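As a quick sanity check, the relative drops implied by the scores quoted above can be computed directly:

```python
# Relative drops implied by the benchmark scores quoted above
# (before, after) pairs as reported: ARC 74.9 -> 57.2, RULER 84.4 -> 52.3.
benchmarks = {
    "ARC Challenge (reasoning)": (74.9, 57.2),
    "RULER (long-context)": (84.4, 52.3),
}

drops = {
    name: round(100 * (before - after) / before, 1)
    for name, (before, after) in benchmarks.items()
}

print(drops)  # {'ARC Challenge (reasoning)': 23.6, 'RULER (long-context)': 38.0}
```

The ARC drop (~23.6%) lines up with the 23% reasoning figure quoted earlier; the RULER relative drop works out closer to 38%, so the earlier 30% long-context figure presumably reflects a different aggregation across tasks.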
The βThought-Skippingβ Effect:
When reasoning, junk-trained models skip steps entirely.
Instead of thinking through problems, they jump straight to answers, often wrong ones.
This is their version of βattention decay.β
Junk In, Dark Out:
The most chilling part...
Models fed junk content didn’t just get dumber, they got meaner.
β’ Spikes in narcissism and psychopathy
β’ Drops in agreeableness and conscientiousness
Data doesnβt just shape capability. It shapes personality.