Alright which one of you niggas is running the DHS account?
Another night, another record high.
Rate cuts, stagflation, and tariff stimulus checks are a wild combination.
Remember them
Bebe King (6)
Elsie Stancombe (8)
Alice Dasilva Aguiar (9)
Ibram X. Kendi, at a film screening for his Netflix film: "Whiteness prevents White peoples from connecting to humanity"
BNB BROKE $1,100 FOR THE FIRST TIME IN HISTORY & HIT A NEW ATH OF $1,112
When I was asked decades ago what tanks would look like in 2025, I had a completely different picture in mind.
US vs China numbers here are unbelievable.
The US controls the absolute majority of known AI training compute on this planet and continues to build the biggest, most power hungry clusters.
China is spending heavily to close the gap. Recent reporting pegs 2025 AI capital expenditure in China at up to $98B, up 48% from 2024, with about $56B from government programs and about $24B from major internet firms. Capacity will grow, but translating capex into competitive training compute takes time, especially under export controls.
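A quick back-of-envelope check on those figures (a sketch only; the $98B total, 48% growth rate, and $56B/$24B split are the numbers from the reporting above, and the implied 2024 base and unattributed remainder are simply derived from them):

```python
# Back-of-envelope check on the reported 2025 China AI capex figures.
capex_2025 = 98.0      # reported upper estimate, $B
growth = 0.48          # reported year-over-year growth

# Implied 2024 base from the growth rate: 98 / 1.48 ≈ 66.
capex_2024 = capex_2025 / (1 + growth)

# Reported components of the 2025 total.
government = 56.0      # $B, government programs
internet_firms = 24.0  # $B, major internet firms
other = capex_2025 - government - internet_firms

print(f"implied 2024 capex: ~${capex_2024:.0f}B")
print(f"unattributed 2025 spend: ~${other:.0f}B")
```

The reported components leave roughly $18B of the 2025 total unattributed, consistent with the "up to $98B" framing being an upper-bound estimate rather than an itemized budget.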
With US controls constraining access to top Nvidia and AMD parts, Chinese firms are leaning more on domestic accelerators. Huawei plans mass shipments of the Ascend 910C in 2025, a two-die package built from 910B chips. US officials argue domestic output is limited this year, and Chinese buyers still weigh tradeoffs in performance, memory, and software.
Chips and policy are moving targets
The policy environment shifted again this week.
A new US arrangement now lets Nvidia and AMD resume limited AI chip sales to China in exchange for a 15% revenue share paid to the US government, covering products like Nvidia H20 and AMD MI308.
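The mechanics of that arrangement are simple to sketch (illustrative only; the 15% rate is the reported figure, and the sales amount below is a hypothetical example, not a reported number):

```python
def us_revenue_share(china_sales_usd: float, rate: float = 0.15) -> float:
    """Portion of China AI-chip revenue owed to the US government
    under the reported arrangement (rate = the reported 15%)."""
    return china_sales_usd * rate

# Hypothetical example: $1B of H20 / MI308 sales to China.
print(us_revenue_share(1_000_000_000))  # → 150000000.0 ($150M)
```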
This could boost near-term Chinese access to mid-tier training parts, yet it does not restore availability of the top US chips.
Beijing is cautious about reliance on these parts. Chinese regulators have urged companies to pause H20 purchases pending review, and local media describe official pressure to prefer domestic chips.
Why performance still favors the US (Nvidia) stack
Independent analysts compare Nvidia's export-grade H20 with Huawei's Ascend 910B and find the Nvidia part still holds advantages in memory capacity and bandwidth, which matter for training large models.
But software maturity gaps around Huawei's stack remain, reducing effective throughput even when nominal specs look close to older Nvidia parts like the A100.
These issues make it harder for Chinese labs to match US training runs at the same wall-clock cost.
People are now using AI to make children's television characters break the law