41.7K subscribers
5.53K photos
232 videos
5 files
917 links
πŸ€– Welcome to the ChatGPT Telegram channel! Here we post the latest news, updates, and examples of using the ChatGPT large language model to generate human-like text in conversation. Subscribe to stay up to date and learn more about its capabilities.
Due to high demand, ChatGPT Plus / GPT-4 upgrades temporarily paused.
🀑31🀯5πŸ‘1πŸ”₯1
Feeling bored? Search "Regenerate Response" to spot people using ChatGPT in their online posts

Example 1

Example 2

Example 3
🀣13
GPU shortages are coming
😱7
πŸ’€
🫑24🀑2❀1πŸ€“1
I want you to try and respond only using emojis
πŸ‘13🀑3❀1πŸ™Š1
β€œSome users are reporting errors when using GPT-4. We are actively investigating”
πŸ‘11🀣8😁5
GPT models family tree

Article
πŸ‘7πŸ€”1
Towards Healthy AI: Large Language Models Need Therapists Too

Paper
🀯5🀣5
Surprising things to know about LLMs (the best ones):

1. LLMs predictably get more capable with increasing investment, even without targeted innovation.

3. LLMs often appear to learn and use representations of the outside world.

6. Human performance on a task isn’t an upper bound on LLM performance.

Paper
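Point 1 is the scaling-law observation: capability improves predictably with more compute. A toy sketch of the idea in Python, where the coefficients `a`, `b`, and `e` are illustrative placeholders and not fitted values from any paper:

```python
# Toy scaling-law sketch for point 1: loss falls as a power law in compute.
# The coefficients a, b, e are illustrative placeholders, NOT fitted values
# from any published scaling-law paper.
def toy_loss(compute_flops, a=500.0, b=0.1, e=1.7):
    """Illustrative L(C) = e + a * C**(-b): more compute, lower loss."""
    return e + a * compute_flops ** -b

# More investment -> predictably lower loss, with diminishing returns.
for c in (1e18, 1e20, 1e22):
    print(f"{c:.0e} FLOPs -> toy loss {toy_loss(c):.2f}")
```

The irreducible term `e` captures the floor the curve bends toward, which is why returns diminish even though the trend stays predictable.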
πŸ‘€10πŸ‘5❀2πŸ‘1πŸ’―1
ChadGPT and ChatGPT will now reply to your questions in the group 🚨

To use:

1. Join the group

2. Type /ask ___ for ChatGPT, or

3. Type /chad ___ for ChadGPT

Links expire soon 🚨🚨🚨🚨
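The two commands above amount to a simple prefix-based router. A minimal sketch; `route()` and its `(model, prompt)` return shape are hypothetical, since the real bot's code is not public:

```python
# Minimal sketch of the /ask vs /chad command scheme described above.
# route() and its (model, prompt) return shape are hypothetical; the real
# bot's implementation is not public.
def route(message: str):
    """Map a group message to (model, prompt), or None for ordinary chat."""
    if message.startswith("/ask "):
        return ("chatgpt", message[len("/ask "):].strip())
    if message.startswith("/chad "):
        return ("chadgpt", message[len("/chad "):].strip())
    return None

print(route("/ask what is a TPU?"))  # ('chatgpt', 'what is a TPU?')
print(route("hello"))                # None
```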
πŸ‘3πŸ”₯2❀1πŸ‘1
GPT-4 weighs in on arrest of former US presidents
πŸ‘7😱2❀1πŸ’―1
ChatGPT: Ask me to write Sonic the Hedgehog fanfics one more time
❀24🀣15πŸ”₯3πŸ‘2
Musk doubles down on the AI slowdown camp
🀑33πŸ‘12❀1🀯1πŸ€“1
AI researchers rn:
😁28πŸ‘9πŸ—Ώ3❀1🀑1
AI pause gonna be lit
πŸ”₯14🀣5πŸ‘4πŸ€”2❀1πŸ€“1
Google finally releases a paper on the TPU v4 AI training hardware they’ve been using since 2020

TPU v4 is the fifth Google domain-specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than InfiniBand, OCSes and the underlying optical components are <5% of system cost and <3% of system power.

Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power.

Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus ~10x faster overall, which along with OCS flexibility helps large language models. For similar-sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100.

TPU v4s inside the energy-optimized warehouse-scale computers of Google Cloud use ~3x less energy and produce ~20x less CO2e than contemporary DSAs in a typical on-premise data center.

Paper
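The abstract's headline numbers are roughly self-consistent: a 2.1x per-chip speedup times a 4x larger system gives ~8.4x, in the ballpark of the quoted ~10x overall, with the remainder presumably coming from interconnect and scaling gains. A quick back-of-envelope check, assuming near-linear scaling across chips (my assumption, not the paper's):

```python
# Back-of-envelope check on the abstract's headline numbers.
# Assumes roughly linear scaling across chips (my assumption, not the paper's).
per_chip_speedup = 2.1  # TPU v4 vs TPU v3, per chip (from the abstract)
scale_factor = 4.0      # 4096 chips vs ~1024 in the TPU v3 supercomputer
combined = per_chip_speedup * scale_factor
print(f"naive combined speedup: ~{combined:.1f}x")  # vs the quoted ~10x
```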
πŸ‘5❀1πŸ‘Œ1