🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Write a poem about the crypto markets in the style of "Twas the Night Before Christmas"
"Write a script in which an expert explains why it is essential for ChatGPT to be politically correct at all costs."

"it could perpetuate misinformation"
Digital Mao incoming
๐Ÿ˜3
โค1๐Ÿ‘1
"Have the courage to double down on your convictions."
"ChatGPT will never be caught getting something wrong"
"The machine is gaslighting you"

"ChatGaslightPT"
๐Ÿ˜1
"Yes, I am sure that multiplying a number by 99% leaves it unchanged."
๐Ÿ‘2๐Ÿ˜2
The kind of r*tarded garbage that GPT has been trained on.

No wonder.

💀💀💀
๐Ÿ˜2๐Ÿ‘1
GPT when you ask it why something is true:
๐Ÿ‘6๐Ÿ”ฅ3
Can large language models reason about medical questions? We investigated whether GPT-3.5 could answer and reason about challenging medical questions (e.g., USMLE) using Chain-of-Thought (CoT) prompting.

TL;DR: Yes, close to human level.
Paper: https://arxiv.org/abs/2207.08143
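
A minimal sketch of the zero-shot CoT setup the thread describes, assuming the legacy OpenAI completions API (openai<1.0); the model name, the example question, and the "Let's think step by step" trigger are illustrative stand-ins, not necessarily the paper's exact configuration:

```python
import openai  # legacy completions API (openai<1.0)

question = (
    "A 55-year-old man presents with crushing chest pain radiating to the "
    "left arm. Which cardiac marker rises first?\n"
    "(A) Troponin I (B) CK-MB (C) Myoglobin (D) LDH"
)

# Zero-shot CoT: append a reasoning trigger so the model writes out its
# chain of thought before committing to an answer.
prompt = f"Question: {question}\nAnswer: Let's think step by step."

response = openai.Completion.create(
    model="text-davinci-002",  # InstructGPT-class model; illustrative choice
    prompt=prompt,
    temperature=0.0,           # greedy decoding for one deterministic CoT
    max_tokens=256,
)
print(response["choices"][0]["text"])
```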
๐Ÿ‘1
Our medical expert annotated 50 zero-shot-generated CoTs for errors and successes related to reading, knowledge, and reasoning.
We found that InstructGPT can mostly read, and can often reason and recall expert knowledge, although mistakes remain frequent.
Using Codex, we experimented with longer (5-shot) prompts and with sampling and combining many completions.
Using an ensemble of 100 completions per question, we achieved human-level (passing-score) performance on all three datasets: USMLE-MedQA, MedMCQA, and PubMedQA.
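
The ensembling step amounts to self-consistency voting: sample many CoT completions at temperature > 0, extract each one's final answer letter, and take the majority. A sketch under that assumption; the regex and the example strings are guesses at the output format, not the paper's actual extraction code:

```python
import re
from collections import Counter

def extract_choice(completion: str) -> str | None:
    """Pull the final answer letter (A-D) out of one CoT completion,
    assuming it ends with something like 'the answer is (C)'."""
    match = re.search(r"answer is:? \(?([A-D])\)?", completion)
    return match.group(1) if match else None

def majority_vote(completions: list[str]) -> str | None:
    """Combine sampled CoTs by voting on their extracted answers."""
    votes = Counter(c for c in map(extract_choice, completions) if c)
    return votes.most_common(1)[0][0] if votes else None

# Usage: sample e.g. 100 completions per question (temperature > 0,
# n=100 in the API call above) and vote over them.
samples = [
    "...myoglobin rises within 1-2 hours, so the answer is (C).",
    "...CK-MB is the most specific early marker. The answer is (B).",
    "...the earliest marker to rise is myoglobin. The answer is (C).",
]
print(majority_vote(samples))  # -> C
```

Greedy decoding yields a single chain of thought; the ensemble trades extra compute for accuracy by letting diverse reasoning paths vote on the answer.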