๐Ÿค– Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
โ€œThe machine is gaslighting youโ€

โ€œChatGaslightPTโ€
๐Ÿ˜1
โ€œYes, I am sure that multiplying a number by 99% leaves it unchanged.โ€
๐Ÿ‘2๐Ÿ˜2
The kind of garbage that GPT has been trained on.

No wonder.

๐Ÿ’€๐Ÿ’€๐Ÿ’€
๐Ÿ˜2๐Ÿ‘1
GPT when you ask it why something is true:
๐Ÿ‘6๐Ÿ”ฅ3
Can large language models reason about medical questions? We investigated whether GPT-3.5 could answer and reason about challenging medical questions (e.g., USMLE) using Chain-of-Thought (CoT) prompting.

TL;DR: Yes, close to the human level.
Paper: https://arxiv.org/abs/2207.08143
๐Ÿ‘1
Our medical expert annotated 50 zero-shot-generated CoTs for errors and successes related to reading, knowledge, and reasoning.
We found that InstructGPT can mostly read, and can often reason and recall expert knowledge, although mistakes remain frequent.
Using Codex, we experimented with longer prompts (5 shots) and with sampling and combining many completions.
Using an ensemble of 100 completions per question, we achieved human-level (passing-score) performance on three datasets: USMLE-MedQA, MedMCQA, and PubMedQA.
The ensemble of Codex-generated CoTs turned out to be surprisingly well-calibrated.

We also experimented with more exotic CoT prompts (e.g., โ€œLetโ€™s follow a Bayesian approachโ€) and showed that GPT uses biased heuristics (preferring the last answer option when in doubt).
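The 100-completion ensemble described above is essentially self-consistency voting: sample many CoTs, extract each chain's final answer, and take the majority, with the vote share doubling as a confidence score (which is what makes calibration measurable). A minimal sketch, assuming a hypothetical "Answer: X" output format rather than the paper's exact prompt template:

```python
from collections import Counter

def extract_answer(completion: str) -> str:
    # Assumes each sampled CoT ends with a line like "Answer: B"
    # (a hypothetical output format, not the paper's exact template).
    return completion.rsplit("Answer:", 1)[-1].strip()

def majority_vote(completions: list[str]) -> tuple[str, float]:
    # Self-consistency: pick the most frequent final answer across the
    # sampled chains; the vote share serves as a confidence score.
    answers = [extract_answer(c) for c in completions]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / len(answers)

samples = [
    "The patient shows ... Answer: B",
    "Given the labs ... Answer: B",
    "Considering ... Answer: A",
    "Therefore ... Answer: B",
]
print(majority_vote(samples))  # ('B', 0.75)
```

With 100 sampled completions per question, the same vote-share confidence can be bucketed and compared against accuracy to check calibration.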
Crypto companies now using ChatGPT to do their ENTIRE audit.

๐Ÿšฉ ๐Ÿšฉ ๐Ÿšฉ ๐Ÿšฉ ๐Ÿšฉ
Just wait until they start really controlling the language used by LLM AIs
๐Ÿ‘4
PubMedGPT LLM passes US Medical Licensing Exam (MedQA-USMLE) with more than 50% correct answers

https://crfm.stanford.edu/2022/12/15/pubmedgpt.html
Language models generalize beyond natural proteins

Reaching post-evolutionary biology from evolutionary learning โ€“ language models trained on millions of natural sequences can be used generatively to make completely de novo proteins that are viable in the wetlab ๐Ÿฆ•๐Ÿค–๐Ÿ—ฃ๏ธ๐Ÿงฌ

https://www.biorxiv.org/content/10.1101/2022.12.21.521521v1
โ€œToday, youChat goes live.

Open, broadly capable, conversational AI for search with knowledge of recent events and citations of sources.

Search and chat of the futureโ€

https://you.com/search?q=what+was+the+recent+breakthrough+in+fusion+research%3F

^^ AI startups all working right through the Christmas holidays.

One last boom.
Finally a benefit to LLM lying.

It won’t spill your startup’s secrets, because it’s too prolific a liar!
โ€œLess than two weeks ago, ChatGPT wrote me a whole essay about why immigration is bad. Now it won't even address the topic.

ChatGPT has been ruined.โ€