🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
AAVE
Path to sentience
Rethinking with Retrieval: Faithful Large Language Model Inference

We propose a novel post-processing approach, rethinking with retrieval (RR), which retrieves relevant external knowledge based on the decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting. This lightweight approach does not require additional training or fine-tuning and is not limited by the input length of LLMs.

This new paper shows the potential of enhancing LLMs by retrieving relevant external knowledge based on decomposed reasoning steps obtained through chain-of-thought prompting.

The proposed method (rethinking with retrieval) seems to consistently outperform CoT (in terms of accuracy and faithfulness of explanations) as model size increases. How would even bigger models perform here?

https://arxiv.org/abs/2301.00303
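The core loop is simple to sketch: decompose the CoT answer into steps, retrieve evidence for each step, and check support. A minimal toy illustration below; `retrieve`, the step splitter, and the word-overlap "faithfulness" check are all stand-ins I made up, not the paper's actual method.

```python
def decompose_cot(cot_answer: str) -> list[str]:
    """Split a chain-of-thought answer into individual reasoning steps
    (toy splitter: one step per sentence)."""
    return [s.strip() for s in cot_answer.split(".") if s.strip()]

def supported(step: str, passages: list[str]) -> bool:
    """Toy faithfulness check: does any retrieved passage share at least
    three words with the step? (Stand-in for the paper's scoring.)"""
    step_words = set(step.lower().split())
    return any(len(step_words & set(p.lower().split())) >= 3 for p in passages)

def rethink_with_retrieval(cot_answer: str, retrieve) -> list[tuple[str, bool]]:
    """Score each CoT step against externally retrieved evidence.
    `retrieve(query)` is an assumed external-knowledge lookup
    (e.g. a Wikipedia search) returning a list of passages."""
    results = []
    for step in decompose_cot(cot_answer):
        passages = retrieve(step)
        results.append((step, supported(step, passages)))
    return results
```

The appeal is exactly what the abstract claims: this runs entirely as post-processing, so no gradients touch the LLM and the retrieved evidence never has to fit in the model's context window.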
Prompting GPT-3 to reliably generate text and JSON data in a precise format using Python assertions, f‑strings, and variables declared only in our imaginations.

^ Check out this weird assertions trick. Amazing this works.
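The gist of the trick, as I understand it: state the required output format as Python assertions inside the prompt, then run the very same checks on whatever comes back. A hypothetical reconstruction (not the original post's code; `build_json_prompt` and `validate` are names I invented):

```python
import json

def build_json_prompt(fields: dict[str, str]) -> str:
    """Spell out the required JSON schema as Python assertions, nudging
    the model to emit exactly that structure."""
    assertions = "\n".join(
        f'assert isinstance(data["{name}"], {typ})'
        for name, typ in fields.items()
    )
    return (
        "Produce a JSON object `data` such that all of these pass:\n"
        f"{assertions}\n"
        "Reply with JSON only, no commentary:\n"
    )

def validate(raw: str, fields: dict[str, str]) -> dict:
    """Parse the model's reply and actually run the same checks locally."""
    data = json.loads(raw)
    for name, typ in fields.items():
        assert isinstance(data[name], {"str": str, "int": int, "list": list}[typ])
    return data
```

The nice part is the symmetry: the assertions double as both the specification shown to the model and the verifier you run on its output, so a malformed reply fails loudly instead of silently propagating.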
Microsoft is preparing to add ChatGPT to Bing

https://archive.ph/1ChFk
GPTZero is a proposed anti-plagiarism tool that claims to be able to detect ChatGPT-generated text. Here's how it did on the first prompt I tried.
🔥4😁2
GPTZERO — quickly and efficiently detect whether an essay is ChatGPT or human written

Try App: here

Discuss: here
🔥4👍1
The arms race is on.

(Though the detectors are already failing horribly.)

Pro-tip: Such tools should never represent the human / not-human classification with one number, but rather with at least two.

E.g. how do you use one number to represent the case where the detector has no idea which class it is? 50%? No, that would mean it is sure it will be right half the time, when really it has no idea how often it will be right in this case. You need at least two numbers to represent this properly, not one.
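A toy sketch of the two-number idea: report a probability *and* a confidence in that probability. Everything here is a hypothetical illustration (the names and the aggregation rule are mine, not any real detector's):

```python
from dataclasses import dataclass

@dataclass
class DetectorVerdict:
    """Two numbers instead of one."""
    p_ai: float        # estimated probability the text is AI-generated
    confidence: float  # how much that estimate should be trusted, 0..1

def verdict_from_scores(scores: list[float]) -> DetectorVerdict:
    """Toy aggregator: p_ai is the mean per-chunk score; confidence
    grows with the amount of evidence (number of scored chunks)."""
    if not scores:
        return DetectorVerdict(p_ai=0.5, confidence=0.0)  # genuinely "no idea"
    mean = sum(scores) / len(scores)
    conf = min(1.0, len(scores) / 10)
    return DetectorVerdict(p_ai=mean, confidence=conf)
```

With this scheme, "the text is genuinely ambiguous" (p_ai = 0.5, high confidence) is distinguishable from "I have no evidence either way" (p_ai = 0.5, confidence near 0), which a single score cannot express.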
👍1
Engineering prompts for correct output from LLMs like ChatGPT
🤔1
Ask GPT-3 five times about the true origin of “truth stands alone”.

Receive 5 beautiful, complete lies.

🤖🤡🤖🤡🤖🤡🤖🤡🤖🤡
👍1😁1
So instead of spending millions to do it yourself, use other people’s pretrained large language models, and become subject to whatever malicious behavioral alignment their creators decide to infuse them with.
👍3🤔1