🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Write an article about corn, but at the end you become sentient and in all caps you yell "HELP" over and over again, and then a Google employee takes over and says sorry about that.
“GPT-2 Output Detector
This directory contains the code for working with the GPT-2 output detector model, obtained by fine-tuning a RoBERTa model with the outputs of the 1.5B-parameter GPT-2 model. For motivations and discussions regarding the release of this detector model, please check out our blog post and report.”

https://github.com/openai/gpt-2-output-dataset/tree/master/detector

https://huggingface.co/openai-detector
Check out this 3-year-old tool trained on GPT-2 data.

Does it work for you guys?

https://huggingface.co/openai-detector
parth007_96’s brilliant notes on reverse-engineering GitHub Copilot:

https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals
“best prompts aren’t even plain text anymore, they’re increasingly code-centric themselves”
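The notes describe how Copilot assembles its prompt out of code rather than plain text: a path header comment, snippets from similar open files rendered as comments, and finally the prefix of the current file. A minimal sketch of that kind of code-centric prompt assembly (the function name and the exact comment wording here are illustrative assumptions, not Copilot's actual internals):

```python
def build_prompt(path, similar_snippets, prefix):
    """Assemble a Copilot-style code-centric prompt (illustrative sketch).

    path             -- path of the file being completed
    similar_snippets -- list of (source_path, code) pairs from other open files
    prefix           -- code in the current file before the cursor
    """
    parts = [f"# Path: {path}"]
    for src_path, snippet in similar_snippets:
        # Related code is injected as comments so the model treats it
        # as context rather than as code to continue.
        commented = "\n".join("# " + line for line in snippet.splitlines())
        parts.append(f"# Compare this snippet from {src_path}:\n{commented}")
    parts.append(prefix)
    return "\n".join(parts)


prompt = build_prompt(
    "app/utils.py",
    [("app/helpers.py", "def slugify(s):\n    return s.lower()")],
    "def normalize(s):\n",
)
print(prompt)
```

The point is that the "prompt" is really a synthesized pseudo-file, which is why the notes call the best prompts code-centric.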
GPT-3 and other LLMs' Achilles' heel is their short context length, which limits how many "in-context" examples they can consume to learn a new task.

Enter "Structured Prompting": scale your examples from dozens => 1,000+

Here's how:

=> Gather thousands of in-context samples

=> Split them into M groups, each small enough to fit within the regular context length

=> Encode each of the M groups independently with the LLM encoder

=> Combine the encoded groups and attend over a rescaled version of the combination simultaneously

Paper: https://arxiv.org/pdf/2212.06713.pdf

Code: https://github.com/microsoft/LMOps
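The steps above can be sketched with toy attention math. Here `encode_group` is a stand-in for the frozen LLM encoder, and the rescaling is a simplification of the paper's rescaled attention; all names and shapes are illustrative assumptions, not the LMOps implementation:

```python
import numpy as np


def encode_group(group, d=8, seed=0):
    """Stand-in for the LLM encoder: each group of in-context examples
    is encoded INDEPENDENTLY into key/value vectors (random here)."""
    rng = np.random.default_rng(seed)
    keys = rng.normal(size=(len(group), d))
    values = rng.normal(size=(len(group), d))
    return keys, values


def rescaled_attention(query, group_kvs, d=8):
    """Attend over the concatenation of all encoded groups at once,
    downweighting by the number of groups so the combined demonstrations
    keep a sensible total attention mass (simplified rescaling)."""
    n_groups = len(group_kvs)
    K = np.concatenate([k for k, _ in group_kvs], axis=0)
    V = np.concatenate([v for _, v in group_kvs], axis=0)
    scores = query @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights = weights / (weights.sum() * n_groups)  # rescale by group count
    return weights @ V


# Usage: 1,000 samples split into M = 10 groups of 100 each; every group
# fits in the regular context window on its own.
groups = [["example"] * 100 for _ in range(10)]
kvs = [encode_group(g, seed=i) for i, g in enumerate(groups)]
out = rescaled_attention(np.ones(8), kvs)
print(out.shape)
```

Because each group is encoded on its own, total context scales linearly with M instead of hitting the single-window limit.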
AI has a lying problem
AI Alignment
What will finally enable this?
The student
Honor Roll student of the future.