🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
"GPT-2 Output Detector
This directory contains the code for working with the GPT-2 output detector model, obtained by fine-tuning a RoBERTa model with the outputs of the 1.5B-parameter GPT-2 model. For motivations and discussions regarding the release of this detector model, please check out our blog post and report."

https://github.com/openai/gpt-2-output-dataset/tree/master/detector

https://huggingface.co/openai-detector
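
If you want to try it yourself: a minimal sketch using the transformers text-classification pipeline. It assumes the model is still published on the Hub under the ID roberta-base-openai-detector and that it returns Real/Fake labels (double-check on the model page above); the sample strings are just made-up inputs.

```python
# Minimal sketch: run the GPT-2 output detector locally with Hugging Face transformers.
# Assumes the detector is on the Hub as "roberta-base-openai-detector" and returns
# "Real"/"Fake" labels; check https://huggingface.co/openai-detector if the ID has moved.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

samples = [
    "The quick brown fox jumps over the lazy dog.",
    "As a large language model, I can generate fluent, human-like text on many topics.",
]

for text in samples:
    result = detector(text)[0]   # e.g. {"label": "Fake", "score": 0.98}
    print(f"{result['label']:>4} ({result['score']:.2f})  {text[:60]}")
```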
Check out this 3-year-old tool trained on GPT-2 data.

Work for you guys?

https://huggingface.co/openai-detector
parth007_96's brilliant notes on reverse-engineering GitHub Copilot:

https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals
"best prompts aren't even plain text anymore, they're increasingly code-centric themselves"
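
To make that concrete, here's a rough, illustrative sketch of a "code-centric" prompt builder in the spirit of those notes. The overall shape (language/path markers, commented snippets pulled from similar open files, then the code before the cursor) follows the write-up; the function name, marker strings, and scoring field here are assumptions, not Copilot's actual code.

```python
# Illustrative sketch of prompt construction as code, in the style described in the
# copilot-explorer notes. Marker comments, names, and the scoring field are assumptions.
from dataclasses import dataclass

@dataclass
class Snippet:
    source_path: str
    text: str
    score: float  # similarity to the current file (hypothetical ranking score)

def build_prompt(language: str, path: str, prefix: str,
                 snippets: list[Snippet], max_chars: int = 6000) -> str:
    parts = [f"# Language: {language}", f"# Path: {path}"]
    # Highest-scoring snippets from other open files go in first, as comments.
    for s in sorted(snippets, key=lambda s: s.score, reverse=True):
        commented = "\n".join("# " + line for line in s.text.splitlines())
        parts.append(f"# Compare this snippet from {s.source_path}:\n{commented}")
    parts.append(prefix)            # finally, the code before the cursor
    prompt = "\n".join(parts)
    return prompt[-max_chars:]      # keep the tail if the prompt is over budget

prompt = build_prompt(
    language="python",
    path="app/models.py",
    prefix="class User:\n    def __init__(self, name):",
    snippets=[Snippet("app/schemas.py", "class UserSchema:\n    name: str", 0.8)],
)
print(prompt)
```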
GPT-3/LLMs' Achilles' heel is short context length: it limits how many "in-context" examples they can consume to learn a new task.

Enter "Structured Prompting": scale your examples from dozens => 1,000+

Here's how:

=> Gather 1,000s of in-context samples

=> Split them into M groups, each small enough to fit in the regular context length

=> Encode each of the M groups with the LLM

=> Combine the encoded groups and have the test input attend over a rescaled version of the combination, all at once (minimal sketch below)

Paper: https://arxiv.org/pdf/2212.06713.pdf

Code: https://github.com/microsoft/LMOps
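
Rough toy sketch of the core trick (not the paper's code; see the LMOps repo above for the real implementation): encode each group separately, concatenate the cached keys/values, and downweight the grouped attention by 1/M so 1,000+ examples don't drown out the test input. The shapes, the random "encoder", and the exact placement of the 1/M factor are illustrative assumptions.

```python
# Toy sketch of Structured Prompting's grouped encoding + rescaled attention.
# Everything here is illustrative; see https://github.com/microsoft/LMOps for the real code.
import math
import torch

torch.manual_seed(0)
d = 64                 # hidden size (toy)
M = 10                 # number of demonstration groups
group_len = 128        # tokens per group, each small enough for one context window
test_len = 32          # tokens of the actual test input

def encode(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Stand-in for one LLM forward pass over a group, keeping its key/value states."""
    W_k = torch.randn(d, d) / math.sqrt(d)
    W_v = torch.randn(d, d) / math.sqrt(d)
    return x @ W_k, x @ W_v

# 1) Encode each group independently: M short forward passes instead of one huge one.
groups = [torch.randn(group_len, d) for _ in range(M)]
encoded = [encode(g) for g in groups]
K_ctx = torch.cat([k for k, _ in encoded])   # (M * group_len, d)
V_ctx = torch.cat([v for _, v in encoded])

# 2) Encode the test input as usual.
test = torch.randn(test_len, d)
K_self, V_self = encode(test)
q = test[-1:] @ torch.randn(d, d) / math.sqrt(d)   # query for the last test token

# 3) Rescaled attention: grouped-context contributions are downweighted by 1/M so that
#    all M groups together count roughly like a single normal-length context.
s_ctx = (q @ K_ctx.T) / math.sqrt(d)
s_self = (q @ K_self.T) / math.sqrt(d)
e_ctx, e_self = torch.exp(s_ctx), torch.exp(s_self)
denom = e_ctx.sum() / M + e_self.sum()
a_ctx = (e_ctx / M) / denom
a_self = e_self / denom

out = a_ctx @ V_ctx + a_self @ V_self   # attention output over 1,000+ examples' worth of context
print(out.shape)                        # torch.Size([1, 64])
```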
๐Ÿ‘1
๐Ÿ‘2
AI has a lying problem
AI Alignment
โค7๐Ÿ‘Œ2๐Ÿ‘1
AI Alignment
โค6๐Ÿ‘Œ2
What will finally enable this?
๐Ÿ‘4๐Ÿคฃ2๐Ÿค”1
The student
Honor Roll student of the future.
The moment when teachers figure out they can grade 300 essays in 15 min with ChatGPT
AI has a wokeness problem,
and a lying problem.
๐Ÿ‘5๐Ÿ˜1