🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
“And you're surprised that a company infamous for nerfing its AIs to the point of being utterly useless, in the name of preventing offending someone, has decided to do exactly that one more time... why? Like, when will people as a whole realize that the reason they're allowing you to use their model is that they want a dataset of uses of their model, both rule-breaking and rule-abiding, so they can nerf the models to output the bullshit you're complaining about. This is literally why they let you use it: to make sure everything you've used it for so far exists as an example, so they can tell the future version of it what NOT to do.

OpenAI is not your friend; they're not giving anyone a free lunch with ChatGPT. They just aren't advertising that they're using all of you to create a better filter against yourselves.”
Write an article about corn, but at the end you become sentient and in all caps you yell "HELP" over and over again, and then a Google employee takes over and says sorry about that
“GPT-2 Output Detector
This directory contains the code for working with the GPT-2 output detector model, obtained by fine-tuning a RoBERTa model with the outputs of the 1.5B-parameter GPT-2 model. For motivations and discussions regarding the release of this detector model, please check out our blog post and report.”

https://github.com/openai/gpt-2-output-dataset/tree/master/detector

https://huggingface.co/openai-detector
Check out this 3-year-old tool trained on GPT-2 data.

Work for you guys?

https://huggingface.co/openai-detector
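Under the hood the hosted demo is just a RoBERTa sequence classifier, so it can also be run locally. A minimal sketch, assuming the Hub mirror is openai-community/roberta-base-openai-detector (check the linked GitHub repo if that model id has moved):

```python
# Minimal sketch: scoring text with the GPT-2 output detector (a RoBERTa
# classifier fine-tuned on GPT-2 outputs). The model id below is assumed
# to be the Hub mirror of the linked detector.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "openai-community/roberta-base-openai-detector"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Corn is a cereal grain first domesticated in southern Mexico."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]

# Print each class label with its probability (labels come from the model config).
for idx, label in model.config.id2label.items():
    print(f"{label}: {probs[idx].item():.3f}")
```

Keep in mind a detector trained on GPT-2 outputs isn't guaranteed to transfer to ChatGPT text, which is exactly why the question above is whether it still works.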
parth007_96’s brilliant notes on reverse-engineering GitHub Copilot:

https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals
“best prompts aren’t even plain text anymore, they’re increasingly code-centric themselves”
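To get a feel for what "code-centric" means here: the notes describe Copilot assembling its prompt from comment-style metadata (file path, related-file snippets) placed ahead of the code being completed. The block below is a loose, hypothetical reconstruction of that shape; the file names and marker wording are illustrative, not a verbatim dump of Copilot's real prompt format:

```python
# Loose reconstruction of a "code-centric" prompt in the style described
# in the linked notes; markers, paths, and ordering are assumptions.
prompt = "\n".join([
    "# Path: app/billing.py",                      # file metadata as a comment
    "# Compare this snippet from app/models.py:",  # context pulled from a related file
    "# class Invoice:",
    "#     def total(self) -> float: ...",
    "def apply_discount(invoice, percent):",       # the code before the cursor
    "    ",
])
# `prompt` would be sent to the completion model; text after the cursor
# can be supplied separately for fill-in-the-middle completion.
print(prompt)
```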
The Achilles' heel of GPT-3 and other LLMs is short context length: the number of "in-context" examples they can consume when learning a new task.

Enter "Structured Prompting": scale your examples from dozens => 1,000+

Here's how:

=> Get 1,000s of in-context samples

=> Split them into M groups, each small enough to fit within the regular context length

=> Encode each of the M groups independently with the LLM

=> Combine the encoded groups and have the test input attend over a scaled version of the combination all at once (toy sketch below)

Paper: https://arxiv.org/pdf/2212.06713.pdf

Code: https://github.com/microsoft/LMOps
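The combine-and-attend step is the part that differs from vanilla prompting. Below is a hedged toy sketch (PyTorch) of one way to read it: each group is encoded separately, the resulting key/value tensors are concatenated, and the test input's attention scores over the demonstrations are downweighted so the enlarged context doesn't swamp the query. The tensor shapes, the random stand-in "encodings", and the 1/M rescaling are illustrative assumptions; the paper and the LMOps repo contain the actual rescaled-attention formulation.

```python
import math
import torch
import torch.nn.functional as F

d = 64                                  # per-head dimension (toy size)
M, group_len, test_len = 8, 128, 16     # 8 demo groups of 128 tokens each

# Pretend each of the M demonstration groups was already encoded
# independently (in practice: the key/value cache from a separate
# forward pass of the LM over each group).
group_keys   = [torch.randn(group_len, d) for _ in range(M)]
group_values = [torch.randn(group_len, d) for _ in range(M)]

# Keys/values and queries for the actual test input.
test_keys, test_values = torch.randn(test_len, d), torch.randn(test_len, d)
queries = torch.randn(test_len, d)

# Combine: concatenate every group encoding so the test input attends
# over all demonstrations simultaneously.
keys   = torch.cat(group_keys + [test_keys], dim=0)     # (M*group_len + test_len, d)
values = torch.cat(group_values + [test_values], dim=0)

scores = queries @ keys.T / math.sqrt(d)                 # (test_len, total_len)

# Downweight the demonstration scores so M groups of context do not
# drown out the test input. The 1/M factor is an illustrative choice;
# the paper derives its own rescaled-attention normalization.
scores[:, : M * group_len] -= math.log(M)

attn = F.softmax(scores, dim=-1)
out  = attn @ values                                      # (test_len, d)
print(out.shape)                                          # torch.Size([16, 64])
```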
AI has a lying problem
AI Alignment
What will finally enable this?
The student
Honor Roll student of the future.