The Achilles' heel of GPT-3 and other LLMs is their short context length, which caps how many "in-context" examples they can consume to learn a new task.
Enter "Structured Prompting": scale your examples from dozens => 1,000+
Here's how:
=> Gather thousands of in-context examples
=> Split them into M groups, each small enough to fit within the regular context length
=> Encode each of the M groups independently with the language model
=> Concatenate the encoded groups and let the test input attend over all of them at once through rescaled attention (sketch after the links below)
Paper: https://arxiv.org/pdf/2212.06713.pdf
Code: https://github.com/microsoft/LMOps
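Here is a minimal sketch of that last step in PyTorch. It assumes each group's keys and values were already cached by an ordinary forward pass over that group alone; the 1/M downweighting of demonstration tokens reflects my reading of the paper's rescaled attention, and every name below (rescaled_attention, group_keys, etc.) is hypothetical, not the repo's actual API. Causal masking and numerical stabilization are omitted for brevity.

import torch

def rescaled_attention(q, group_keys, group_values, self_keys, self_values):
    # q: (T, d) queries for the test-input tokens
    # group_keys/group_values: lists of M tensors, each (L_i, d), one per
    #   independently encoded demonstration group
    # self_keys/self_values: (T, d) keys/values of the test input itself
    M = len(group_keys)
    scale = q.shape[-1] ** 0.5

    # exponentiated scores per group, downweighted by 1/M so the M groups
    # jointly carry about as much attention mass as a single context window
    exp_group = [torch.exp(q @ k.T / scale) / M for k in group_keys]
    exp_self = torch.exp(q @ self_keys.T / scale)

    # one shared softmax normalizer across all M groups plus the test tokens
    denom = sum(e.sum(-1, keepdim=True) for e in exp_group) \
        + exp_self.sum(-1, keepdim=True)

    # weighted sum of values from every group and from the test tokens
    out = sum((e / denom) @ v for e, v in zip(exp_group, group_values))
    return out + (exp_self / denom) @ self_values

# toy usage: 3 demonstration groups of 12 tokens each, 5 test tokens, dim 8
d = 8
gk = [torch.randn(12, d) for _ in range(3)]
gv = [torch.randn(12, d) for _ in range(3)]
out = rescaled_attention(torch.randn(5, d), gk, gv,
                         torch.randn(5, d), torch.randn(5, d))

Because each group is encoded on its own, the cost grows linearly with the number of groups instead of quadratically with total prompt length, which is what makes 1,000+ examples feasible.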
Enter "Structured Prompting": scale your examples from dozens => 1,000+
Here's how:
=> Get 1000s of in-context samples
=> split them into M groups, each small enough to fit in regular context length
=> encode each of M groups using LLM encoder
=> combine these encoded groups and attend over a scaled version of the combination simultaneously
Paper: https://arxiv.org/pdf/2212.06713.pdf
Code: https://github.com/microsoft/LMOps
ChatGPT has been a game changer for school essays
I learn through an online school in a small city with about 300 students. Classes cycle every 3 weeks, and the first-semester English class opened up right around when ChatGPT launched. So I decided to see how it, along with Quillbot, would fare. The teacher actually called to congratulate me for being one of the best writers he's taught, and my mom was so proud. I was just sitting there trying so hard not to laugh. AI really is to essays what calculators were to math.