Prompt Engineering: The Career of Future
Unlike most AI systems, which are designed to perform a single specific task, GPT-3 is designed to be task-agnostic: a general-purpose, simple-to-use “text-in, text-out” interface that can potentially perform any number of tasks given the right training prompt. The easy-to-use API has given birth to a new Software 3.0 ecosystem that touches virtually every aspect of human life.
The secret to writing good prompts is understanding what GPT-3 knows about the world and how to get the model to use that information to generate useful results. As in a game of charades, we give the other person just enough information to figure out the word using their intelligence. Similarly, with GPT-3 we give the model just enough context in the form of a training prompt for it to figure out the pattern and perform the given task.
https://medium.com/nerd-for-tech/prompt-engineering-the-career-of-future-2fb93f90f117
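For anyone who has not tried the API, here is a minimal sketch of that “text-in, text-out” loop. It assumes the legacy openai-python (<1.0) Completion endpoint and an invented translation prompt; any completion-style API can be swapped in.

```python
# Minimal "text-in, text-out" sketch. Assumes the legacy openai-python
# (<1.0) Completion endpoint and that OPENAI_API_KEY is set in the
# environment; any completion-style API can be substituted.
import openai

# The training prompt: just enough context for the model to infer the
# task (English -> French translation) from the pattern.
prompt = (
    "Translate English to French.\n"
    "English: cheese\n"
    "French: fromage\n"
    "English: bread\n"
    "French:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # assumed model name; use whatever you have access to
    prompt=prompt,
    max_tokens=16,
    temperature=0.0,  # deterministic output suits a well-defined task
)
print(response.choices[0].text.strip())  # expected: "pain"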
Tips for the Few-Shot Setting:
Balance the examples across classes. For example, if you’re showing “good” and “bad” examples, include an equal number of good and bad examples.
To counter recency bias, shuffle the examples so that, e.g., all of the “bad” examples don’t appear at the end (see the sketch after this list).
https://arxiv.org/abs/2102.09690
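A small sketch of how both tips could be applied when assembling a few-shot sentiment prompt. The labeled reviews and the prompt template are made-up placeholders; send the resulting string to whichever completion API you use.

```python
import random

# Hypothetical labeled reviews -- an equal number of "good" and "bad"
# examples keeps the classes balanced.
examples = [
    ("The food was delicious.", "good"),
    ("Service was fast and friendly.", "good"),
    ("The room smelled awful.", "bad"),
    ("We waited an hour for cold soup.", "bad"),
]

# Shuffle so that all the "bad" examples don't cluster at the end of
# the prompt, where recency bias would pull predictions toward "bad".
random.shuffle(examples)

def build_prompt(query: str) -> str:
    shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

# Send the resulting string to any text-in, text-out completion API.
print(build_prompt("Great view, terrible coffee."))
```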
Prompting tips:
(1) Use declarative and direct signifiers for tasks, such as “translate” or “rephrase this paragraph so that a 2nd grader can understand it.”
(2) Use few-shot demonstrations when the task requires a bespoke format, recognizing that few-shot examples may be interpreted holistically by the model rather than as independent samples.
(3) Specify tasks using characters or characteristic situations as a proxy for an intention such as asking Gandhi or Nietzsche to solve a task. Here you are tapping into LLMs’ sophisticated understanding of analogies.
(4) Constrain the possible completion output using careful syntactic and lexical prompt formulations such as saying “Translate this French sentence to English” or by adding quotes around the French sentence.
(5) Encourage the model to break down problems into sub-problems via step-by-step reasoning (see the sketch after this list).
https://arxiv.org/abs/2102.07350
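To make tips (1), (4), and (5) concrete, here are two illustrative prompt strings. The exact wording is an assumption for demonstration, not taken from the paper.

```python
# Illustrative prompt strings only; the wording is an assumption, not
# a prescription from the paper.

# Tips (1) and (4): a declarative instruction plus quotes around the
# source sentence make clear what is input and what shape the
# completion should take.
translation_prompt = (
    "Translate this French sentence to English.\n"
    'French: "Je voudrais un café, s\'il vous plaît."\n'
    "English:"
)

# Tip (5): explicitly invite step-by-step reasoning so the model
# breaks the problem into sub-problems before answering.
reasoning_prompt = (
    "Q: A train leaves at 9:40 and arrives at 12:10. How long is the trip?\n"
    "A: Let's think step by step."
)

print(translation_prompt)
print(reasoning_prompt)
```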
Andrew Cantino’s prompt engineering tips:
(1) Make sure your inputs are grammatically correct and well written, as LLMs tend to preserve stylistic consistency in their completions.
(2) Rather than generating a list of N items, generate a single item N times. This avoids the language model getting stuck in a repetitive loop.
(3) To improve output quality, generate many completions and then rank them heuristically, as sketched below.
https://blog.andrewcantino.com/blog/2021/04/21/prompt-engineering-tips-and-tricks/
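A rough sketch of tips (2) and (3). The `complete()` helper is a hypothetical stand-in for whichever text-in, text-out API you call, and the length-based ranking is just one possible heuristic.

```python
# Sketch of tips (2) and (3). `complete(prompt)` is a hypothetical
# stand-in for whichever text-in, text-out API you call.

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your completion API here")

def generate_items(prompt: str, n: int = 5) -> list[str]:
    # Tip (2): request one item per call instead of "list N items",
    # which keeps the model from falling into a repetitive loop.
    return [complete(prompt) for _ in range(n)]

def best_completion(prompt: str, n: int = 5) -> str:
    # Tip (3): over-generate, then rank with a cheap heuristic --
    # here, prefer the longest non-trivial completion.
    candidates = generate_items(prompt, n)
    return max(candidates, key=lambda c: len(c.strip()))
```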
Mishra et al.’s prompt reframing techniques:
(1) Use low-level patterns from other examples to make a given prompt easier to understand for an LLM.
(2) Explicitly itemize instructions into bulleted lists, and turn negative statements (e.g., “don’t create questions which are not …”) into positive ones (“create questions which are …”).
(3) When possible, break down a top-level task into different sub-tasks that can be executed in parallel or sequentially.
(4) Avoid repeated and generic statements when trying to solve a very specific task. For example, instead of saying “Answer the following question” for a math problem, say “Calculate the answer to the following question. You need to either add or subtract numbers…” (see the sketch after this list).
https://arxiv.org/abs/2109.07830
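A before/after sketch in the spirit of techniques (2) and (4): itemized, specific instructions in place of a generic one, with negative phrasing turned positive. The arithmetic task and the exact wording are illustrative assumptions, not the prompts from the paper.

```python
# Before/after sketch of reframing. The task and wording are
# illustrative assumptions, not the paper's exact prompts.

original_prompt = (
    "Answer the following question. Don't give an answer that is not a number.\n"
    "Question: {question}\n"
    "Answer:"
)

reframed_prompt = (
    "Calculate the answer to the following question.\n"
    "- You need to either add or subtract the numbers in the question.\n"
    "- Give the answer as a single number.\n"
    "Question: {question}\n"
    "Answer:"
)

print(reframed_prompt.format(question="Sam had 9 apples and gave away 4. How many are left?"))
```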