LAMBADA: Backward Chaining for Automated Reasoning in Natural Language
Achieves massive accuracy boosts over SOTA forward-reasoning methods on two challenging logical-reasoning datasets, particularly when deep and accurate proof chains are required
abs: https://arxiv.org/abs/2212.13894
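For intuition, here is a minimal sketch of classical backward chaining, the strategy LAMBADA applies in natural language: start from the goal and recursively look for rules whose conclusion matches it. This is a toy propositional version for illustration only, not the paper's LLM-based implementation.

```python
def backward_chain(goal, facts, rules, depth=10):
    """Prove `goal` by working backwards from it.

    facts: set of known true propositions.
    rules: list of (premises, conclusion) pairs.
    depth: recursion bound so cyclic rule sets terminate.
    """
    if depth == 0:
        return False
    if goal in facts:
        return True
    # Try every rule that concludes the goal; prove each premise recursively.
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(p, facts, rules, depth - 1) for p in premises
        ):
            return True
    return False
```

Forward chaining would instead expand all consequences of the facts; backward chaining only explores rules relevant to the goal, which is why it helps when deep proof chains are needed.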
Rethinking with Retrieval: Faithful Large Language Model Inference
We propose a novel post-processing approach, rethinking with retrieval (RR), which retrieves relevant external knowledge based on the decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting. This lightweight approach does not require additional training or fine-tuning and is not limited by the input length of LLMs.
This new paper shows the potential of enhancing LLMs by retrieving relevant external knowledge based on decomposed reasoning steps obtained through chain-of-thought prompting.
The proposed method (rethinking with retrieval) seems to consistently outperform CoT (in terms of accuracy and faithfulness of explanations) as model size increases. How would even bigger models perform here?
https://arxiv.org/abs/2301.00303
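A toy sketch of the RR idea, not the authors' implementation: split a chain-of-thought answer into steps, retrieve the most relevant fact for each step from a knowledge base (here a small in-memory list with crude keyword matching standing in for a real retriever), and keep only the steps the retrieved facts support.

```python
def content_words(text):
    """Crude keyword extraction: lowercase words longer than 3 characters."""
    return {w.strip(".,").lower() for w in text.split() if len(w.strip(".,")) > 3}

def retrieve(step, knowledge_base):
    """Return the fact sharing the most content words with the step."""
    return max(knowledge_base,
               key=lambda fact: len(content_words(step) & content_words(fact)))

def rethink_with_retrieval(cot_answer, knowledge_base, min_overlap=2):
    """Keep only reasoning steps supported by retrieved knowledge."""
    supported = []
    for step in cot_answer.split(". "):
        fact = retrieve(step, knowledge_base)
        if len(content_words(step) & content_words(fact)) >= min_overlap:
            supported.append((step, fact))
    return supported
```

The appeal of the approach is visible even in this sketch: it is pure post-processing, so nothing is trained or fine-tuned, and retrieval happens per decomposed step rather than over the whole prompt.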
Prompting GPT-3 to reliably generate text and JSON data in a precise format using Python assertions, f‑strings, and variables declared only in our imaginations.
^ Check out this weird assertions trick. Amazing this works.
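A hedged sketch of what the assertions trick looks like: embed Python assertions in the prompt so the model infers the exact output format, then reuse the same assertions to validate whatever comes back. The `user` variable and its fields are hypothetical, and the completion-API call itself is omitted.

```python
import json

# The prompt shows GPT-3 assertions over a variable that exists only
# "in our imaginations"; the model completes the string that satisfies them.
PROMPT = '''
user = json.loads(completion)
assert set(user) == {"name", "age"}
assert isinstance(user["age"], int)
completion = """
'''

def validate(completion: str) -> dict:
    """Run the prompt's own assertions against the model's output."""
    user = json.loads(completion)
    assert set(user) == {"name", "age"}
    assert isinstance(user["age"], int)
    return user
```

The nice property is symmetry: the constraints you show the model are the exact checks you run on its output, so a completion that "reads" the prompt correctly passes validation by construction.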
GPTZero is a proposed anti-plagiarism tool that claims to be able to detect ChatGPT-generated text. Here's how it did on the first prompt I tried.
The arms race is on.
(Though the detectors are already failing horribly.)
Pro-tip: Such tools should never represent the human / not-human classification with a single number; they need at least two.
E.g. how would one number represent the case where the detector has no idea which class it is? 50%? No — that would mean the detector is sure it will be right half the time, when really it has no idea how often it will be right in this case. You need at least two numbers to represent this properly, not one.
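The two-number point can be made concrete with a small sketch (a hypothetical illustration, not tied to any particular detector): model the detector's belief about P(human) as a Beta distribution and report both its mean and its spread. A coin-flip mean of 0.5 then comes in two very different flavors depending on how much evidence backs it.

```python
import math

def belief(human_evidence, ai_evidence):
    """Mean and standard deviation of Beta(human_evidence+1, ai_evidence+1).

    The mean is the single-number verdict; the standard deviation is the
    second number that says how much the detector actually knows.
    """
    a, b = human_evidence + 1, ai_evidence + 1
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)
```

With zero evidence, `belief(0, 0)` gives a mean of 0.5 with a large spread ("no idea"); with fifty signals each way, `belief(50, 50)` gives the same 0.5 mean but a small spread ("genuinely ambiguous"). One number cannot distinguish these two cases.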
Ask GPT-3 five times about the true origin of “truth stands alone”,
Receive 5 beautiful, complete lies.
🤖🤡🤖🤡🤖🤡🤖🤡🤖🤡