De Bruijn Factor
Essentially, a measure of how many words you need to convince a machine of something.
Or more generally, and in the case of LLMs, how many words are needed to effectively instruct the machine to do some task.
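In Wiedijk's paper, the factor is (roughly) just a size ratio, computed on both raw and gzip-compressed files to correct for verbose formal syntax:

\[
  \text{de Bruijn factor}
  = \frac{\text{size of the formal, machine-checkable text}}{\text{size of the informal original}}
\]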
For proof assistants, LLMs, and even humans, we see the same trend: the smarter the receiver and the more resources it has, the shorter our explanations can be.
A word to the wise is sufficient.
If you ask me, GPT-4 has already surpassed typical humans on this ratio for instruction tasks, though it's still far behind top experts in many areas.
More on the de Bruijn Factor
Original 2000 paper on the de Bruijn Factor
GPT-3.5 Turbo can now use external tools via JSON function calling: the LLM decides which functions to call based on the descriptions you give them in the prompt.
“This model has been updated with a new version: gpt-3.5-turbo-0613 which is more steerable with the system message and includes a new capability: function calling. By describing functions in your prompts, the model can intelligently output a JSON object containing arguments to call these functions based on user input — perfect for integrating with other tools or APIs.”
Function Calling Docs
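A minimal sketch of what this looks like with the pre-v1 openai Python SDK (the get_current_weather schema is the toy example from OpenAI's docs, not anything built in):

import openai  # pre-v1 SDK, contemporary with gpt-3.5-turbo-0613

# Each function gets a name, a natural-language description, and a JSON
# Schema for its parameters; the descriptions are what the model reads
# to decide when and how to call it.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string",
                         "description": "City and state, e.g. San Francisco, CA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather like in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call anything
)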
Example usage for ChatGPT’s new function calling API
Basically: first, describe the functions and each of their parameters using natural-language English descriptions plus basic typing info. Then check whether the LLM wants to call any of them. Finally, give the LLM the responses to those function calls (see the sketch below).
Function Calling Docs
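A hedged sketch of that round trip, continuing the weather example above (get_current_weather stands in for whatever real function you'd actually run):

import json

message = response["choices"][0]["message"]

if message.get("function_call"):  # did the model want to call a function?
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)  # your real implementation

    # Hand the result back as a role="function" message so the model can
    # compose its final natural-language answer.
    followup = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "user", "content": "What's the weather like in Boston?"},
            message,  # the assistant turn containing the function_call
            {"role": "function",
             "name": message["function_call"]["name"],
             "content": json.dumps(result)},
        ],
    )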
me, 2035, when they find out I’m running the last unauthorized GPU cluster
Using ChatGPT’s new function-calling API to force ChatGPT to return results in the form of guaranteed-valid JSON
The author speculates that this new API uses some form of constrained decoding, which would force the LLM to return only valid JSON, freeing it to focus on the task at hand rather than on the complicated encoding process.
“Function calling allows GPT to call a function instead of returning a string.
For this feature, two new parameters have been introduced in the Chat Completions API:
(1) functions: An array of functions available to GPT, each with a name, description and a JSON Schema of the parameters.
(2) function_call: You can optionally specify none or { "name": "<function_name>" }. You can force GPT to use a specific function (or explicitly forbid calling any functions)
I realized that by setting the function_call parameter, you can reliably expect JSON as responses from GPT calls. No more strings, yay!”
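For instance (a sketch; extract_person and its schema are invented for illustration, using the same pre-v1 SDK as above):

import json
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Jane Doe is 42 and lives in Berlin."}],
    functions=[{
        "name": "extract_person",  # hypothetical extraction "function"
        "description": "Record structured facts about a person",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
                "city": {"type": "string"},
            },
            "required": ["name"],
        },
    }],
    function_call={"name": "extract_person"},  # force the call: no free-form reply
)

# arguments arrives as a JSON string shaped by the schema
person = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])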
“I assume OpenAI’s implementation works conceptually similar to jsonformer, where the token selection algorithm is changed from “choose the token with the highest logit” to “choose the token with the highest logit which is valid for the schema”.”
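A toy sketch of that selection rule (purely conceptual; OpenAI hasn't published their implementation, and every name below is invented):

import numpy as np

def constrained_decode_step(logits, vocab, output_so_far, is_valid_prefix):
    """Greedy step: return the highest-logit token whose addition keeps the
    output a valid prefix of some schema-conforming JSON document."""
    for idx in np.argsort(logits)[::-1]:  # token ids, best logit first
        if is_valid_prefix(output_so_far + vocab[idx]):
            return vocab[idx]
    raise ValueError("no token can continue this output under the schema")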
Article