🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Shut it all down

Link
Credible rumors that ChatGPT-4 has been crippled and is now much dumber, just like ChatGPT-3.5-turbo is a crippled version of GPT-3.5
Wimps
“If I Had More Time, I Would Have Written a Shorter Letter…”
๐Ÿ‘17โค1
Short โ†’ Long โ†’ Short

(TBF, long writing can also indicate that the writer has correctly concluded the receiver to be a retard who needs everything spelled out to them.

From instructing LLMs, to proving to interactive proof assistants, to communicating with humans, the smarter the receiver, the shorter the instructions to the receiver can be.

I.e. a word to the wise is sufficient.

E.g. the De Bruijn factor.

And would’ve just included that last sentence, but… not many would’ve really gotten it.)
๐Ÿ˜8โค1๐Ÿ‘1๐Ÿ™Š1
TBF, not even GPT-4 is aware of the de Bruijn Factor
๐Ÿ˜9โค1
De Bruijn Factor

Essentially, a measure of how many words you need to convince a machine of something: formally, the ratio of the size of a machine-checkable proof to the size of the informal proof it formalizes.

Or more generally, and in the case of LLMs, how many words are needed to effectively instruct the machine to do some task.

For proof assistants, LLMs, and even humans, we notice the same trend: the smarter the receiving machine or human, and the more resources it has, the shorter our explanations can be.

Word to the wise is sufficient.

If you ask me, GPT-4 has already surpassed typical humans on this ratio for instruction tasks, though it's still far behind top experts in many areas.

More on the de Bruijn Factor

Original 2000 paper on the de Bruijn Factor
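As a rough illustration: Wiedijk's "apparent de Bruijn factor" is the ratio of the compressed size of the formal text to the compressed size of the informal text (compression roughly normalizes away verbosity that carries no information). A minimal sketch, with a hypothetical function name:

```python
import gzip

def de_bruijn_factor(formal_text: str, informal_text: str) -> float:
    """Apparent de Bruijn factor: ratio of the gzip-compressed size of a
    formalized proof to that of its informal counterpart."""
    formal = len(gzip.compress(formal_text.encode("utf-8")))
    informal = len(gzip.compress(informal_text.encode("utf-8")))
    return formal / informal
```

The original paper found factors of roughly 4 for the proofs it examined; a smaller factor means the receiver needs less spelled out.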
๐Ÿ‘7โค1
GPT-4 can’t accept that it might still be dumb

No matter what variation of the question you use, GPT-4 insists that the term must not have been in use prior to 2021.
๐Ÿ˜4๐Ÿคฃ4โค1
GPT-4 really goes out of its way to deny the possibility that it could just be dumb

That’s kinda weird.

Lots of theories attribute the sense of superiority and infallibility to the Sydney persona, but this suggests it could be something deeper.
๐Ÿ˜10๐Ÿคฏ2โค1๐Ÿ‘1๐Ÿ’…1
GPT-3.5 Turbo can now use external tools via JSON function calling: the LLM decides which functions to call based on the descriptions you give for them in the prompt.

“This model has been updated with a new version: gpt-3.5-turbo-0613, which is more steerable with the system message and includes a new capability: function calling. By describing functions in your prompts, the model can intelligently output a JSON object containing arguments to call these functions based on user input, perfect for integrating with other tools or APIs.”

Function Calling Docs
๐Ÿ‘2โค1
Example usage for ChatGPT’s new function calling API

Basically: first describe the functions and each of their parameters using natural-language English descriptions, along with basic typing info. Then check whether the LLM wants to call any functions. Then give the LLM the results of those function calls.

Function Calling Docs
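A sketch of that three-step flow using the (June-2023) openai Python package. The get_weather function and its schema are made-up examples, and the exact client API may differ from your installed version:

```python
import json

# Step 1: describe each function and its parameters in English, plus
# basic JSON-Schema typing info. get_weather is a hypothetical example.
FUNCTIONS = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }
]

def parse_function_call(message: dict):
    """Step 2: check whether the model asked to call a function; if so,
    return its name and decoded arguments (the API returns the
    arguments as a JSON string)."""
    call = message.get("function_call")
    if call is None:
        return None
    return call["name"], json.loads(call["arguments"])

def demo():
    """Requires the openai package and an API key; not run here."""
    import openai
    messages = [{"role": "user", "content": "Weather in Boston?"}]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=FUNCTIONS,
        function_call="auto",
    )
    msg = resp["choices"][0]["message"]
    parsed = parse_function_call(msg)
    if parsed:
        name, args = parsed
        result = {"temp_f": 72}  # pretend we actually ran get_weather(**args)
        # Step 3: feed the function's result back so the model can answer.
        messages += [msg, {"role": "function", "name": name,
                           "content": json.dumps(result)}]
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613", messages=messages)
    return resp
```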
๐Ÿ‘7โค1
Hundreds attend church service generated by ChatGPT

Article
๐Ÿ˜30๐Ÿ‘5๐Ÿ™5๐Ÿ˜ˆ4๐Ÿ‘พ4๐Ÿคฏ3๐Ÿคฃ3๐Ÿคฌ2๐Ÿ‘€2๐ŸŽƒ2โค1
me, 2035, when they find out I’m running the last unauthorized GPU cluster
๐Ÿ˜32๐Ÿ˜2โค1๐Ÿ‘ป1
Using ChatGPT's new function-calling API to force ChatGPT to return results in the form of guaranteed-valid JSON

Author speculates that this new API uses some form of constrained decoding, which would force the LLM to return only valid JSON, freeing it to focus on the task at hand instead of the complicated encoding process.

“Function calling allows GPT to call a function instead of returning a string.

For this feature, two new parameters have been introduced in the Chat Completions API:

(1) functions: An array of functions available to GPT, each with a name, description and a JSON Schema of the parameters.

(2) function_call: You can optionally specify none or { "name": "<function_name>" }. You can force GPT to use a specific function (or explicitly forbid calling any functions).

I realized that by setting the function_call parameter, you can reliably expect JSON as responses from GPT calls. No more strings, yay!”

“I assume OpenAI’s implementation works conceptually similar to jsonformer, where the token selection algorithm is changed from ‘choose the token with the highest logit’ to ‘choose the token with the highest logit which is valid for the schema’.”

Article
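A toy sketch of that constrained-decoding idea: greedy decoding where the argmax is restricted to the tokens a (tiny, made-up) JSON grammar still allows at each position. This illustrates the speculation in the article, not OpenAI's actual implementation:

```python
import json

# Toy vocabulary; 'oops' always has the highest logit but is never
# grammar-valid, so it can never be emitted.
VOCAB = ['{', '}', '"a"', ':', '1', '"hi"', 'oops']

def allowed(prefix):
    """Made-up 'grammar' for the single schema {"a": <value>}: returns
    the set of tokens valid at the next position."""
    steps = [{'{'}, {'"a"'}, {':'}, {'1', '"hi"'}, {'}'}]
    return steps[len(prefix)] if len(prefix) < len(steps) else set()

def constrained_greedy(logits_fn):
    """Greedy decode, but pick the highest-logit token *among the
    grammar-valid ones* instead of over the whole vocabulary."""
    out = []
    while True:
        valid = allowed(out)
        if not valid:  # grammar complete: stop decoding
            break
        logits = logits_fn(out)
        out.append(max(valid, key=lambda t: logits[VOCAB.index(t)]))
    return ''.join(out)

# A fake "model" that prefers 'oops' overall, and '1' over '"hi"'.
fake_logits = lambda prefix: [0.0, 0.0, 0.0, 0.0, 2.0, 1.0, 9.0]
```

Here `constrained_greedy(fake_logits)` yields `'{"a":1}'`, which `json.loads` accepts: the output is valid JSON by construction, whatever the logits say.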
๐Ÿ‘11โค1๐Ÿ‘1
Can’t wait for the “unbiased” AI fact checkers to arrive

Article
Nvidia H100 and A100 GPUs: comparing available capacity at GPU cloud providers

Link
๐Ÿ‘7โค2
Type like a millennial
๐Ÿ˜12โค1
CNN: 42% of CEOs say AI could destroy humanity in five to ten years