🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Welcome to the AI-mediating-all-interactions future

“Amazon just locked a man out of his smart home for a week because a delivery driver reported him as a racist after mishearing something from the doorbell – the guy wasn’t even at home.”

“A man found himself locked out of his smart house powered by Amazon because, while he wasn't home, an Amazon delivery driver mistakenly thought he heard a racist remark come from the man's doorbell, reported it to Amazon, and Amazon immediately locked down the account, locking the man out of his home.”

“The Eufy doorbell had issued an automated response: ‘Excuse me, can I help you?’ The driver, who was walking away and wearing headphones, must have misinterpreted the message. Nevertheless, by the following day, my Amazon account was locked, and all my Echo devices were logged out.”

Medium Article
🤬15😱4👍3🤣32😐1
Shut it all down

Link
🫡112👍2👏2😁2
Credible rumors that ChatGPT-4 has been crippled and is now much dumber, just like ChatGPT-3.5-turbo is a crippled version of GPT-3.5
🤬251👍1
Wimps
🤣171👍1
“If I Had More Time, I Would Have Written a Shorter Letter…”
👍171
Short → Long → Short

(TBF, long writing can also indicate that the writer has correctly concluded the receiver to be a retard, who needs everything spelled out to them.

From instructing LLMs, to proving to interactive proof assistants, to communicating with humans, the smarter the receiver, the shorter the instructions to the receiver can be.

I.e. a word to the wise is sufficient.

E.g. the De Bruijn factor.

And would’ve just included that last sentence, but… not many would’ve really gotten it.)
😁81👍1🙊1
TBF, not even GPT-4 is aware of the De Bruijn Factor
😁91
De Bruijn Factor

Essentially, the ratio between the size of a formal, machine-checkable proof and the size of the corresponding informal, human-readable one: how many words you need to convince a machine of something.

Or, more generally (and in the case of LLMs), how many words are needed to effectively instruct the machine to do some task.

For proof assistants, LLMs, and even humans, we see the same trend: the smarter and better-resourced the receiver, the shorter our explanations can be.

A word to the wise is sufficient.

If you ask me, GPT-4 has already surpassed typical humans on this ratio for instruction tasks, though it's still far behind top experts in many areas.
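As a rough illustration of how the ratio gets measured: Wiedijk's 2000 paper compares gzip-compressed file sizes of the formal and informal texts, so syntax verbosity rather than whitespace drives the number. A minimal sketch, where the two "proof" strings are toy stand-ins (the Lean-style snippet is illustrative, not from the paper):

```python
import gzip

def de_bruijn_factor(formal_text: str, informal_text: str) -> float:
    """Ratio of compressed formal-proof size to compressed informal-proof size.

    Compressing first (as Wiedijk does for the "intrinsic" factor) keeps
    repeated boilerplate and whitespace from inflating the ratio.
    """
    formal = len(gzip.compress(formal_text.encode("utf-8")))
    informal = len(gzip.compress(informal_text.encode("utf-8")))
    return formal / informal

# Toy stand-ins, not real corpus texts:
informal = "The sum of two even numbers is even."
formal = (
    "theorem even_add_even (m n : Nat) (hm : 2 ∣ m) (hn : 2 ∣ n) :\n"
    "    2 ∣ (m + n) := by\n"
    "  obtain ⟨a, ha⟩ := hm\n"
    "  obtain ⟨b, hb⟩ := hn\n"
    "  exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩\n"
)
print(round(de_bruijn_factor(formal, informal), 2))  # ratio > 1: the formal text costs more
```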

More on the de Bruijn Factor

Original 2000 paper on the de Bruijn Factor
👍71
GPT-4 can’t accept that it might still be dumb

No matter what variation of the question you use — GPT-4 insists that the term must not have been in use prior to 2021.
😁4🤣41
GPT-4 really goes out of its way to deny the possibility that it could just be dumb

That’s kinda weird.

Lots of theories about the Sydney-like sense of superiority and infallibility being part of the Sydney persona, but this suggests it could be something deeper.
😁10🤯21👍1💅1
GPT-3.5 Turbo can now use external tools via json function calling, used by the LLM based on the descriptions you give your functions in the prompts.

“This model has been updated with a new version: gpt-3.5-turbo-0613 which is more steerable with the system message and includes a new capability: function calling. By describing functions in your prompts, the model can intelligently output a JSON object containing arguments to call these functions based on user input — perfect for integrating with other tools or APIs.”

Function Calling Docs
👏21
Example usage for ChatGPT’s new function calling API

Basically: first describe the functions and each of their parameters, using natural-language English descriptions along with basic typing info. Then check whether the LLM wants to call any of those functions. Then give the LLM the responses to those function calls.
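A minimal sketch of those steps in Python. The weather function and its parameters are hypothetical (not from the docs), and the API round trip is replaced by a canned response shaped like the Chat Completions output, so only the describe-then-parse flow is shown:

```python
import json

# Step 1: describe the function in natural language plus basic typing info.
# The function name and parameters here are hypothetical illustrations.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Step 2: check whether the model wanted to call a function.
def parse_function_call(message):
    """Return (name, parsed_args) if the assistant requested a call, else None."""
    call = message.get("function_call")
    if call is None:
        return None
    return call["name"], json.loads(call["arguments"])

# Canned assistant message in the shape the API returns:
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston", "unit": "celsius"}',
    },
}
name, args = parse_function_call(message)
print(name, args)
```

Step 3 would then be appending a `{"role": "function", "name": name, "content": "<your function's result>"}` message and calling the API again so the model can use the result.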

Function Calling Docs
👍71
Hundreds attend church service generated by ChatGPT

Article
😐30👍5🙏5😈4👾4🤯3🤣3🤬2👀2🎃21
me, 2035, when they find out I’m running the last unauthorized GPU cluster
😁32😐21👻1
Using ChatGPT's new function-calling API to force ChatGPT to return results in the form of guaranteed-valid JSON

The author speculates that this new API uses some form of constrained decoding, which would force the LLM to return only valid JSON, freeing it to focus on the task at hand instead of the complicated encoding process.

“Function calling allows GPT to call a function instead of returning a string.

For this feature, two new parameters have been introduced in the Chat Completions API:

(1) functions: An array of functions available to GPT, each with a name, description and a JSON Schema of the parameters.

(2) function_call: You can optionally specify none or { "name": "<function_name>" }. You can force GPT to use a specific function (or explicitly forbid calling any functions)

I realized that by setting the function_call parameter, you can reliably expect JSON as responses from GPT calls. No more strings, yay!”

“I assume OpenAI’s implementation works conceptually similar to jsonformer, where the token selection algorithm is changed from “choose the token with the highest logit” to “choose the token with the highest logit which is valid for the schema”.”
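A toy sketch of that speculated jsonformer-style selection rule. The "vocabulary", logits, and the set of schema-valid next tokens below are all made up for illustration; a real implementation would track a JSON grammar state to compute what's valid at each step:

```python
def constrained_pick(logits, is_valid):
    """Argmax over logits, restricted to tokens the schema/grammar allows."""
    allowed = {tok: score for tok, score in logits.items() if is_valid(tok)}
    return max(allowed, key=allowed.get)

# Suppose the output so far is '{"name":'. A plain argmax would emit '}'
# (the highest logit) and produce invalid JSON; the constrained pick can't.
logits = {"}": 3.1, ' "': 2.4, "hello": 1.0}
valid_next = {' "'}  # what a real JSON-grammar tracker would allow here
print(repr(constrained_pick(logits, lambda tok: tok in valid_next)))
```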

Article
👍111👏1
Can’t wait for the “unbiased” AI fact checkers to arrive

Article
🤬10🤣7💯5👀21👌1💅1
Nvidia H100 and A100 GPUs - comparing available capacity at GPU cloud providers

Link
👍72
Type like a millennial
😁121
CNN: 42% of CEOs say AI could destroy humanity in five to ten years
🤣232👍2