🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Over Christmas, DoNotPay managed to create an AI deepfake clone of my voice. Then, with GPT, we got the bot to phone up Wells Fargo and successfully overturn some wire fees.

This is the perfect use case for AI. Nobody has time to argue on the phone about $12!

We used https://Resemble.ai for my voice, GPT-J (open source) for polite live responses with the agent, and GPT-3.5 (OpenAI ChatGPT) / DoNotPay AI models for the script.
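None of the glue code is public, but the setup described above maps onto a simple loop: a big model drafts the dispute script, a smaller local model handles live replies, and a cloned voice speaks them over the call. A rough sketch, assuming the pre-1.0 OpenAI Python client and a Hugging Face GPT-J checkpoint; the voice and telephony functions are purely hypothetical stubs, not Resemble.ai's or DoNotPay's actual APIs:

```python
# Hypothetical sketch of the dispute-call pipeline described above.
# The real DoNotPay / Resemble.ai integrations are not public; every
# function marked "stub" is a placeholder, not an actual API.
import openai                      # OpenAI Python client (pre-1.0 style API)
from transformers import pipeline  # for a locally hosted GPT-J

# 1. Script: ask a GPT-3-family completion model for an opening argument.
def draft_dispute_script(fee_amount: float) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a short, polite phone script disputing a ${fee_amount} "
               f"wire transfer fee with a bank agent.",
        max_tokens=200,
    )
    return resp.choices[0].text.strip()

# 2. Live replies: a locally hosted GPT-J keeps per-turn latency low.
gptj = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

def polite_reply(agent_said: str, history: str) -> str:
    prompt = f"{history}\nAgent: {agent_said}\nCustomer (polite, brief):"
    out = gptj(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    return out[len(prompt):].strip()

# 3. Voice: stub standing in for a Resemble.ai-style cloned-voice TTS call.
def speak_with_cloned_voice(text: str) -> None:
    print(f"[cloned voice says] {text}")   # real TTS/telephony omitted

# 4. Listening: stub standing in for speech-to-text on the phone line.
def transcribe_agent_speech() -> str:
    return input("Agent: ")                # real ASR omitted

if __name__ == "__main__":
    history = draft_dispute_script(12.0)
    speak_with_cloned_voice(history)
    while True:
        agent = transcribe_agent_speech()
        if not agent:
            break
        reply = polite_reply(agent, history)
        history += f"\nAgent: {agent}\nCustomer: {reply}"
        speak_with_cloned_voice(reply)
```

Splitting the work this way (large model offline for the script, small local model for turn-by-turn replies) is plausibly about latency on a live call, though the post doesn't say.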
💰💰💰
Introducing G-3PO

(A Script that Solicits GPT-3 for Comments on Decompiled Code)

Introducing a new Ghidra script that elicits high-level explanatory comments for decompiled function code from the GPT-3 large language model.

https://medium.com/tenable-techblog/g-3po-a-protocol-droid-for-ghidra-4b46fa72f1ff
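The blog post walks through the full Ghidra integration; stripped of that, the core idea is just a prompt round trip. A minimal sketch, assuming the pre-1.0 OpenAI Python client and an illustrative prompt, not the actual G-3PO script or its prompt wording:

```python
# Minimal sketch of the G-3PO idea: hand GPT-3 a decompiled function and
# ask for a high-level explanatory comment. This is not the real G-3PO
# Ghidra script; it only shows the prompt/comment round trip.
import openai

def comment_for_decompiled_function(c_code: str) -> str:
    prompt = (
        "Below is a function decompiled by Ghidra. Explain in a few "
        "sentences what it does, suitable for a code comment.\n\n"
        f"{c_code}\n\nExplanation:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",   # GPT-3 completion model, per the post
        prompt=prompt,
        max_tokens=256,
        temperature=0.2,
    )
    return resp.choices[0].text.strip()

if __name__ == "__main__":
    decompiled = """
    undefined4 FUN_00401000(char *param_1) {
      int iVar1;
      iVar1 = strlen(param_1);
      return iVar1 * 2;
    }
    """
    # Inside Ghidra, the result would be attached to the function as a
    # comment; here we just print it.
    print(comment_for_decompiled_function(decompiled))
```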
๐Ÿ‘1
pmarca: While we're writing for the New York Times...
๐Ÿ‘2
Where are they saved?
๐Ÿ‘5
AI researchers are rotators not wordcels confirmed
๐Ÿ‘4
Rotators can't get that wordcels do their thinking with language.

Wordcels can't get that rotators do their thinking with non-linguistic visualizations and intuitions.

Wordrotators sitting back watching and laughing at both of them.
๐Ÿ‘7๐Ÿ˜1
Sorry wordcel-GPT3, you wouldn't get it
Unless… nope, confirmed.
🚨 AI WATERMARKING 🚨

Thx to Scott Aaronson, GPT outputs will soon be watermarked w/ a random seed, making it much harder to submit your GPT-written homework without getting caught

He doesn't give too many details about how it works, but I suspect it's possible to bypass using a clever decoding strategy.
Scott Aaronson's (OpenAI) AI output watermarking scheme explained

Possibly circumventable by using a different LLM to paraphrase the output of GPT.

…Unless the operators of that LLM are cooperating in the watermarking too.

And how many high-quality LLMs are we going to have access to in the first place, to be able to pull this off? The answer today is just ~1.
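For reference, the public sketch of Aaronson's scheme (from his talks, not any published OpenAI spec) goes roughly like this: a pseudorandom function keyed on the last few tokens gives every candidate token a score r in (0, 1); the sampler picks the token maximizing r^(1/p), which leaves the model's output distribution unchanged; the key-holder later checks whether the chosen tokens' scores are suspiciously close to 1. A toy illustration of that idea, with a made-up key, hash, and context window (none of these are OpenAI's actual parameters):

```python
# Toy sketch of an Aaronson-style statistical watermark. A keyed PRF scores
# each candidate token; sampling argmax(r ** (1/p)) keeps the per-token
# distribution unchanged while letting a key-holder detect the bias later.
# The key, hash, and context window here are illustrative assumptions.
import hashlib
import math
import random

KEY = b"secret-watermark-key"
CONTEXT = 4  # how many previous tokens seed the PRF (assumption)

def prf(key: bytes, context: tuple[int, ...], token: int) -> float:
    """Deterministic pseudorandom score in (0, 1) for a candidate token."""
    msg = key + str((context, token)).encode("utf-8")
    h = hashlib.sha256(msg).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def watermarked_sample(probs: dict[int, float], context: tuple[int, ...]) -> int:
    """Pick argmax r_t ** (1/p_t): marginally identical to sampling from probs."""
    return max(probs, key=lambda t: prf(KEY, context, t) ** (1.0 / probs[t]))

def detection_score(tokens: list[int]) -> float:
    """Average of -ln(1 - r). Watermarked text scores well above ~1 per token."""
    score = 0.0
    for i, tok in enumerate(tokens):
        ctx = tuple(tokens[max(0, i - CONTEXT):i])
        score += -math.log(1.0 - prf(KEY, ctx, tok))
    return score / max(len(tokens), 1)

if __name__ == "__main__":
    # Fake "model": uniform distribution over a 50-token vocabulary.
    vocab = {t: 1 / 50 for t in range(50)}
    marked = []
    for _ in range(200):
        ctx = tuple(marked[-CONTEXT:])
        marked.append(watermarked_sample(vocab, ctx))
    unmarked = [random.randrange(50) for _ in range(200)]
    print("watermarked text score:", round(detection_score(marked), 2))   # >> 1
    print("ordinary text score:  ", round(detection_score(unmarked), 2))  # ~= 1
```

This also shows why the paraphrase attack above is plausible: re-generating the text with a different model re-rolls the token choices, so the keyed scores fall back to chance and the detector's signal washes out.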
๐Ÿ‘2