Over Christmas, DoNotPay managed to create an AI deepfake clone of my voice. Then, with GPT, we got the bot to phone up Wells Fargo and successfully overturn some wire fees.
This is the perfect use case for AI. Nobody has time to argue on the phone about $12!
We used https://Resemble.ai for my voice, GPT-J (open source) for polite live responses with the agent, GPT-3.5 (OpenAI ChatGPT)/DoNotPay AI models for the script.
Introducing G-3PO
(A Script that Solicits GPT-3 for Comments on Decompiled Code)
Introducing a new Ghidra script that elicits high-level explanatory comments for decompiled function code from the GPT-3 large language model.
https://medium.com/tenable-techblog/g-3po-a-protocol-droid-for-ghidra-4b46fa72f1ff
🚨 AI WATERMARKING 🚨
Thanks to Scott Aaronson, GPT outputs will soon be watermarked with a random seed, making it much harder to submit your GPT-written homework without getting caught.
He doesn't give too many details about how it works, but I suspect it's possible to bypass it with a clever decoding strategy.
Thx to Scott Aaronson, GPT outputs will soon be watermarked w/ a random seed, making it much harder to submit your GPT-written homework without getting caught
He doesnβt give too many details about how it works, but I suspect its possible to bypass using a clever decoding strat
Scott Aaronson's (OpenAI) AI output watermarking scheme explained
Possibly circumventable by paraphrasing GPT's output with a different LLM.
…Unless the operators of that LLM are cooperating in the watermarking too.
And how many high-quality LLMs will we even have access to in order to pull this off? The answer today is just ~1.