Bard uses a Hacker News comment as a source to claim that Bard has been shut down
User: how long will it take before google bard will be shut down?
Bard: Google Bard is already shut down. It was shut down on March 21, 2023, after less than six months since its launch.
Google cited the lack of adoption as the reason for the shutdown and expressed their commitment to continuing to explore ways to use AI to enhance creative expression.
Can AI-Generated Text be Reliably Detected? NO, if you (1) employ paraphrasing and (2) your LLM isn't being used to spy on you.
Empirically, we show that paraphrasing attacks, where a light paraphraser is applied on top of the generative text model, can break a whole range of detectors, including the ones using the watermarking schemes as well as neural network-based detectors and zero-shot classifiers.
We then provide a theoretical impossibility result indicating that for a sufficiently good language model, even the best-possible detector can only perform marginally better than a random classifier.
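To make "marginally better than a random classifier" concrete: the impossibility result bounds the AUROC of any detector D by the total variation distance TV(M, H) between the model's text distribution M and the human text distribution H. Stated here from memory, so treat the exact form as an assumption and check the paper for the precise statement:

\[
\mathrm{AUROC}(D) \;\le\; \tfrac{1}{2} + \mathrm{TV}(M, H) - \tfrac{1}{2}\,\mathrm{TV}(M, H)^2 .
\]

As the model's outputs become statistically closer to human text, TV(M, H) tends to 0 and the best achievable AUROC approaches 1/2, i.e. chance-level detection.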
We use a PEGASUS-based [Zhang et al., 2019] paraphraser to rephrase this watermarked output from the target LLM. The paraphraser rephrases sentence by sentence. The detector does not detect the output text from the paraphraser. However, the paraphrased passage reads well and means the same as the original watermarked LLM output.
PEGASUS paraphraser.
Paper.
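A minimal sketch of the sentence-by-sentence paraphrasing attack, assuming Hugging Face transformers and nltk are available. The checkpoint name "tuner007/pegasus_paraphrase" and the helper functions are illustrative assumptions, not necessarily the exact setup used in the paper:

```python
# A minimal sketch, assuming `transformers` and `nltk` are installed.
# The checkpoint "tuner007/pegasus_paraphrase" is an assumption (a public
# PEGASUS paraphrasing model); the paper's exact paraphraser may differ.
import nltk
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

nltk.download("punkt", quiet=True)  # data for the sentence splitter

MODEL_NAME = "tuner007/pegasus_paraphrase"  # assumed checkpoint
tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
model = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME)


def paraphrase_sentence(sentence: str) -> str:
    """Rephrase one sentence with beam search."""
    batch = tokenizer([sentence], truncation=True, padding="longest",
                      max_length=64, return_tensors="pt")
    out = model.generate(**batch, max_length=64, num_beams=5,
                         num_return_sequences=1)
    return tokenizer.decode(out[0], skip_special_tokens=True)


def paraphrase_passage(passage: str) -> str:
    """Apply the paraphraser sentence by sentence, then rejoin."""
    return " ".join(paraphrase_sentence(s) for s in nltk.sent_tokenize(passage))


if __name__ == "__main__":
    watermarked_output = ("Put the watermarked output of the target LLM here. "
                          "Each sentence is rephrased independently.")
    print(paraphrase_passage(watermarked_output))
```

The paraphrased passage would then be fed back to the detector (watermark-based, neural, or zero-shot) to check whether detection rates drop, which is the effect the paper reports.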
Bard vs ChatGPT-3.5 vs Bing vs ChatGPT-4
"Time flies like an arrow. Fruit flies like a banana." What does that mean?
(Winner: ChatGPT-4. Bard thinks fruit "flies." ChatGPT-3.5 was right on the high-level summary but wrong on the detailed explanation: "fruit flies" is not a homophone for "time flies." Bing fabricates the Duck Soup citation. ChatGPT-4 is the only one to point out that both 'like' and 'flies' have a different meaning each time, without including any lies.)
"Time flies like an arrow. Fruit flies like a banana." What does that mean?
(Winner: ChatGPT-4 β Bard thinks fruit "flies." ChatGPT-3.5 was right on the high-level summary but wrong on the detailed explanation, βfruit fliesβ is not a homophone for"time flies." Bing fabricates the Duck Soup citation. ChatGPT-4 is only one to point out that both 'like' and 'flies' have different meanings each time, without including any lies.)
π12β€1π1π1