Fake citations: AI has a truthfulness problem
Bing users are increasingly noticing that the citations are real links, but that the linked pages don't at all support what the citation claims they do: hallucinated support.
As if trying to pass off citations alone as some kind of proof wasn't bad enough.
Hallucination watch: users believing Bing when it suggests that it has used online tools, when in fact it hasn't, and cannot.
A man has been attempting for weeks to convince ChatGPT that 2+2=5.
AI, we have a problem: Poisoning of AI Training Datasets is Practical
Researchers show they can poison the LAION-400M and COYO-700M AI training datasets far more cheaply than people assume: for just US$60.
Previous work confirmed that effective poisoning attacks often require corrupting just 0.01% of the data (roughly 40,000 of LAION-400M's 400 million samples).
They suggest "automated integrity checking" as a solution, but how does one automatically check the truth and values-alignment of large training datasets? (A hash-based sketch follows below the paper link.)
Paper
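For concreteness, here is a minimal sketch of what per-sample integrity checking could look like for a URL-indexed dataset like LAION-400M, assuming the maintainers had recorded a content hash for each URL at collection time (the hash column and the verify_sample helper are illustrative, not part of LAION's actual metadata). It catches content that has changed since collection, such as an image swapped in after an expired domain was re-registered, but it says nothing about whether the original content was truthful or values-aligned in the first place.

```python
import hashlib

import requests  # third-party HTTP client, assumed available


def verify_sample(url: str, expected_sha256: str, timeout: float = 10.0) -> bool:
    """Re-download a dataset sample and compare it to the hash recorded at collection time.

    A mismatch means the bytes behind the URL have changed since the dataset
    was assembled: benign link rot, or a poisoning attempt where an attacker
    now controls the domain and serves different content.
    """
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
    except requests.RequestException:
        # Unreachable content cannot be verified; treat it as failing the check.
        return False
    return hashlib.sha256(resp.content).hexdigest() == expected_sha256


# Hypothetical usage over (url, sha256) pairs distributed with the dataset:
# clean = [(u, h) for u, h in metadata if verify_sample(u, h)]
```

Even if every hash verifies, the check only proves "this is the same content we crawled", not that what was crawled is accurate or benign, which is exactly the gap the post points at.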