Hallucination watch: Users believing Bing when it suggests that it's used online tools, when it hasn't and cannot.
Man has been attempting to train ChatGPT that 2+2=5 for weeks
AI we have a problem: Poisoning of AI Training Datasets is Practical
Researchers show they can poison the LAION-400M and COYO-700M AI training datasets far more cheaply than people assume: just $60 USD.
Previous work confirmed that effective poisoning attacks often require poisoning just 0.01% of the data.
Suggests "automated integrity checking" as a solution, but how does one automatically check the truth and values-alignment of large training datasets?
Paper
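One concrete form of "automated integrity checking" is to record a cryptographic hash of each example's content when the dataset index is curated and verify it again at download time, so content swapped in afterwards (for example, on an expired domain that an attacker re-registers) gets rejected. Below is a minimal Python sketch of such a per-URL check; the function name and the idea of a stored expected_sha256 per entry are illustrative assumptions, not the paper's exact mechanism.

```python
import hashlib
import requests

def verify_example(url: str, expected_sha256: str, timeout: int = 10) -> bool:
    """Re-download one entry of a URL-indexed dataset (LAION-style) and check
    that its bytes still match the hash recorded at curation time.
    A mismatch means the content changed after the index was built."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
    except requests.RequestException:
        # Unreachable or broken content is treated as failing the check.
        return False
    return hashlib.sha256(resp.content).hexdigest() == expected_sha256
```

A check like this only catches content that changed after curation; it does not answer the harder question above of verifying the truth or values-alignment of the data that was there in the first place.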