GPT-5 training
Quora CEO throws around a bunch of words while showing he clearly doesn't understand what they mean, gets wrecked in the replies
(E.g. Pareto front / Pareto efficient frontier may have been the term he was trying to parrot, but that achieves the opposite of what he thinks it does: rather than producing a single choice that's best for everyone, such fronts only reduce the set of options to the smaller set of non-dominated ones, never down to a single option.)
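A minimal sketch of that point, with made-up example numbers: filtering a set of options down to its Pareto front (the non-dominated ones, here maximizing both objectives) generally leaves several incomparable options standing, not one winner.

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every objective
    and strictly better in at least one (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the options no other option dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical options scored on two objectives:
options = [(1, 5), (2, 4), (3, 3), (2, 2), (0, 6)]
front = pareto_front(options)
# Only (2, 2) is dominated; four mutually incomparable options remain,
# so the front narrows the choice set but doesn't pick a single "best".
print(front)
```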
BUT
His nonsense brings to mind some interesting questions…
Which goal is better, for the success of mankind?
One single "humanity-aligned" set of values, for all AIs? Many custom sets of values, one or more for each person's AI?
Anonymous Poll
16%
(A) SINGLE static concrete set of values that the AI must follow, that gets forced upon everyone.
56%
(B) MANY different value systems for AIs, everyone getting full control over their AI's values.
27%