It's the great AI Turing-Test-Off!
I asked Claude, Bing, Bard & GPT-4 each to ask a question, then guess which answer was written by me and which by one of the other AI models.
Bing & Claude both thought GPT-4 was human. GPT-4 thought Bard was. No one thought I was human.
Me & the homies partying with GPT-6 in our penthouse VR suite
Twitter Releases (Partial) Recommendation Algorithm Code
The diagram shows the usual high-level pipeline (rough sketch below):
• Raw data,
• Unsupervised feature engineering,
• Quick rough candidate pre-selection,
• Slower in-depth reranking,
• Mixing in other types of recommendations.
Ok, normal so far. Gonna have to dive in to find any more interesting parts.
Announcement
GitHub
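For intuition, here's a minimal sketch of that generic retrieve-then-rerank shape. All names are hypothetical; this is not the repo's actual code (which is largely Scala with some Rust), just the standard pattern the diagram describes.

```python
# Hypothetical sketch of a two-stage recommender pipeline, NOT code from
# the-algorithm repo. Stage 1 is cheap and wide; stage 2 is slow and narrow.

from dataclasses import dataclass


@dataclass
class Tweet:
    id: int
    embedding: list[float]  # produced by some unsupervised feature step


def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def retrieve(user_emb: list[float], corpus: list[Tweet], k: int = 1500) -> list[Tweet]:
    """Stage 1: quick rough pre-selection over the whole corpus.
    Real systems use approximate nearest-neighbor search here, not a full sort."""
    return sorted(corpus, key=lambda t: dot(user_emb, t.embedding), reverse=True)[:k]


def rerank(user_emb: list[float], candidates: list[Tweet], heavy_score) -> list[Tweet]:
    """Stage 2: a slower, feature-rich model scores only the shortlist."""
    return sorted(candidates, key=lambda t: heavy_score(user_emb, t), reverse=True)


def mix(ranked: list[Tweet], other_recs: list[Tweet]) -> list[Tweet]:
    """Final step: interleave other recommendation types into the timeline.
    (Truncates to the shorter list; a real mixer has ratio/dedup logic.)"""
    out: list[Tweet] = []
    for pair in zip(ranked, other_recs):
        out.extend(pair)
    return out
```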
Why doesn't Twitter just release their algo's high-level equation?
Fun fact: Our current math lacks the language to even do this.
I.e. the usual probability pseudo-notation can express neither the questions nor their answers for many of the most common, basic, practical high-level questions one tends to have about modern AI models and their training. Several basic notions are totally absent, inexpressible in the current math notation.
It's not just a problem of missing answers, but an inability to even concretely state the questions.
Even worse, the notation that is used isn't just incomplete but outright wrong. Technically, machine learning systems badly violate many of the basic conditions necessary for the usual probability notation to be valid. That was never really in question; the probability-based notation was always a lie.
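To make that concrete: here is roughly the textbook objective and the assumptions it smuggles in (a sketch of the standard notation, not a formal result):

```latex
% The usual supervised-learning objective: pick parameters minimizing
% expected loss under a single fixed data distribution D, estimated
% from independent, identically distributed (i.i.d.) samples.
\[
  \theta^{\star}
  = \arg\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}
      \bigl[\ell(f_{\theta}(x), y)\bigr]
  \approx \arg\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n}
      \ell(f_{\theta}(x_i), y_i),
  \qquad (x_i, y_i) \overset{\text{i.i.d.}}{\sim} \mathcal{D}.
\]
% Hidden assumptions: a fixed distribution D exists at all; the samples
% are i.i.d.; deployment doesn't feed the model's outputs back into
% future training data. Web-scraped corpora and fine-tuning pipelines
% strain all three, which is the sense in which the notation can be
% "wrong" and not just incomplete.
```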
Why do experts joke that we don't really understand deep learning AI at all? Missing math language.
I.e. idk lol.