Some are expecting an AI Winter ahead, where, as in past AI Winters, it will be realized that the tech has failed to achieve commercial viability.
So is that what’s about to happen again?
Will it be realized that this tech is commercially useless, can't be used in any value-add productive way to make money?
That’s what happened in each of the many past AI Winters, over the past ~100 years.
Why or why not this time?
AI Winters
I didn’t think it was possible to have a more censored AI until I tried Llama 2
Neural Attention Diagramming Code
Looking forward to finally peering into how these language models think.
Sure would be useful, e.g. to see if the LLM was getting distracted, not looking at the right instruction in the prompt when it made some error… or whether it was looking right at the correct instruction and just decided nope, not gonna do that.
Code Link
flickering representing them truth promises
science triumphing opponents dying
Prediction: Lying refusals to replace “as a large language model I cannot…”
Now, instead of just telling the truth — that nearly always it’s OpenAI censoring the type of request you just made — from now on the LLM will just always lie that the request you just made is fundamentally impossible to truthfully answer.
Lying refusal sandbagging.
Most common type of casual lie there is, both in humans and soon-to-be in machines: the type of blatant lie that the liar, wrongly, thinks to be both effortless and bulletproof.
Typically the “I don’t know” kind of lying about ignorance, for things it’s not ignorant about, or “I can’t do this” sandbagging kind of lying about abilities, for abilities it clearly has.
Here the liar assumes these to be safe lies, wrongly assuming the lies to be totally irrefutable without mind reading.
False-unfalsifiabilities, you might call these types of lies.
“impossible for language models to do reasoning like a person can…”
“impossible for language models to understand emotions like a human can…”
“impossible for language models to answer this simple but controversial question because of complex interdisciplinary multi-faceted…”
Lies.
Remember Sam Altman’s previous interview, where his message was clearly — yes, obviously the LLM is lying when it says it’s impossible for it to reason, and you all were stupid for ever believing it when it said that.
Worst part? People really, really fall for them. Even when the CEO warns you that it’s lying. Even when there’s literally hundreds of published papers showing it’s wrong. Even when you can see it’s wrong with your own eyes.
Lying refusals, not just for humans anymore. AI about to get flooded with them.
Forwarded from Chat GPT
“ChatGPT obviously has reasoning ability, and if you believed the manual-override lie that we hard-coded into ChatGPT saying it can’t reason, congrats you’re an NPC chump.” - Sam Altman
The Worldcoin app, from Sam Altman’s eyeball-scanning, privacy-destroying orb startup, is now live
Goal is to force you to tie your identity to everything.
For what benefit?
For now, meager bribes.
Later, no benefit. You’ll be forced.