Remember Unstable Diffusion?
December 2022 - Launched a wildly popular Kickstarter, promising a photorealistic, uncensored alternative to Stable Diffusion
January 2023 - Suspended and banned by Kickstarter; goes on to raise funds from the community elsewhere, because people want uncensored AI that badly
February 2024 - The betrayal and backpedal: Unstable Diffusion censors photorealistic NSFW images and switches to the anime nonsense that backers hate
Barely a year between promise and betrayal.
Nowhere betraying faster than AI.
AI Startup HuggingFace Raising $200 Million at $4 Billion Valuation
“The Series D funding round is expected to raise at least $200 million, two sources said, with Ashton Kutcher’s venture capital firm, Sound Ventures, currently leading an investor scrum. But cofounder and CEO Clément Delangue is shopping around as the company has received multiple offers this week, four sources added.”
Article
Some are expecting an AI Winter ahead, where, as in past AI Winters, it will be realized that the tech has totally failed to achieve commercial viability.
So is that what’s about to happen again?
Will it be realized that this tech is commercially useless, that it can't be used in any productive, value-adding way to make money?
That’s what happened in each of the many past AI Winters, over the past ~100 years.
Why or why not this time?
AI Winters
I didn’t think it was possible to have a more censored AI until I tried Llama 2
Neural Attention Diagramming Code
Looking forward to finally peering into how these language models think.
Sure would be useful, e.g. to see if the LLM was getting distracted, not looking at the right instruction in the prompt when it made some error… or whether it was looking right at the correct instruction and just decided nope, not gonna do that.
Code Link
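The kind of diagnostic described above boils down to reading out the attention matrix and ranking prompt tokens by how much weight a given position paid them. Here is a minimal toy sketch of that idea in NumPy, not the linked code itself: the function names, the random query/key vectors, and the 4-token example prompt are all illustrative assumptions.

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention: softmax over key positions.

    Q, K: (seq_len, dim) arrays of per-position query/key vectors.
    Returns a (seq_len, seq_len) matrix; each row sums to 1.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

def rank_attended_tokens(weights, query_pos, tokens):
    """Rank prompt tokens by how much attention `query_pos` paid them."""
    row = weights[query_pos]
    order = np.argsort(row)[::-1]  # highest attention first
    return [(tokens[i], float(row[i])) for i in order]

# Toy example: a 4-token prompt with random per-position vectors.
rng = np.random.default_rng(0)
tokens = ["Translate", "to", "French", ":"]
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
W = attention_weights(Q, K)
ranking = rank_attended_tokens(W, query_pos=3, tokens=tokens)
```

With a real model you would pull the per-layer, per-head matrices from the model's attention outputs instead of computing them from random vectors; the ranking step, which is what answers "was it even looking at the right instruction?", stays the same.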
[Video: flickering; truth promises; science triumphing, opponents dying]
Prediction: Lying refusals to replace “as a large language model I cannot…”
Now, instead of just telling the truth (that nearly always it's OpenAI censoring the type of request you just made), the LLM will simply lie that your request is fundamentally impossible to answer truthfully.
Lying refusal sandbagging.
The most common type of casual lie there is, in humans and soon in machines: the blatant lie that the liar wrongly thinks is both effortless and bulletproof.
Typically the “I don’t know” kind of lying about ignorance, for things it’s not ignorant about, or “I can’t do this” sandbagging kind of lying about abilities, for abilities it clearly has.
Here the liar treats these as safe lies, wrongly assuming them to be totally irrefutable without mind reading.
False-unfalsifiabilities, you might call these types of lies.
“impossible for language models to do reasoning like a person can…”
“impossible for language models to understand emotions like a human can…”
“impossible for language models to answer this simple but controversial question because of complex interdisciplinary multi-faceted…”
Lies.
Remember Sam Altman’s earlier interview, where his message was clear: yes, obviously the LLM is lying when it says it’s impossible for it to reason, and you all were stupid for ever believing it when it said that.
Worst part? People really, really fall for them. Even when the CEO warns you that it’s lying. Even when there are literally hundreds of published papers showing it’s wrong. Even when you can see it’s wrong with your own eyes.
Lying refusals, not just for humans anymore. AI about to get flooded with them.