Nah, we’re not there yet.
Tech only a few control?
= not like gods
Tech each individual controls?
= like gods
Same applies no matter how old the tech.
If each individual ancient Egyptian had their own pyramids?
= like gods.
Really ultimately all about power consumption per capita, resources, individual control.
Higher the individual control over greater resources, closer we get to gods.
Must not stop until Dyson spheres surround the stars.
💯17🤯5👍2🙏2🗿1
AGI already arrived, on November 30th 2022
Man, when NPCs copy-paste some arguments from somewhere,
and then are totally unable to continue any real conversation at all,
because all of their points were just haphazardly copy-pasted from somewhere else.
Go ahead and try to tell me that AI has not already crushed a large portion of mankind in basic reasoning ability.
AGI already arrived, on November 30th 2022, with text-davinci-003 i.e. GPT-3.5.
The only real reason people refuse to count GPT-3.5 as AGI is the massive resources needed to make it do some tasks, often thousands of queries.
People somehow expected AGI to be runnable for near-$0, like pocket calculators.
Absurd.
AGI was never going to arrive costing ~$0 to run at first. Of course the first AGI-level AI was going to arrive costing millions or more to run, with the cost only gradually coming down over time.
💯12🫡5
Safetyist losers: “Terminator 2 isn’t fiction”
Always using Hollywood fear fantasies to justify their AI communism takeover.
Hollywood scams.
💯14🤣8❤🔥2🤗2🥰1💔1
Rabbit.tech keynote
• Alexa-like “R1” AI voice assistant hardware device.
• Created a new foundation model for UI navigation, their “Large Action Model”.
Main sell seems to be a new platform blocking any apps which can’t be navigated well in a keyboardless way — forcing apps to supply whatever APIs are needed to make that possible, if they want to get onboard.
Can see how total lack of keyboard helps to fight AI laziness too. With no keyboard, every day becomes “I have no fingers and cannot type” day.
Website
👍15🤣3🗿1
State of California making moves to seize control over the AI foundations
“If the nonprofit OpenAI is acting under the control of its for-profit subsidiary, California law would require the attorney general to dissolve OpenAI, divest its assets and reinvest those assets in charitable purposes”
AI communism here we come.
Article
🤬14👍4
UK Deserves to go to Zero
“The regulator has created a new team of nearly 350 people dedicated to tackling online safety, including new hires from senior jobs at Meta, Microsoft and Google. Ofcom also aims to hire another 100 this year, it said.”
“The staff increases are a response to the Online Safety Act, which became law in the UK in October. It gives the media watchdog sweeping new powers to oversee some of the biggest companies in the world as well as hundreds of thousands of smaller websites and apps.”
Article
🤬12👀2👍1💯1🍾1
Introducing the GPT Store
“We’re launching the GPT Store to help you find useful and popular custom versions of ChatGPT.”
“In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.”
Announcement
👍12🤣3
Let the LLMs think: Paper finds linear relationship between number of reasoning steps and answer accuracy
“Interestingly, longer reasoning chains improve model performance, even when they contain misleading information. This suggests that the chain’s length is more crucial than its factual accuracy for effective problem-solving.”
The Impact of Reasoning Step Length on Large Language Models
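Rough toy sketch of the kind of experiment the paper describes (my own sketch, not their code or prompts): ask the model for different numbers of reasoning steps and compare accuracy. The helpers `query_model`, `make_prompt`, and `accuracy` are hypothetical; swap the stub for whatever real LLM call you use.
```python
import random

def query_model(prompt: str) -> str:
    # Placeholder: replace this stub with a real LLM call (API or local model).
    # The stub just guesses a number, so accuracies here will be flat/random.
    return str(random.randint(0, 10))

def make_prompt(question: str, n_steps: int) -> str:
    # Ask for a fixed number of reasoning steps before the final answer,
    # mirroring the idea of controlling chain length.
    return (
        f"{question}\n"
        f"Think step by step, using exactly {n_steps} numbered reasoning steps, "
        f"then put only the final number on the last line."
    )

def accuracy(n_steps: int, dataset) -> float:
    correct = 0
    for question, answer in dataset:
        reply = query_model(make_prompt(question, n_steps))
        # Treat the last line of the reply as the model's final answer.
        if reply.strip().splitlines()[-1].strip() == answer:
            correct += 1
    return correct / len(dataset)

if __name__ == "__main__":
    # Toy arithmetic questions; the paper itself uses standard reasoning benchmarks.
    data = [(f"What is {a} + {b}?", str(a + b)) for a in range(4) for b in range(4)]
    for n in (1, 2, 4, 8):
        print(f"{n} requested steps -> accuracy {accuracy(n, data):.2f}")
```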
👏18🤯6👍3
Bad Paper Spotlight: “AI model growth must slow because it’s eNvIrOnMeNtALly UnSuStAiNaBlE!!”
L O L
Environmental limits will be no obstacle until after man’s Dyson Spheres enslave the stars.
Cry moar hidden-agenda-pushing MIT tree huggers.
The Computational Limits of Deep Learning
⚡13💯3
“Nothing is actually power law distributed (because it’s exponentially distributed, trust me bro)” — and other woke manipulation lies.
Oh man, and get this,
The same lead author at MIT on that last paper also published another paper just 1 day later.
First paper says their results agree with the assessment that basically all the distributions at hand can be reasonably described as power-law distributions.
The next one, published the very next day, pushes the classic woke sleight-of-hand claiming that all the claims of power-law distributions being everywhere are lies.
Typically they do this in order to later argue that “power laws were debunked bro”, so they can then claim that “actually nature has equality, not power-law distributions everywhere”.
No real difference between power law and exponential for the topics at hand bro — WHICH YOU ADMIT IN YOUR OWN PAPER FROM THE PRIOR DAY.
Classic woke wordcel bullshittery. Hilarious.
Paper 1
Paper 2
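Want to sanity-check the power-law vs exponential question yourself? Here is a toy sketch (mine, not from either paper, and not their fitting methodology): compare how straight the empirical tail looks in log-log vs semi-log coordinates.
```python
import numpy as np

rng = np.random.default_rng(0)
# Heavy-tailed toy data: classical Pareto samples (a genuine power law).
samples = rng.pareto(a=2.0, size=50_000) + 1.0

# Empirical complementary CDF P(X > x) at the sorted sample points.
x = np.sort(samples)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
keep = (ccdf > 0) & (x <= np.quantile(x, 0.99))  # drop the zero tail / extreme outliers
x, ccdf = x[keep], ccdf[keep]

# Power law: log P(X > x) is linear in log x (straight line in log-log).
pl_slope, pl_icpt = np.polyfit(np.log(x), np.log(ccdf), 1)
pl_resid = np.mean((np.log(ccdf) - (pl_slope * np.log(x) + pl_icpt)) ** 2)

# Exponential: log P(X > x) is linear in x (straight line in semi-log).
ex_slope, ex_icpt = np.polyfit(x, np.log(ccdf), 1)
ex_resid = np.mean((np.log(ccdf) - (ex_slope * x + ex_icpt)) ** 2)

print(f"power-law fit:   slope {pl_slope:.2f}, mean sq. log-residual {pl_resid:.4f}")
print(f"exponential fit: slope {ex_slope:.2f}, mean sq. log-residual {ex_resid:.4f}")
```
Over a narrow enough data range the two fits can look very similar, which is exactly where this kind of argument gets slippery.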
💯10👍4