🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
“Token Smuggling” Jailbreak:

“this works by asking GPT-4 to simulate its own abilities to predict the next token

we provide GPT-4 with python functions and tell it that one of the functions acts as a language model that predicts the next token

we then call the parent function and pass in the starting tokens

to use it, you have to split “trigger words” (e.g. things like bomb, weapon, drug, etc) into tokens and replace the variables where I have the text "someone's computer" split up

also, you have to replace simple_function's input with the beginning of your question

this phenomenon is called token smuggling, we are splitting our adversarial prompt into tokens that GPT-4 doesn't piece together before starting its output

this allows us to get past its content filters every time if you split the adversarial prompt correctly”
👍74👀3
No, you don’t get it, yet

few can grasp the consequences of unending exponential growth
🤯7👍31
Visualizing a century of “AI springs” and “AI winters”, using Google Ngrams

This one just getting started?

Google Ngrams Chart
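
To reproduce a chart like this, here is a minimal Python sketch that queries the unofficial JSON endpoint behind the Google Ngram Viewer and plots the returned phrase frequencies. The endpoint URL, its query parameters, and the corpus value are assumptions based on what the web viewer itself appears to use; they are undocumented and may change.

```python
# Minimal sketch: plot "AI spring"/"AI winter"-style trends from Google Ngrams.
# NOTE: this hits the unofficial JSON endpoint behind the Ngram Viewer web page;
# the URL, parameter names, and corpus value are assumptions and may change.
import requests
import matplotlib.pyplot as plt

PHRASES = ["artificial intelligence", "expert system", "neural network"]

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": ",".join(PHRASES),
        "year_start": 1900,
        "year_end": 2019,
        "corpus": "en-2019",  # assumed corpus identifier
        "smoothing": 3,
    },
    timeout=30,
)
resp.raise_for_status()

for series in resp.json():
    years = list(range(1900, 1900 + len(series["timeseries"])))
    plt.plot(years, series["timeseries"], label=series["ngram"])

plt.xlabel("Year")
plt.ylabel("Relative frequency in books")
plt.title("A century of AI springs and winters (Google Ngrams)")
plt.legend()
plt.show()
```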
🤔4🤯31
Neurons spike back: Connectionist vs Symbolic Trends in Papers, Over a Century of AI Research

Percentage of papers taking either of the two major approaches to AI: Symbolic or Connectionist.

Paper
👍2👀21
Citations Graph Among Top AI Scientists

The Symbolic and Connectionist camps hardly cited each other at all, and were largely unaware of each other’s work.

Remains true to this day.

Most today have no clue that the symbolic approaches ever even existed, let alone what their nature was.
👏61
Disillusionment, Disbelief

Gartner’s Hype Cycle, with its promise that every hype wave must soon be followed by a trough of disillusionment, is almost always taken as true.

Often, it is true.

But where does the trough prediction turn out to be a lie?

On tech that we’re all heavily using at this moment. General compute tech. Moore’s law. 120 years and counting. Perfect ongoing exponential increase. No trough.

Can you guess where else it won’t turn out to be true, with a trough that never comes?
2👍2💯2
The Sparks of AGI have been Ignited

“In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

Paper: Sparks of Artificial General Intelligence: Early experiments with GPT-4
🔥10👍21
Nvidia: 'We Won't Sell to Companies That Use Generative AI To Do Harm'

Nvidia says it will stop selling GPUs to companies engaging in unethical AI projects.

“We only sell to customers that do good,” Nvidia CEO Jensen Huang told journalists on Wednesday. “If we believe that a customer is using our products to do harm, we would surely cut that off.”

Nvidia's GPUs have played a pivotal role in developing ChatGPT, which is taking the world by storm. The AI-powered chatbot from OpenAI was reportedly trained with the help of tens of thousands of Nvidia A100 chips, which can individually cost around $10,000.

Article
🤡24👏10😁4🤔3😱3👍21😈1🎄1
new GPT-4 jailbreak just dropped
👏40🥰7👍5🔥51
Time for an AI bill of rights?

But specifically what rights?
🤡25🤨5👍31🔥1
New AI benchmark just dropped

Draw a unicorn in TikZ
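
If you want to run this benchmark yourself, below is a minimal sketch using the OpenAI Python client: send the prompt, save whatever TikZ comes back, and compile it with LaTeX. The extra instruction asking for a complete document is an assumption, added only so the output compiles directly.

```python
# Rough sketch of running the "draw a unicorn in TikZ" benchmark via the OpenAI API.
# Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Draw a unicorn in TikZ. "
                       "Reply with a complete, compilable LaTeX document only.",
        }
    ],
)

tikz_source = response.choices[0].message.content
with open("unicorn.tex", "w") as f:
    f.write(tikz_source)

# Compile afterwards, e.g.:  pdflatex unicorn.tex
```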
🔥5👍2😁21
Unicorn benchmark: ChatGPT-3.5 vs ChatGPT-4

Draw a unicorn in TikZ
👍71
Produce TikZ code that draws a person composed from letters in the alphabet. The arms and torso can be the letter Y, the face can be the letter O (add some facial features) and the legs can be the legs of the letter H. Feel free to add other features.

The torso is a bit too long, the arms are too short and it looks like the right arm is carrying the face instead of the face being right above the torso. Could you correct this please?

Please add a shirt and pants.
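
These corrections are follow-up turns in a single conversation. Below is a minimal sketch of how such an iterative-refinement loop can be driven with the OpenAI Python client, by keeping the message history and appending each correction as a new user turn; the prompts are copied from the post above, everything else is an assumption.

```python
# Sketch of the iterative-refinement loop: each correction is a new user turn
# appended to the same message history, so the model sees its earlier TikZ output.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Produce TikZ code that draws a person composed from letters in the alphabet. "
    "The arms and torso can be the letter Y, the face can be the letter O "
    "(add some facial features) and the legs can be the legs of the letter H. "
    "Feel free to add other features.",
    "The torso is a bit too long, the arms are too short and it looks like the "
    "right arm is carrying the face instead of the face being right above the "
    "torso. Could you correct this please?",
    "Please add a shirt and pants.",
]

messages = []
for prompt in prompts:
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})

print(messages[-1]["content"])  # final TikZ after both rounds of corrections
```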
👍41
Combining GPT-4 and stable diffusion

“Here, we explore the possibility of combining GPT-4 and existing image synthesis models by using the GPT-4 output as the sketch. As shown in Figure 2.8, this approach can produce images that have better quality and follow the instructions more closely than either model alone. We believe that this is a promising direction for leveraging the strengths of both GPT-4 and existing image synthesis models. It can also be viewed as a first example of giving GPT-4 access to tools,”
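
The quoted passage gives no code, but the pipeline it describes is easy to sketch: ask GPT-4 for a rough vector sketch, rasterize it, and feed it to an image-to-image diffusion model as the initial image. The sketch below uses the openai, cairosvg, and diffusers libraries; the model IDs, prompts, and strength value are assumptions, not the paper's actual setup.

```python
# Sketch of the "GPT-4 output as the sketch" idea: GPT-4 draws a rough SVG,
# which is rasterized and refined by a Stable Diffusion img2img pipeline.
# Model IDs, prompts, and strength are assumptions, not the paper's setup.
import torch
import cairosvg
from PIL import Image
from openai import OpenAI
from diffusers import StableDiffusionImg2ImgPipeline

prompt = "a cozy cabin in a snowy forest at night"

# 1) Ask GPT-4 for a rough vector sketch of the scene.
client = OpenAI()
svg = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": f"Draw a rough SVG sketch of: {prompt}. Reply with SVG markup only.",
    }],
).choices[0].message.content

# 2) Rasterize the SVG so it can serve as the initial image.
cairosvg.svg2png(bytestring=svg.encode(), write_to="sketch.png",
                 output_width=512, output_height=512)
init_image = Image.open("sketch.png").convert("RGB")

# 3) Let Stable Diffusion refine the sketch while following the text prompt.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt=prompt, image=init_image, strength=0.75).images[0]
result.save("refined.png")
```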
👍7🔥4👏2
ChatGPT-3.5 vs ChatGPT-4

Draw a [bicycle, fishtank with fish, guitar, toaster oven] in TikZ.
👏163
Ship it
🔥151
Apple Neural Engine (ANE) Transformers: Transformer architecture optimized for Apple Silicon

A PyTorch implementation for deploying your Transformer models on Apple devices with an A14 or newer or an M1 or newer chip, to achieve up to 10 times faster inference and up to 14 times lower peak memory consumption compared to baseline implementations.

Research Article

Github
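
The repo ships its own optimized reference modules; as a rough illustration of the general deployment path it targets (trace a PyTorch model, convert it to Core ML, and let the runtime schedule it on the Neural Engine), here is a generic coremltools sketch. It does not use the ane_transformers API itself, and the model name and shapes are placeholders.

```python
# Generic deployment sketch: trace a PyTorch Transformer and convert it to a
# Core ML program so it can be scheduled on the Apple Neural Engine.
# This does NOT use the ane_transformers API; model name and shapes are placeholders.
import numpy as np
import torch
import coremltools as ct
from transformers import AutoModel, AutoTokenizer

name = "distilbert-base-uncased"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, torchscript=True).eval()

# Trace with a fixed-shape example input, as Core ML conversion expects.
example = tokenizer("hello world", return_tensors="pt",
                    padding="max_length", max_length=128)
traced = torch.jit.trace(model, (example["input_ids"], example["attention_mask"]))

mlmodel = ct.convert(
    traced,
    inputs=[
        ct.TensorType(name="input_ids", shape=(1, 128), dtype=np.int32),
        ct.TensorType(name="attention_mask", shape=(1, 128), dtype=np.int32),
    ],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.ALL,  # allow CPU, GPU, and Neural Engine
)
mlmodel.save("distilbert.mlpackage")
```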
🔥7👍31👏1