πŸ€– Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Big tech and the pursuit of AI dominance

A new kind of monopoly is coming
πŸ’―8❀1πŸ‘1πŸ€”1
CliGPT – Less Time Searching, More Time Commanding

Streamline your terminal experience by generating Linux commands from natural language queries, reducing the need to leave the terminal for manual web searches.

Prompt used:

You are my Command Line Interface generator and will assist me to navigate my linux. All my questions are related to this. Now, how can I: [task description]

GitHub
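A minimal Python sketch of how a tool like this could work, built around the prompt above. The names and structure are my own illustration, not the actual CliGPT code, and the completion backend is stubbed so nothing network-dependent runs:

```python
# Hypothetical CliGPT-style helper: wraps the channel's prompt template
# around a natural-language task and asks an LLM for a shell command.

PROMPT_TEMPLATE = (
    "You are my Command Line Interface generator and will assist me to "
    "navigate my linux. All my questions are related to this. "
    "Now, how can I: {task}"
)

def build_prompt(task: str) -> str:
    """Fill the template with the user's natural-language task."""
    return PROMPT_TEMPLATE.format(task=task)

def suggest_command(task: str, complete) -> str:
    """Ask an LLM for a command. `complete` is an injected completion
    function (e.g. a thin wrapper around an OpenAI client) so the
    sketch stays testable without network access."""
    return complete(build_prompt(task)).strip()

if __name__ == "__main__":
    # With a real backend you might wire `complete` to an API client;
    # here a canned response stands in for the model.
    fake = lambda prompt: "find . -name '*.log' -mtime -1\n"
    print(suggest_command("list log files modified in the last day", fake))
```

Injecting the completion function keeps the prompt-building logic separate from whichever API client (or local model) you actually use.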
πŸ‘8❀5🀩1
122 Years of Moore’s Law
πŸ‘11❀1😁1
Curves that are NOT Moore's Law (compute per second per dollar), many of which have fallen off the trend
πŸ‘2πŸ™2❀1
ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks

Using a sample of 2,382 tweets, we demonstrate that ChatGPT outperforms crowd-workers for several annotation tasks, including relevance, stance, topics, and frames detection. Specifically, the zero-shot accuracy of ChatGPT exceeds that of crowd-workers for four out of five tasks, while ChatGPT's intercoder agreement exceeds that of both crowd-workers and trained annotators for all tasks. Moreover, the per-annotation cost of ChatGPT is less than $0.003 -- about twenty times cheaper than MTurk.

ChatGPT outperforms humans on 4/5 tasks, while being 20x cheaper.

Paper
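A quick arithmetic check on those cost figures. The MTurk rate is inferred from the "twenty times cheaper" claim rather than quoted directly from the paper:

```python
# Back-of-envelope cost comparison using the abstract's numbers.
chatgpt_per_annotation = 0.003                        # dollars, from the abstract
mturk_per_annotation = chatgpt_per_annotation * 20    # implied by "20x cheaper" (~$0.06)
n_tweets = 2382                                       # sample size from the abstract

chatgpt_total = chatgpt_per_annotation * n_tweets
mturk_total = mturk_per_annotation * n_tweets
print(f"ChatGPT: ${chatgpt_total:.2f}")   # roughly $7 for the whole sample
print(f"MTurk:   ${mturk_total:.2f}")     # roughly $143
```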
πŸ‘8πŸ‘€4😱2❀1πŸ”₯1πŸ—Ώ1
Today’s AI: AI creates this masterpiece.

GPT-5: We make 100x bigger models smart enough to draw regular salmon swimming in a lake.

GPT-6: We make 10,000x bigger models smart enough to recreate the original masterpiece.
❀19πŸ‘2😁1🀯1🍌1
Boooooo
😁9πŸŽ„4❀1πŸ‘1πŸ•Š1😭1
Masterpiece
😁20🀣4πŸ‘2πŸŽ„2❀1πŸ†’1
Starting to see where the current AI field is getting it all wrong

Nah bro, that’s not the meaning or nature of β€œgrounding” at all. Quite possibly one of the worst definitions of grounding I’ve ever seen.

Linguistic determinism in action.

(His words -vs- age sketch is legit though.)
πŸ‘2❀1
Words of input -vs- Age, for humans and GPT-3

GPT-3 receives 10,000x to 100,000x more words than humans do, but in many areas humans still crush it.

Something is missing. Something that could enable AI to achieve higher intelligence with 10,000x to 100,000x less external training input.

Upper bound from: Cognitive science in the era of artificial intelligence: A roadmap for reverse-engineering the infant language-learner

Lower bound from: What Do North American Babies Hear? A large-scale cross-corpus analysis
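A back-of-envelope check of that 10,000x to 100,000x range, assuming GPT-3's widely reported ~300B training tokens and rounding the human input bounds to 3M and 30M words. These are illustrative round numbers consistent with the ratio, not the cited papers' exact figures:

```python
# Rough ratio of GPT-3 training input to human linguistic input.
gpt3_tokens = 300e9                      # ~300B tokens, GPT-3's reported training size
human_words_low = 3e6                    # assumed lower bound on words heard
human_words_high = 30e6                  # assumed upper bound on words heard

print(gpt3_tokens / human_words_high)    # ~10,000x
print(gpt3_tokens / human_words_low)     # ~100,000x
```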
πŸ‘7πŸ‘2❀1
Ask me 5 different questions and analyze how smart you think I am according to my answers. Ask all the questions in one go and I will reply.
πŸ‘6
Verbal analogies edition: the questions are fed to GPT-4 in a different session, and the answers are then fed back into the first session for grading.

Of course GPT-4 rates itself as brilliant.

Ask me 5 different multiple-choice verbal analogy questions and analyze how smart you think I am according to my answers. Ask all the questions in one go and I will reply.
πŸ‘2😁2❀1πŸŽ‰1
Verbal Analogies Test: GPT-3.5 edition.

GPT-4 makes questions,
GPT-3.5 answers questions,
GPT-4 grades them.

Result:
GPT-4 says GPT-3.5 is a retard.
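The three-step loop above can be sketched like this. Model calls are stubbed as injected callables; in practice each role would be a separate chat session with the stated model:

```python
# Sketch of the model-vs-model quiz loop: one model writes the test,
# another answers it, and the first grades the answers.

def quiz_loop(ask_gpt4, ask_gpt35):
    """ask_gpt4 / ask_gpt35: callables mapping a prompt string to a reply."""
    # 1. GPT-4 writes the test.
    questions = ask_gpt4(
        "Write 5 multiple-choice verbal analogy questions. "
        "Ask all the questions in one go."
    )
    # 2. GPT-3.5, in a separate session, answers them.
    answers = ask_gpt35(questions)
    # 3. The answers go back to the first GPT-4 session for grading.
    verdict = ask_gpt4(
        "Here are the answers to your questions:\n" + answers +
        "\nAnalyze how smart you think the test-taker is."
    )
    return questions, answers, verdict
```

Swapping `ask_gpt35` for another GPT-4 session gives the self-grading setup from the earlier post.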
😁20πŸ‘5❀2
πŸ—Ώ
πŸ—Ώ125😁10❀2🀬2🀑2
Now: OpenAI Partial Outage Across All Systems

Status Page
🀬19❀10πŸŽ‰6πŸ‘3😁1😒1🌭1
"Airstrike" Eliezer: Yudkowsky Envisions Autocratic Empire
😱7🀬6🫑2❀1😴1πŸ‘¨β€πŸ’»1