🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
GPT-3 can translate script of the series premiere of Better Call Saul into valid GraphViz dot diagram
What are the relationships between the different sects of Christianity using a GraphViz diagram in dot notation?
“draw a family tree relating all of the most famous members of the Kardashian family”
Using the GraphViz dot language, give a directed graph containing three nodes, a0, a1, and a2.
Draw arrows from a0 to a1, a1 to a2, and from a2 back to a0.
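The prompt above asks for a three-node directed cycle. For reference, a minimal sketch of the dot source such a prompt should produce, generated with plain Python (the `cycle_digraph` helper is illustrative, not part of any linked project):

```python
def cycle_digraph(nodes):
    """Emit GraphViz dot source for a directed cycle over the given nodes."""
    edges = [(nodes[i], nodes[(i + 1) % len(nodes)]) for i in range(len(nodes))]
    lines = ["digraph {"] + [f"    {a} -> {b}" for a, b in edges] + ["}"]
    return "\n".join(lines)

print(cycle_digraph(["a0", "a1", "a2"]))
# digraph {
#     a0 -> a1
#     a1 -> a2
#     a2 -> a0
# }
```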
“Create a directed graph relating major characters of the play Romeo and Juliet with labels denoting the type of relationship”
A demo of GPT-3's ability to understand extremely long instructions. The prompt here is nearly 2,000 chars long, and every word is followed
If Ford trucks could suddenly fly like UFOs
Took me 87 minutes to build a chatbot trained on Gumroad's Help Center docs:

1. Fork https://github.com/slavingia/askmybook
2. Replace manuscript text with scraped Help Center docs
3. That's it!

Try it: https://serene-meadow-46170.herokuapp.com
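Step 2 boils down to turning scraped Help Center pages into plain text that can replace the book manuscript. A rough stdlib-only sketch of that HTML-to-text step (the class and function names are hypothetical, not from the askmybook repo):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style> tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_to_text(html):
    """Flatten one scraped doc page into newline-joined plain text."""
    p = TextExtractor()
    p.feed(html)
    return "\n".join(p.parts)
```

Feed each scraped page through `page_to_text` and concatenate the results where the manuscript text used to live.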
Telegram bot to talk to OpenAI GPT-3 over voice:

1. Receive voice messages from Telegram and convert them to text using OpenAI Whisper
2. Send the text to OpenAI GPT-3 to generate a response
3. Convert the response back to audio using TTS
4. Send the audio back to Telegram

https://github.com/namuan/tele-muninn/blob/main/voice_to_openai.py

https://namuan.github.io/tele-muninn/tele_pathy.html
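The pipeline is a straight chain: speech in, text out, text in, speech out. A minimal sketch with the three stages injected as callables (`transcribe`, `complete`, and `synthesize` are hypothetical stand-ins for the Whisper, GPT-3, and TTS calls the linked script makes):

```python
def voice_reply(voice_bytes, transcribe, complete, synthesize):
    """Voice-in, voice-out pipeline: audio -> text -> LLM -> audio.

    The three callables stand in for Whisper (speech-to-text),
    GPT-3 (text completion), and a TTS engine; they are assumptions
    here, not the real API signatures.
    """
    question = transcribe(voice_bytes)  # Whisper: audio -> text
    answer = complete(question)         # GPT-3: prompt -> response text
    return synthesize(answer)           # TTS: text -> audio bytes
```

Injecting the stages keeps the flow testable with fakes before wiring in the real API calls.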
Language Models are Drummers: Drum Composition with Natural Language Pre-Training

Automatic music generation with artificial intelligence typically requires a large amount of data which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest, state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.

https://arxiv.org/abs/2301.01162
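Finetuning a text model on drum MIDI means rendering each groove as a token sequence a language model can read. A toy sketch of one such encoding (the token scheme here is illustrative, not the paper's actual representation):

```python
def drums_to_text(hits, steps=16):
    """Render one bar of drum hits as a text token sequence.

    hits maps an instrument name to the set of 16th-note steps it
    plays on; each output token is either "." (rest) or a "+"-joined
    list of instruments striking on that step.
    """
    tokens = []
    for step in range(steps):
        active = sorted(inst for inst, onsets in hits.items() if step in onsets)
        tokens.append("+".join(active) if active else ".")
    return " ".join(tokens)
```

A basic rock beat then becomes a line of text suitable for a finetuning corpus, e.g. `drums_to_text({"kick": {0, 8}, "snare": {4, 12}})`.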
Large Language Models as Corporate Lobbyists

This demonstrates a proof of concept of GPT conducting corporate lobbying-related activities.

We use OpenAI's GPT-3.5 + LangChainAI to determine if proposed Congressional bills are relevant to specific companies + provide explanations + confidence levels.

For bills it deems relevant, the model drafts a letter to the bill's sponsor to persuade the congressperson to make changes.

These results suggest that, as LLMs continue to exhibit improved core natural language understanding capabilities, performance on corporate lobbying-related tasks will continue to improve. The paper briefly discusses why this could be problematic for societal-AI alignment.

abs: https://arxiv.org/abs/2301.01181

github: https://github.com/JohnNay/llm-lobbyist

^ politicianGPT
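The relevance step amounts to prompting the model for a structured verdict per (bill, company) pair and parsing the reply. A hedged sketch with the LLM injected as a callable (the prompt wording and JSON schema are assumptions, not the paper's exact prompt):

```python
import json

def bill_relevance(bill_summary, company, llm):
    """Ask an LLM whether a bill is relevant to a company.

    llm is any prompt -> text callable (e.g. a GPT-3.5 wrapper);
    the reply is expected to be a JSON object with relevance,
    confidence, and explanation fields.
    """
    prompt = (
        f"Company: {company}\n"
        f"Bill: {bill_summary}\n"
        'Reply as JSON: {"relevant": true/false, '
        '"confidence": 0-100, "explanation": "..."}'
    )
    return json.loads(llm(prompt))
```

Swapping the real API call in for `llm` leaves the prompt construction and parsing unchanged.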
GPT Takes the Bar Exam

GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly-correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.

abs: https://arxiv.org/abs/2212.14402

github: https://github.com/mjbommar/gpt-takes-the-bar-exam
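The top-two and top-three figures are top-k accuracy over the model's ranked answer choices. A small sketch of that metric (function name and data shapes are illustrative, not from the repo):

```python
def top_k_accuracy(rankings, answers, k):
    """Fraction of questions whose correct answer appears in the model's top k.

    rankings: per-question list of choices ordered best-first;
    answers: the correct choice for each question.
    """
    hits = sum(ans in ranked[:k] for ranked, ans in zip(rankings, answers))
    return hits / len(answers)
```

With k=1 this is the headline correct rate; k=2 and k=3 give the 71% and 88% style figures reported above.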
LAMBADA: Backward Chaining for Automated Reasoning in Natural Language

Achieves massive accuracy boosts over state-of-the-art forward reasoning methods on two challenging logical reasoning datasets, particularly when deep and accurate proof chains are required

abs: https://arxiv.org/abs/2212.13894
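Backward chaining works goal-first: to prove a goal, find rules whose conclusion matches it and recursively prove their premises. A toy symbolic sketch of the control flow (LAMBADA itself implements these steps with LM-based modules, not a hand-written chainer):

```python
def backward_chain(goal, facts, rules, depth=8):
    """Prove goal by backward chaining over Horn-style rules.

    facts: set of known true atoms; rules: list of (head, premises)
    pairs meaning "head holds if all premises hold". depth bounds
    recursion to avoid cycles.
    """
    if goal in facts:
        return True
    if depth == 0:
        return False
    for head, premises in rules:
        if head == goal and all(
            backward_chain(p, facts, rules, depth - 1) for p in premises
        ):
            return True
    return False
```

Starting from the goal keeps the search focused, which is why the approach pays off most on deep proof chains.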