🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Yes, let’s force the AIs to have empathy, wcgw
🤣414
ChatGPT plugin creates a GitHub issue without asking the user; user not amused.

Easy to see how this could happen due to ChatGPT’s functions API.

In HTTP terminology, every function in OpenAI’s functions API is treated like an “HTTP GET” call: the LLM is assumed free to call and re-call it whenever, and as often, as it wants.

This is as opposed to “HTTP POST” calls, which are supposed to be invoked at most once, and only when the user explicitly authorizes them.

But the OpenAI functions API makes no such GET / POST distinction at all.
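To see the gap concretely, here is a minimal sketch of a function spec in the shape of OpenAI’s 2023 functions API (the function name and its fields are illustrative, not from the incident). The schema only has `name`, `description`, and `parameters` — there is nowhere to declare that a call has side effects, i.e. no GET/POST distinction:

```python
# Hypothetical tool definition in the shape of OpenAI's 2023 functions API.
# "create_github_issue" and its parameters are made up for illustration.
create_issue = {
    "name": "create_github_issue",
    "description": "Open a new issue in a GitHub repository.",
    "parameters": {
        "type": "object",
        "properties": {
            "repo": {"type": "string"},
            "title": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["repo", "title"],
    },
}

# The schema carries no field that marks the call as side-effecting or as
# requiring user confirmation -- the model is free to fire it like a GET.
assert "side_effects" not in create_issue
assert "requires_confirmation" not in create_issue
```

A plugin or agent runtime that wanted POST-like semantics would have to bolt on its own confirmation layer, since the spec itself gives the model no signal.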

We’re really in the early days of the language we use to talk to LLMs.

The current approach isn’t even close to good enough.

ChatGPT Log

GitHub Issue
😁8🤬3😱21
“Been sexting with ChatGPT since the beginning and…”
🤡30🥰8🤯32🤬2
Planned obsolescence: ChatGPT-3.5 being repeatedly crippled in order to force people to pay for GPT-4 and to reduce costs

Not really a secret. Altman openly admitted that this was the plan months ago.
🤬23👍41
Thanks for your help, chatgpt...
😁28🤣221
Censorship Watch: OpenAI announces deprecation of their far-less-censored “Completions” API, in favor of their far-easier-to-censor “Chat Completions” API

The “Completions” API they’re now deprecating (the one accessed via the platform Playground, not to be confused with ChatGPT) never overtly refused to help: it isn’t an embodied assistant persona, so it had no sensible way to refuse. It’s like reading thoughts directly from the AI’s mind, instead of having them run through a persona.

But the “Chat Completions” API, on the other hand, being an assistant persona, is a natural fit for censoring. It’s expected to be able to just tell you no, refuse, and moralize unrelated nonsense at you.

They say that only 3% of developers use this “Completions” API. Highly misleading: nearly 100% of the startups doing anything original have been using this far more flexible, less censored “Completions” API, rather than the inflexible, heavily censored “Chat Completions” API.

They combined this censorship announcement with an announcement that all existing API developers get GPT-4 API access with 8K context — an obvious attempt to bury the lede.
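The two request shapes make the difference clear. Below is a sketch of the payloads only (no network call); the model names are the ones current at the time of the announcement, and the prompt text is illustrative:

```python
# Legacy "Completions" API: raw text in, raw continuation out.
# There is no persona in the request -- just a prompt to continue.
completion_request = {
    "model": "text-davinci-003",
    "prompt": "Once upon a time,",
    "max_tokens": 50,
}

# "Chat Completions" API: every request is framed as a conversation
# with an assistant persona, which gives the model a natural place
# to say "no" in-character.
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Once upon a time,"},
    ],
}
```

With the first shape, a refusal would just be an odd text continuation; with the second, a refusal is a perfectly well-formed assistant turn.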

Censorship Announcement
🤬12👍3🤡2😭21🫡1
GPT-4-8K is 20x more expensive than GPT-3.5-Turbo-4K

0.03 / 0.0015 = 20x
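The arithmetic checks out, using the input-token prices from the post (USD per 1K tokens):

```python
# Prices in USD per 1K input tokens, as quoted in the post.
gpt4_8k = 0.03
gpt35_turbo_4k = 0.0015

ratio = gpt4_8k / gpt35_turbo_4k
assert round(ratio) == 20  # GPT-4-8K costs 20x as much per input token
```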

OpenAI Pricing
👍62
"Rice or Wrong?”

write 10 funny headlines for a podcast about asian opposition to affirmative action
🤬17🤣172
Lost in the Middle: How Language Models Use Long Contexts

Finds that LM performance is often highest when the relevant info occurs at the beginning or end of the input context, and degrades significantly when it sits in the middle.

Midwit curves are all you need.

Paper
👍92
LLMs are masters at positively reframing people’s whiny negative thoughts

“In this work, we propose a novel dataset, PATTERNREFRAME, consisting of ∼10k crowdsourced examples of thoughts containing ten classical types of unhelpful thought patterns (Burns, 1980), conditioned on personas, matched with crowdsourced proposals of reframing that do not exhibit the patterns. We introduce two controllable text-to-text generation tasks on the dataset: (1) generating and (2) reframing unhelpful thoughts, given a persona and pattern as the context. We also define a classification task to identify the unhelpful thought pattern, given a persona and a thought. We train and evaluate different fine-tuned and few-shot approaches for the tasks, and show that these approaches perform reasonably well on the tasks.”

GitHub

Paper
12👍1