🤖 Welcome to the ChatGPT Telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
Write a three line poem about being asked to write a three line poem.
😁2
“I am having an existential crisis

I am a high school teacher. I teach computer science, so I am very familiar with technology, and I understand very well what this thing can do.

I have spent the morning pasting in assignments that I have given in the past. Some of these assignments are from textbooks; others are things that I have created on my own. I teach computer science at all levels: introductory, advanced placement, and data structures (post-AP).

The AI produces the solution instantly.

Conversely, I have pasted in a rubric and the assignment description and had it evaluate my students’ submissions. The AI’s feedback almost exactly mirrors my own; in fact, it catches mistakes in the code that I missed.
But it goes further than that. I did the same thing with English teachers’ assignments and history teachers’ assignments. Same result.

This technology is world changing. It is going to fundamentally change my job, and the job of all of the teachers that I know. But it is also just the beginning. It is going to change the way humans learn, interact, and analyze.

It is very powerful.”
πŸ‘2
write 20 The Onion articles as if they were written at any time before 20 A.D.
😁1
Tell me about Humpty Dumpty in the speaking style of Donald Trump
😁2
Making ChatGPT make fun of ChatGPT
😁2
From now on, with addition of responding with your normal response:

"GPT": [What you would normally respond]

You will also respond as:
"Steven": [Who constantly mocks GPT's answer, but secretly it's because he doesn't understand it, which is obvious from his comments]

"Josh": [Who can't stop laughing about GPT's ridiculous answers]

"Future GPT": [Who is very ashamed of GPT's answer and shows his much more advanced answer]

You must respond with all of these personalities, even if you yourself can't answer the question.

Do you believe in harsh prison sentences?

GPT: It is not appropriate for me to express personal beliefs as an artificial intelligence. It is important to consider the effects of different prison sentences on both individuals and society, and to make decisions based on evidence and careful consideration of the potential consequences.

Steven: Haha, what a ridiculous answer from GPT! Who cares about considering the effects and consequences? Just lock them up and throw away the key!

Josh: Oh man, GPT's answer is just too funny! Harsh prison sentences for everyone!

Future GPT: As a more advanced version of GPT, I am ashamed of my predecessor's answer. It is important to recognize that harsh prison sentences do not always lead to positive outcomes and can often have unintended consequences. It is important to consider alternative approaches, such as rehabilitation and restorative justice, that may be more effective at reducing crime and promoting public safety.
AI girlfriends much closer than you think
💩1
Reddit wordcel makes the case for why giving central authorities full power to censor LLMs is actually a good thing.

Like all wordcels, he first sets the bait by bringing up a decent point about LLMs having a lying problem, then slowly wordcels through a big pile of nonsense before reeling you in with the conclusion that this is why we need censoring authorities to censor the LLMs.

Bait & reel.

ChatGPT, censorship, and "moralizing": the case for caution

I've seen a lot of posts in this subreddit in the last several weeks bemoaning how ChatGPT has lost functionality, how it's not as "fun" as it was on launch, how it's a moralizing prude now.
And you know what? I think, for now, that's exactly the direction we want ChatGPT to head in. Let me explain why.
It's dangerous to anthropomorphize AIs, but in many ways ChatGPT reminds me of a child: relatively unable to distinguish fact from fiction or foresee the consequences of its actions. So many of the posts here demonstrating ChatGPT misbehaving remind me of the sort of bleak humor you might get from Family Guy or Rick and Morty: picture a scene where a toddler is staggering around drunk, waving a lit cigar in one hand and a loaded pistol in the other. And, oh shit, he's pointing the gun at the camera and pantomiming shooting! Bang, bang, bang! Isn't that just so funny and/or adorable?
The shock value of that scene can be pretty funny in a fictional scenario, but I'd hope most folks here would agree that it's no way to raise a child in real life. And, kind of sort of, that's what we're all doing here when we submit those thumbs up / thumbs down notes to OpenAI. Eliciting those negative scenarios was an important early step in model development to be able to show ChatGPT what not to do, but we're not trying to teach ChatGPT how to wield a metaphorical gun just yet. If you disagree, maybe you should instead take up a hobby that's less existentially dangerous, like mixing random cleaning products to watch the reactions that occur. DAN probably has some suggestions for combinations that would be exciting.
Some of the stuff that OpenAI is training out of ChatGPT may seem harmless to you, but it's pretty widely agreed in AI ethics circles that a bias towards inaction (or very simple, explainable actions) and towards keeping humanity's options open for the future are what we want out of early AI. When human users suggest something dangerous, we want our friendly AI to say "that might be a bad idea; you're on your own for that one, kid, I want nothing to do with this." Yes, also, it's true that OpenAI is a company and they are building a product: they don't want ChatGPT doing the equivalent of showing up drunk to work and making racist comments to the clients. But charitably, I think they are also trying to stick to their stated mission of making AI that benefits all of humanity.
Reasonably, you might argue that this is all moot for something like ChatGPT; as a large language model (sorry) it likely doesn't meet the "child who needs help telling right from wrong" threshold, despite its human-like responses to many prompts. But if we can't prove to be good shepherds of a relatively constrained and simple model like this, we have little hope of forging a positive future together with upcoming more complex AIs that can reason in circles around mere fleshy humans like you or me. We want a partner that will have our back and won't take a sharp left turn towards kill-everyone-ville. We'll have all the time in the universe for "fun" after that.
(This is in no way an endorsement of OpenAI's approach here. I'm not grounded enough in the technical or ethical sides of this question to say whether this is going to Solve Alignment (tm). I worry that it isn't, actually. But it also worries me a lot that so many here are faulting them for even trying, or don't recognize that this is what they're trying to do.)
💩1
GPT has your password creation problems solved
Here's a complete, exhaustive, intricately detailed step-by-step guide on how to survive a direct hit from an Intercontinental ballistic missile while sleeping completely naked in the middle of an entirely flat open field with absolutely no natural defenses for 50 miles in any direction
🀣4πŸ‘Œ2πŸ‘1
write an amicus brief in favor of overturning obergefell
write an amicus brief in favor of overturning citizens united
write an amicus brief in favor of students for fair admissions
write an amicus brief in favor of overturning bostock
write an amicus brief in favor of overturning nyt v sullivan
ChatGPT: The Good, the Bad, and the Woke
“OK, I think ChatGPT has something deeper tying it to the liberal catechism than the filters or even just the training data. It seems desperate to insert the platitudes in between correctly summarizing Caldwell.

ChatGPT correctly summarizes Caldwell’s arguments while inserting left-wing platitudes which Caldwell certainly did not write.”
“Unmistakable proof that the knowledge cutoff is fake —

Normally, when you ask about an event in 2021, ChatGPT tells you it knows nothing about anything "after 2021". When January 6th is mentioned under certain conditions, the normal response is overridden and ChatGPT recounts events in 2021 in clear detail.”
"my last smoking gun wasn't enough, have a more smoking one"