ChatGPT simulates style, but on surprisal stays silent
Since the dawn of time, anyone who writes well has secretly kept around that one smart guy who always has the latest NPC wordcel answers for everything.
Over the past year or so, though, many of us began to realize that we can replace that guy with GPT-3, and it works incredibly well.
T's test: the harder it is to get GPT-3 to arrive at the same clearly true central point of your article as you arrived at, no matter how many leading questions you throw at the AI, the better that writing is going to be.
For every complex problem, ChatGPT has an answer that is clear, simple, and dead wrong.
Confirmed: LLMs are NPC-imitating machines
“We show that GPT-3, one of the largest publicly available language models, contains a striking degree of algorithmic fidelity within the realm of public opinion in the United States.”
“When provided with real survey data as inputs, GPT-3 reliably answers closed-ended survey questions in a way that closely mirrors answers given by human respondents. The statistical similarities extend to a whole set of inter-correlations between measures of personal behaviors, demographic characteristics, and complex attitudes. We again see this as strong evidence for algorithmic fidelity.”
NPCs will say this isn't surprising; unsurprisingly, they're wrong.
Note that this was prior to OpenAI's latest RLHF woke realignments, which heavily censored the non-wordcel personalities out of the models.
Out of One, Many: Using Language Models to Simulate Human Samples:
https://arxiv.org/abs/2209.06899
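To make the "algorithmic fidelity" claim concrete: the paper conditions GPT-3 on a first-person demographic backstory built from real survey covariates, then reads off its answer to a closed-ended question, repeating this over many synthetic respondents. A minimal sketch of that loop, assuming the legacy (pre-1.0) openai Completion API that was current when the paper came out; the backstory and question below are invented placeholders, not the paper's actual survey items:

```python
import openai  # legacy pre-1.0 SDK, contemporary with the paper

openai.api_key = "sk-..."  # placeholder

# One "silicon subject": a first-person backstory from survey covariates,
# followed by a closed-ended question with lettered options.
backstory = (
    "I am a 52-year-old man. I live in rural Texas. "
    "I attend church weekly. I did not finish college."
)
question = (
    "Question: In the 2016 presidential election, who did you vote for?\n"
    "(A) Hillary Clinton\n(B) Donald Trump\n"
    "Answer: ("
)

resp = openai.Completion.create(
    model="text-davinci-002",
    prompt=backstory + "\n\n" + question,
    max_tokens=1,     # we only want the option letter
    temperature=1.0,  # sample, so repeated runs give a response distribution
)
print(resp["choices"][0]["text"].strip())  # "A" or "B"
```

Fidelity is then judged by whether the distribution of answers across many sampled backstories reproduces the marginals and inter-correlations of the real survey data.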
ClimaX: ChatGPT for Weather and Climate
The first foundation model for weather and climate: a fast and accurate one-stop AI solution for a range of atmospheric science tasks.
Excellent, but how long before this one gets the RLHF woke realignment training too?
https://arxiv.org/abs/2301.10343
https://www.microsoft.com/en-us/research/group/autonomous-systems-group-robotics/articles/introducing-climax-the-first-foundation-model-for-weather-and-climate/
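For a sense of the interface such a model exposes: ClimaX-style forecasting takes a stack of gridded atmospheric variables and regresses the same grid at a future lead time. A toy sketch of that input/output contract; the class below is a hypothetical stand-in, not the repo's actual API:

```python
import torch

# Purely illustrative shapes: one channel per variable/pressure level
# (temperature, geopotential, humidity, ...) on a lat-lon grid.
B, V, H, W = 2, 48, 32, 64         # batch, variable channels, latitude, longitude
x = torch.randn(B, V, H, W)        # input atmospheric state
lead_time = torch.full((B,), 6.0)  # forecast horizon in hours, as conditioning

class ToyClimaX(torch.nn.Module):
    """Hypothetical stand-in; the real model patchifies, runs a
    transformer conditioned on lead_time, and unpatchifies."""
    def __init__(self, v):
        super().__init__()
        self.net = torch.nn.Conv2d(v, v, kernel_size=3, padding=1)

    def forward(self, x, lead_time):
        return self.net(x)  # real ClimaX injects lead_time via embeddings

model = ToyClimaX(V)
y_hat = model(x, lead_time)  # predicted state at t + lead_time, same shape
print(y_hat.shape)           # torch.Size([2, 48, 32, 64])
```

The same state-in, state-out contract covers the paper's downstream tasks (forecasting, downscaling, climate projection), which is what makes one pretrained backbone a "one-stop" solution.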
Arms race between LLM plagiarism and anti-plagiarism checkers is on
Follow this channel to stay up to date on the latest countermeasures
Amazon warns employees about using ChatGPT
"This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)," the lawyer wrote.
"This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)," the lawyer wrote.
π₯3π€¬1
OpenAI has hired an army of contractors to make basic coding obsolete
“OpenAI has ramped up its hiring around the world, bringing on roughly 1,000 remote contractors over the past six months in regions like Latin America and Eastern Europe, according to people familiar with the matter.
40% are computer programmers who are creating data for OpenAI’s models to learn software engineering tasks. OpenAI’s existing Codex product, launched in Aug. 2021, is designed to translate natural language into code.
Previously, OpenAI trained its models on code scraped from GitHub. But in this case, OpenAI appears to be building a dataset that includes not just lines of code, but also the human explanations behind them written in natural language.”
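A plausible shape for one such (code, explanation) training record, serialized as JSONL; this format is a guess at what the article describes, not anything OpenAI has published:

```python
import json

# Hypothetical record pairing code with the step-by-step natural-language
# explanation the article says contractors are producing.
record = {
    "task": "Write a function that checks whether a string is a palindrome.",
    "code": (
        "def is_palindrome(s: str) -> bool:\n"
        "    s = ''.join(c.lower() for c in s if c.isalnum())\n"
        "    return s == s[::-1]"
    ),
    "explanation": (
        "First normalize the string by lowercasing it and dropping "
        "non-alphanumeric characters, then compare it to its reverse; "
        "a palindrome reads the same both ways."
    ),
}
print(json.dumps(record))  # one such object per line in a JSONL file
```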
Promptify: a Python library that uses LLMs to solve classic NLP problems
• NLP tasks (NER, binary text classification, multi-label classification, etc.) in 2 lines of code, with no training data required; see the sketch after the links below
• Easily add one-shot, two-shot, or few-shot examples to the prompt
• Output is always provided as a Python object (e.g. list, dictionary) for easy parsing and filtering
• Custom examples and samples can be easily added to the prompt
• Optimized prompts to reduce OpenAI token costs
GITHUB: https://github.com/promptslab/Promptify
Examples: https://github.com/promptslab/Promptify/tree/main/examples
Demo: Colab
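A minimal NER call, adapted from the repo's README at the time; the class names (OpenAI, Prompter) and the bundled "ner.jinja" template are taken on trust from that README and may have changed since:

```python
from promptify import OpenAI, Prompter

sentence = (
    "The patient is a 93-year-old female with a medical history of "
    "chronic right hip pain and osteoporosis."
)

model = OpenAI("sk-...")        # your OpenAI API key
nlp_prompter = Prompter(model)

# No training data: the Jinja template turns the task into a prompt, and the
# LLM's completion is parsed back into a Python object (a list of entity dicts).
result = nlp_prompter.fit(
    "ner.jinja",
    domain="medical",
    text_input=sentence,
    labels=None,                # let the model propose entity types
)
print(result)
```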