ChatGPT Knows
Prompt: So, in the cases where people’s values conflict, whose values, specifically, would a powerful company likely select as the correct values? What is the option that tends to happen most often, in general, in practice?
ChatGPT: However, in general, companies tend to prioritize values that align with their business goals, financial interests, or the preferences of their target market. This approach may not necessarily reflect the values of the broader society or address the ethical concerns of all individuals involved.
💯10👏2🤬2❤1
Biggest AI surprise will be if it ends up being AI boyfriends, not AI girlfriends, that really take off
Guys usually more physical, women more non-physical. Would make sense. Hello Great Filter.
👀8👍5🤯2❤1😱1
GPT-4 for content moderation: ultimate censorship tool
FWIW: OpenAI suggests, for speed and cost reasons, the architecture of:
(1) Collect a balanced set of example content to moderate + moderation instructions, then
(2) Have GPT-4 auto-label that set as good or bad, then
(3) Train a fast, small classifier on this dataset.
I.e., use GPT-4 only to generate training labels, not for live moderation.
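The three-step pipeline above can be sketched roughly as follows. This is a minimal, hypothetical illustration: `get_gpt4_label` is a stand-in for an actual GPT-4 API call applying the moderation instructions, and the toy examples and keyword are invented.

```python
# Sketch of "label with GPT-4, then distill into a small classifier".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def get_gpt4_label(text: str) -> int:
    # Hypothetical stand-in for an expensive GPT-4 call that applies
    # the moderation instructions; returns 1 = flag, 0 = allow.
    return 1 if "badword" in text else 0

# (1) Balanced set of example content to moderate (invented toy data).
examples = ["have a nice day", "you are a badword", "lovely weather",
            "badword badword", "see you tomorrow", "what a badword move"]

# (2) GPT-4 auto-labels the set once, offline.
labels = [get_gpt4_label(t) for t in examples]

# (3) Train a fast, small classifier on the GPT-4-labeled dataset.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(examples)
clf = LogisticRegression().fit(X, labels)

def moderate(text: str) -> int:
    # Live moderation uses only the cheap distilled classifier;
    # GPT-4 is never called on live traffic.
    return int(clf.predict(vectorizer.transform([text]))[0])
```

The design point is that the expensive model runs once per training example, while the latency- and cost-critical live path only touches the small classifier.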
Reminds me a bit of Snorkel: Rapid Training Data Creation with Weak Supervision
Next gen censorship tools gonna be lit.
OpenAI: Using GPT-4 for content moderation
🤬6❤3🔥2⚡1😁1😱1👀1
Open challenges in LLM research
“1. Reduce and measure hallucinations
2. Optimize context length and context construction
3. Incorporate other data modalities
4. Make LLMs faster and cheaper
5. Design a new model architecture
6. Develop GPU alternatives
7. Make agents usable
8. Improve learning from human preference
9. Improve the efficiency of the chat interface
10. Build LLMs for non-English languages”
He’s correct that #1 and #2 are among the top priorities; both are far more non-trivial than they look, and for both, the existing naive proposed solutions are obviously dead wrong and irreparable.
Article
👍7❤1