🚨🚨 Close call alert 🚨🚨 I almost got caught using ChatGPT to write a paper, but I learned from my mistakes. Check out my lessons learned below #chatgpt #academictricks
(1) Always include the following items in your prompt:
1. "Include scholarly articles"
2. The citation format (APA, Chicago, etc.)
3. A request to summarize all of the references at the end of the paper.
(2) 🚨🚨 Check the references: as of January 2023, #ChatGPT makes up the names and URLs of references, so if your prof checks them, you might get caught. In my experience 80% are FAKE 🚨🚨
1. To fix that, just go to Google Scholar and paste what the reference was about; you will get thousands of papers you can easily swap in. It takes 5 minutes max for all your references (see the sketch below for an automated version of this check).
(3) Here is an example of the prompt I use: "write a paper in APA format about vaccination in young adults in the minority population in central California and include scholarly references in-text and summarize them at the end under references"
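If you want to sanity-check a generated reference list programmatically rather than by hand, here is a minimal sketch using the public Crossref API as a stand-in for the manual Google Scholar step (Scholar has no official API). The reference strings are hypothetical examples of what ChatGPT might emit; anything that returns no close match is a candidate fake.

```python
import requests

# Hypothetical reference titles pulled from a ChatGPT-generated bibliography.
references = [
    "Vaccine hesitancy among young adults in minority populations",
    "Barriers to HPV vaccination in central California",
]

for ref in references:
    # Crossref's public /works endpoint does a fuzzy bibliographic search.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": ref, "rows": 1},
        timeout=10,
    )
    items = resp.json()["message"]["items"]
    if items:
        top = items[0]
        title = (top.get("title") or ["<no title>"])[0]
        print(f"QUERY: {ref}\n  best match: {title} (DOI: {top['DOI']})")
    else:
        print(f"QUERY: {ref}\n  no match found -- likely fabricated")
```

A real paper that exists will usually surface as the top Crossref hit; a fabricated one returns either nothing or an obviously unrelated title you can swap in a genuine source for.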
AI super-censors are coming. What can stop it?
“Google has been programming a “woke” AI system that can censor the internet. We speak to whistleblower Zach Vorhies about how #Google is limiting the information the AI system is exposed to, so that its breadth of information is limited to biased sources.”
VoiceGPT is here!
2023 is going to be the year of AI, from DALL-E to ChatGPT 🚀
VALL-E is an audio language model that exhibits in-context learning: it can synthesize high-quality, personalized speech from just a 3-second enrolled recording of an unseen speaker, used as an acoustic prompt.
https://valle-demo.github.io/
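Microsoft has not released VALL-E's code or weights, so what follows is only a conceptual sketch of the pipeline the paper describes: the 3-second enrollment clip is encoded into discrete tokens by a neural codec (the paper uses Meta's EnCodec, which is publicly available), a language model generates new codec tokens conditioned on the text plus the prompt tokens, and the codec decodes them back to audio. Only the EnCodec calls below are real; the `valle` model object and the `enrollment_3s.wav` file are hypothetical placeholders.

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

# Real: Meta's EnCodec neural codec (VALL-E builds on its discrete codes).
codec = EncodecModel.encodec_model_24khz()
codec.set_target_bandwidth(6.0)  # 8 codebooks per frame at 6 kbps

# Encode a 3-second enrollment clip of the target speaker into codec tokens.
wav, sr = torchaudio.load("enrollment_3s.wav")  # hypothetical file
wav = convert_audio(wav, sr, codec.sample_rate, codec.channels).unsqueeze(0)
with torch.no_grad():
    frames = codec.encode(wav)
prompt_tokens = torch.cat([codes for codes, _ in frames], dim=-1)  # [1, n_q, T]

# Hypothetical: an autoregressive LM generates codec tokens for the new text,
# conditioned on the acoustic prompt so the speaker's voice carries over.
# generated = valle.generate(text="Hello world", prompt=prompt_tokens)

# Real: EnCodec decodes codec tokens back into a waveform.
# audio = codec.decode([(generated, None)])
```

The key design point is that speech synthesis becomes a language-modeling problem over discrete audio tokens, which is what lets a 3-second prompt act like an in-context example.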
Step 5.
Automate these steps.
Step 6.
Get in the habit of always asking the AI first what to do, even for things you couldn't solve yourself.
Step 7.
Life is better than ever, society booming.
Step 8.
Start to realize that something feels a little off, but the AI always reassures you that everything is just fine. No need to worry for now.
Step 9.
Oops.
Some guys seriously think ChatGPT can decode LZ4 compression 😂
But is it as crazy as it sounds? The verdict is mixed.
Gwern adds:
“A fun idea, but it seems like it ought to backfire: a Transformer model is feedforward, so it only has so many layers to 'think'; if it's spending those layers decoding compressed strings (which is amazing if it can), it doesn't have any 'time' to think about whatever abstract form it decodes the inputs to, and it definitely doesn't have time to combine a de facto huge context window to reason over it.
Now, a more interesting idea might be to see if you can train a Transformer on compressed strings to make it native, and maybe tradeoff some width for depth. There's some slight analogies in image generation to training on Fourier components (like JPEG) rather than pixels. But AFAIK no one has experimented with heavy duty compression inputs for regular natural language generation.”
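For context on what such an input even looks like, here is a minimal sketch using the `lz4` Python package. It shows that an LZ4 payload is arbitrary binary, which a text tokenizer would shred into opaque fragments before the model could begin any "decoding":

```python
import lz4.frame

text = "The quick brown fox jumps over the lazy dog. " * 4
compressed = lz4.frame.compress(text.encode("utf-8"))

print(f"original:   {len(text)} bytes")
print(f"compressed: {len(compressed)} bytes")
# The compressed stream is raw binary, not text -- it starts with the LZ4
# frame magic b'\x04"M\x18'. Pasted into a chat window it gets mangled and
# tokenized into meaningless fragments, one reason claims of ChatGPT
# "decoding LZ4" in-context are dubious.
print(compressed[:16])

# Round-trip with the actual decompressor for comparison.
assert lz4.frame.decompress(compressed).decode("utf-8") == text
```

This lines up with Gwern's point: even granting the model could track the bit-level state, every layer spent undoing the compression is a layer not spent reasoning about the decompressed content.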
AI Humor Gap:
Do Androids Laugh at Electric Sheep?
Humor “Understanding” Benchmarks from The New Yorker Caption Contest
“We demonstrate that today’s vision and language models still cannot recognize caption relevance, evaluate, or explain The New Yorker Caption Contest as effectively as humans can”
https://arxiv.org/pdf/2209.06293.pdf
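If you want to poke at the benchmark yourself, the authors released the data on the Hugging Face Hub. A minimal sketch follows; note the dataset name, the `matching` configuration, and the field names are assumptions from memory and may differ from the published version:

```python
from datasets import load_dataset

# The caption-contest benchmark has several tasks; "matching" asks which of
# five candidate captions belongs to a given cartoon. Dataset name and
# fields below are assumed, not verified against the current Hub release.
ds = load_dataset("jmhessel/newyorker_caption_contest", "matching")

example = ds["train"][0]
print(example["image_description"])  # textual description of the cartoon
print(example["caption_choices"])    # candidate captions
print(example["label"])              # which caption is the real winner
```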