Can large language models reason about medical questions? We investigated whether GPT-3.5 could answer and reason about challenging medical questions (e.g., USMLE) using Chain-of-Thought (CoT) prompting.
TL;DR: Yes, close to the human level.
Paper: https://arxiv.org/abs/2207.08143
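For readers unfamiliar with Chain-of-Thought prompting, here is a minimal, hypothetical sketch (in Python) of how a zero-shot CoT prompt for a USMLE-style multiple-choice question can be assembled. The clinical vignette is invented for illustration; this is not the paper's exact prompt or evaluation pipeline.

```python
# Minimal sketch of a zero-shot chain-of-thought (CoT) prompt for a
# USMLE-style multiple-choice question. The question below is invented
# for illustration and is NOT taken from the paper or from MedQA-USMLE.

QUESTION = (
    "A 54-year-old man presents with crushing substernal chest pain radiating "
    "to the left arm, diaphoresis, and nausea for the past 30 minutes. "
    "Which of the following is the most appropriate initial step in management?"
)
OPTIONS = {
    "A": "Oral aspirin",
    "B": "Exercise stress test",
    "C": "Elective coronary angiography in one week",
    "D": "Reassurance and discharge",
}

def build_cot_prompt(question: str, options: dict[str, str]) -> str:
    """Assemble a zero-shot CoT prompt: the question, the answer options, and
    the 'Let's think step by step' trigger that elicits intermediate reasoning
    before the final answer letter."""
    option_lines = "\n".join(f"({letter}) {text}" for letter, text in options.items())
    return (
        f"Question: {question}\n"
        f"{option_lines}\n\n"
        "Answer: Let's think step by step."
    )

if __name__ == "__main__":
    prompt = build_cot_prompt(QUESTION, OPTIONS)
    print(prompt)
    # Sending the prompt to GPT-3.5 (e.g., via the OpenAI API) and parsing the
    # final answer letter out of the generated reasoning is omitted here.
```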
PubMedGPT LLM passes US Medical Licensing Exam (MedQA-USMLE) with more than 50% correct answers
https://crfm.stanford.edu/2022/12/15/pubmedgpt.html
Language models generalize beyond natural proteins
Reaching post-evolutionary biology from evolutionary learning: language models trained on millions of natural sequences can be used generatively to make completely de novo proteins that are viable in the wet lab.
https://www.biorxiv.org/content/10.1101/2022.12.21.521521v1