Medical Large Language Models
- LLM
- AlpaCare: Instruction-tuned Large Language Models for Medical Application.
- BianQue: Balancing the Questioning and Suggestion Ability of Health LLMs with Multi-turn Health Conversations Polished by ChatGPT
- Qilin-Med: Multi-stage Knowledge Injection Advanced Medical Large Language Model
- Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue
- LVM
LLM
FROM BEGINNER TO EXPERT: MODELING MEDICAL KNOWLEDGE INTO GENERAL LLMS
Not open-sourced.
Proposes a three-stage training approach:
- Medical-domain post-training
- General QA fine-tuning
- Enhancing downstream scenario tasks via C-play
Stage-1 training data:
Taiyi: A Bilingual Fine-Tuned Large Language Model for Diverse Biomedical Tasks. 2023
Proposes a two-stage SFT scheme: roughly, first learn domain knowledge from a large, diverse but lower-quality corpus, then SFT on medical data.
Open-sourced the list of datasets used.
AlpaCare: Instruction-tuned Large Language Models for Medical Application.
Proposes a self-instruct-style method for generating medical instruction data and open-sources 52k SFT examples; deduplicates with ROUGE-L.
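The ROUGE-L deduplication step can be sketched as below. This is a minimal illustration, not the paper's implementation: the 0.7 threshold, whitespace tokenization, and greedy keep-first policy are assumptions.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists (DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l_f(cand, ref):
    """ROUGE-L F-score between two strings, tokenized by whitespace."""
    ct, rt = cand.split(), ref.split()
    lcs = lcs_len(ct, rt)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(ct), lcs / len(rt)
    return 2 * p * r / (p + r)

def dedup(instructions, threshold=0.7):
    """Greedily keep an instruction only if it is not too similar
    (ROUGE-L F >= threshold) to any instruction already kept."""
    kept = []
    for ins in instructions:
        if all(rouge_l_f(ins, k) < threshold for k in kept):
            kept.append(ins)
    return kept
```

A near-duplicate like "list common cold symptoms please" would be filtered against an existing "list common cold symptoms", while an unrelated instruction passes through.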
BianQue: Balancing the Questioning and Suggestion Ability of Health LLMs with Multi-turn Health Conversations Polished by ChatGPT
Advocates multi-turn questioning (CoQ); takes the open Q&A data released by Haodf (好大夫在线), cleans it, and then polishes it with ChatGPT.
Qilin-Med: Multi-stage Knowledge Injection Advanced Medical Large Language Model
Three stages: CPT + SFT + DPO; publishes the list of datasets used.
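For context on the DPO stage: DPO trains directly on preference pairs by rewarding the policy's log-probability margin over a frozen reference model, with no separate reward model. A minimal single-pair sketch in plain Python (the function name and beta value are illustrative, not from the paper):

```python
import math

def dpo_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Arguments are the summed log-probabilities of the chosen/rejected
    responses under the policy being trained and the frozen reference model.
    """
    # implicit reward margin, scaled by beta
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    # -log(sigmoid(margin)): small when the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At a margin of zero the loss is log 2; it decreases as the policy shifts probability mass toward the chosen response relative to the reference.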
Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue
Open-sources the data list and runs the full CPT + SFT + PPO pipeline.
LVM
LLaVA-Med
https://arxiv.org/pdf/2306.00890.pdf
Qilin-Med-VL: Towards Chinese Large Vision-Language Model for General Healthcare
https://arxiv.org/pdf/2310.17956.pdf
BiomedCLIP
https://arxiv.org/pdf/2303.00915.pdf
RadFM
https://arxiv.org/abs/2308.02463
Visual Med-Alpaca
https://cambridgeltl.github.io/visual-med-alpaca/
OphGLM
https://arxiv.org/abs/2306.12174