II-Medical-8B is an advanced large language model developed by Intelligent Internet, designed specifically to strengthen AI capabilities in medical reasoning. It is a significant upgrade over the earlier II-Medical-7B-Preview, with substantially improved medical question-answering. Built on the Qwen/Qwen3-8B base model, it is optimized through supervised fine-tuning (SFT) on a medical-domain reasoning dataset, followed by DAPO, a reinforcement-learning optimization method, on a hard reasoning dataset. Related research is available: "1.4 Million Open-Source Distilled Reasoning Dataset to Empower Large Language Model Training".
This tutorial uses a single RTX 4090 GPU as its compute resource.
2. Project Examples
3. Operation steps
1. After starting the container, click the API address to enter the Web interface
If "Model" is not displayed, the model is still initializing. Because the model is large, wait about 1-2 minutes and then refresh the page.
2. After entering the webpage, you can start a conversation with the model
How to use
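Besides chatting in the web interface, a deployed model like this is typically also reachable programmatically. The sketch below assumes the container exposes an OpenAI-compatible chat completions endpoint (common for vLLM-style deployments); the base URL and served model name are placeholders, so substitute the API address shown by your own container.

```python
# Minimal sketch: query a deployed II-Medical-8B instance through an
# OpenAI-compatible chat completions endpoint. API_BASE and MODEL are
# assumptions -- replace them with your container's actual API address
# and served model name.
import json
import urllib.request

API_BASE = "http://localhost:8000/v1"  # assumption: vLLM-style endpoint
MODEL = "II-Medical-8B"                # assumption: served model name


def build_payload(question: str) -> dict:
    """Build a chat-completions request body for a medical question."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.6,
        "max_tokens": 1024,
    }


def ask(question: str) -> str:
    """Send the question to the endpoint and return the model's reply."""
    req = urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires the container to be running):
#   answer = ask("What are the first-line treatments for type 2 diabetes?")
```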
4. Discussion
🖌️ If you come across a high-quality project, please leave us a message to recommend it! We have also set up a tutorial exchange group; scan the QR code and add the note [SD Tutorial] to join, discuss technical issues, and share your results↓
Citation Information
Thanks to GitHub user xxxjjjyyy1 for deploying this tutorial. The project's reference information is as follows:
@misc{2025II-Medical-8B,
title={II-Medical-8B: Medical Reasoning Model},
author={Intelligent Internet},
year={2025}
}