vLLM + Open WebUI Deployment gemma-3-270m-it
1. Tutorial Introduction
gemma-3-270m-it is a lightweight instruction-tuned model in the Gemma 3 series released by Google on March 12, 2025. With 270M (270 million) parameters, it focuses on efficient dialogue interaction and lightweight deployment, requiring only about 1 GB of VRAM on a single GPU, which makes it suitable for edge devices and low-resource scenarios. It supports multi-turn dialogue and is fine-tuned specifically for everyday question answering and simple task instructions. The model focuses on text generation and understanding (it does not support multimodal inputs such as images) and offers a 32K-token context window, enabling it to handle long dialogues. For details, see the Gemma 3 Technical Report.
The compute resource used in this tutorial is a single RTX 4090 card.
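For readers who want a sense of how such a small model is loaded under the hood, the following is a minimal sketch using the vLLM Python API. The Hugging Face model ID and memory settings are assumptions for illustration and are not values taken from this tutorial's container image.

# Minimal sketch: loading gemma-3-270m-it with the vLLM Python API on a single GPU.
# The model ID and memory settings below are assumptions, not tutorial values.
from vllm import LLM, SamplingParams

llm = LLM(
    model="google/gemma-3-270m-it",   # assumed Hugging Face model ID
    max_model_len=32768,              # the model's 32K-token context window
    gpu_memory_utilization=0.3,       # the 270M model needs only a small share of a 4090's VRAM
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
outputs = llm.chat(messages, SamplingParams(temperature=0.7, max_tokens=128))
print(outputs[0].outputs[0].text)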
2. Project Examples

3. Operation Steps
1. After starting the container, click the API address to enter the Web interface
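If you would rather confirm from a script that the service has come up before opening it in a browser, a simple HTTP check like the sketch below works. The URL is a placeholder for the API address shown on your container page.

# Minimal sketch: confirming the deployed web service responds.
# Replace the placeholder URL with the API address shown for your container.
import requests

API_ADDRESS = "https://your-container-api-address.example.com"  # placeholder

resp = requests.get(API_ADDRESS, timeout=10)
print(resp.status_code)  # 200 means the Open WebUI front end is reachable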

2. After entering the webpage, you can start a conversation with the model
If "Model" is not displayed, it means the model is being initialized. Since the model is large, please wait about 2-3 minutes and refresh the page.
How to use

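Besides chatting in Open WebUI, you can also talk to the model directly through the OpenAI-compatible endpoint that vLLM serves. The following sketch uses the official openai Python client; the base URL and model name are assumptions and should be replaced with the values from your deployment (use the ID returned by /v1/models).

# Minimal sketch: a chat completion against the vLLM OpenAI-compatible endpoint.
# Base URL and model name are placeholders; vLLM does not check the API key by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

reply = client.chat.completions.create(
    model="gemma-3-270m-it",  # use the ID returned by /v1/models
    messages=[{"role": "user", "content": "What can a 270M-parameter model do well?"}],
    max_tokens=256,
)
print(reply.choices[0].message.content)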
4. Discussion
🖌️ If you come across a high-quality project, please leave us a message to recommend it! We have also set up a tutorial exchange group. Friends are welcome to scan the QR code and note [SD Tutorial] to join the group, discuss technical issues, and share application results ↓

Citation Information
The citation information for this project is as follows:
@article{gemma_2025,
  title={Gemma 3},
  url={https://arxiv.org/abs/2503.19786},
  publisher={Google DeepMind},
  author={Gemma Team},
  year={2025}
}