QCalEval Quantum Calibration Graph Understanding Dataset
QCalEval, released by NVIDIA in 2026, is a vision-language dataset for graph comprehension in quantum computing experiments. It evaluates the ability of vision-language models (VLMs) to interpret, classify, and reason about the results of quantum calibration experiments. It is used in research on VLMs and scientific image understanding, particularly for benchmarking models on automated analysis of quantum computing experiments, evaluating scientific graph interpretation, studying multimodal in-context learning, and comparing performance on structured scientific tasks under zero-shot and few-shot conditions. The dataset contains 309 two-dimensional scientific images in PNG format, 243 benchmark entries, and 236 few-shot benchmark entries, covering 22 experiment series and 87 scene types.
Data composition
- Two-dimensional scientific images in PNG format (e.g., scatter plots, line charts, and heatmaps).
- Benchmark entries: each entry consists of 6 question-answer pairs covering six aspects (visual description, result classification, scientific reasoning, fit reliability assessment, parameter extraction, and calibration diagnosis), for a total of 1,458 QA items.
- Few-shot entries: 3 question-answer pairs per entry, for a total of 708 QA items.
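To make the composition above concrete, here is a minimal sketch of what a single benchmark entry might look like and how its counts line up with the dataset totals. The field names (`image`, `experiment_series`, `qa_pairs`, `aspect`) and the example filename are assumptions for illustration only; consult the official release for the actual schema.

```python
# The six evaluation aspects listed in the data composition.
ASPECTS = [
    "visual description",
    "result classification",
    "scientific reasoning",
    "fit reliability assessment",
    "parameter extraction",
    "calibration diagnosis",
]

def validate_entry(entry: dict) -> bool:
    """Check that a benchmark entry carries exactly one QA pair per aspect."""
    qa_aspects = [qa["aspect"] for qa in entry["qa_pairs"]]
    return sorted(qa_aspects) == sorted(ASPECTS)

# Hypothetical entry layout -- field names are illustrative assumptions.
example_entry = {
    "image": "example_plot.png",      # hypothetical PNG filename
    "experiment_series": "rabi",      # hypothetical series label
    "qa_pairs": [
        {"aspect": a, "question": "...", "answer": "..."} for a in ASPECTS
    ],
}

assert validate_entry(example_entry)
# Counts reported in the description check out:
assert 243 * 6 == 1458   # benchmark entries x aspects = benchmark QA items
assert 236 * 3 == 708    # few-shot entries x QA pairs = few-shot QA items
```

A validation pass like this is a common first step when loading a new benchmark, since it catches schema mismatches before any model evaluation runs.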
Citation
@misc{cao2026qcaleval,
  title  = {QCalEval: Benchmarking Vision-Language Models for Quantum Calibration Plot Understanding},
  author = {Cao, Shuxiang and Zhang, Zijian and others},
  year   = {2026},
  url    = {https://research.nvidia.com/publication/2026-04_qcaleval-benchmarking-vision-language-models-quantum-calibration-plot},
}