Parameter-Efficient Fine-Tuning on BoolQ
Evaluation metric
Accuracy (%)
Results
Performance of each model on this benchmark.
| Model | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| LLaMA2-7b | 82.63 | QLoRA: Efficient Finetuning of Quantized LLMs | |
| LLaMA2-7b | 82.63 | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs | |
| LLaMA2-7b | 81.93 | DoRA: Weight-Decomposed Low-Rank Adaptation | |
| LLaMA2-7b | 80.28 | LoRA: Low-Rank Adaptation of Large Language Models | |
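For reference, the sketch below shows how one of the listed methods (LoRA) can be applied to LLaMA2-7b on BoolQ and how the accuracy metric is computed. It uses the Hugging Face `peft`, `transformers`, and `datasets` libraries; the prompt format, LoRA hyperparameters, and evaluation subset size are illustrative assumptions, not the settings used in the papers above.

```python
# Minimal sketch: LoRA adapters on LLaMA2-7b, evaluated by BoolQ answer accuracy.
# Hyperparameters and the prompt template are illustrative assumptions only.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Wrap the base model with low-rank adapters on the attention projections;
# only the adapter weights are trainable.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# BoolQ: yes/no reading-comprehension questions with a passage.
boolq = load_dataset("google/boolq")

def prompt(example):
    return (f"Passage: {example['passage']}\n"
            f"Question: {example['question']}\n"
            f"Answer (yes or no):")

# ... fine-tune `model` on boolq["train"] with a standard causal-LM loss ...

# Accuracy = fraction of validation questions whose generated yes/no answer
# matches the gold label (a subset is used here to keep the sketch short).
@torch.no_grad()
def boolq_accuracy(model, split, n=100):
    correct = 0
    for ex in split.select(range(n)):
        inputs = tokenizer(prompt(ex), return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=3, do_sample=False)
        text = tokenizer.decode(
            out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
        ).strip().lower()
        pred = text.startswith("yes")
        correct += int(pred == ex["answer"])
    return 100.0 * correct / n

print(f"BoolQ accuracy: {boolq_accuracy(model, boolq['validation']):.2f}%")
```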