Parameter Efficient Fine Tuning On Winogrande
Metrics
Accuracy (%)
Results
Performance results of various models on this benchmark
| Model | Accuracy (%) | Paper Title |
|---|---|---|
| LLaMA2-7b | 70.80 | GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs |
| LLaMA2-7b | 70.09 | DoRA: Weight-Decomposed Low-Rank Adaptation |
| LLaMA2-7b | 69.85 | LoRA: Low-Rank Adaptation of Large Language Models |
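All three entries adapt a frozen LLaMA2-7b base model by training only a small set of additional parameters (low-rank adapters in LoRA and DoRA, updates to salient weights in GIFT-SW) and report two-option multiple-choice accuracy on Winogrande. As a rough illustration of this setup, the sketch below attaches LoRA adapters with the Hugging Face `peft` library and scores Winogrande by picking the completion with the higher log-likelihood. The checkpoint name, LoRA hyperparameters, target modules, and scoring heuristic are assumptions for illustration only; they do not reproduce the papers' configurations or the numbers in the table above.

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA) and Winogrande scoring.
# Checkpoint, LoRA hyperparameters, and the likelihood-based scoring heuristic are
# illustrative assumptions, not the setup used by any of the cited papers.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumed (gated) checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the frozen base model with trainable low-rank adapters.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable


def sentence_score(text: str) -> float:
    """Approximate total log-likelihood: negative mean NLL times sequence length."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item() * enc["input_ids"].size(1)


def winogrande_accuracy(split: str = "validation") -> float:
    """Fill the blank with each option and pick the sentence the model finds likelier."""
    data = load_dataset("allenai/winogrande", "winogrande_xl", split=split)
    correct = 0
    for ex in data:
        candidates = [ex["sentence"].replace("_", opt) for opt in (ex["option1"], ex["option2"])]
        pred = max((1, 2), key=lambda i: sentence_score(candidates[i - 1]))
        correct += int(pred == int(ex["answer"]))
    return 100.0 * correct / len(data)
```

In practice the adapter-wrapped model would be fine-tuned before evaluation; the scoring function above only shows how an accuracy figure of the kind reported in this table could be computed.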