Code Documentation Generation On 2
Evaluation Metric
Smoothed BLEU-4
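The scores below are sentence-level BLEU-4 with smoothing, which avoids the zero scores that plain BLEU-4 assigns to short generated docstrings with no 4-gram overlap. As a minimal sketch of the idea (the benchmark's official scoring script may differ in its tokenization and exact smoothing variant), the following applies +1 smoothing to the higher-order n-gram precisions, in the style of Lin & Och (2004):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu4(reference: str, candidate: str) -> float:
    """Sentence-level BLEU-4 with +1 smoothing on n-gram precisions for n > 1.

    A simplified illustration; official evaluation scripts may tokenize
    differently and use a different smoothing scheme.
    """
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = ngrams(cand, n)
        ref_counts = ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if n == 1:
            p = overlap / total if total else 0.0
        else:
            # +1 smoothing keeps the precision nonzero for short candidates
            p = (overlap + 1) / (total + 1)
        precisions.append(p)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A perfect match scores 1.0, while a candidate sharing no unigrams with the reference scores 0.0; partial overlaps fall in between, which is the range (reported ×100) shown in the table below.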
Evaluation Results
Performance of each model on this benchmark:
| Model | Smoothed BLEU-4 | Paper Title | Repository |
|---|---|---|---|
| CodeBERT (MLM) | 26.79 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeBERT (MLM+RTD) | 26.66 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| pre-train w/ code only | 26.39 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| RoBERTa | 26.09 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeBERT (RTD) | 26.02 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| seq2seq | 23.48 | CodeBERT: A Pre-Trained Model for Programming and Natural Languages | |
| CodeTrans-TF-Large | 19.54 | CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | |